What is quantum computing and how will it change AI?
As we’re now long past the point of being impressed by how small our technology can be, most of us take Moore’s law for granted – that chip performance doubles every two years or so, and that chips themselves decrease in both cost and size. We now seem to be reaching the physical limit of that kind of exponential change, what with transistors on chips now approaching the size of single atoms.
Combine that with the fact that people create 2.5 exabytes of data per day, on track to balloon the amount of global data to 44 zettabytes (or 44 trillion gigabytes) by 2020, and we have nearly perfect conditions for a technological revolution. Classical computing simply won’t be able to keep up. Quantum computing, however, will not only process all of it with ease, it will solve problems we can’t yet imagine with solutions we couldn’t conceive ourselves.
Quantum computing explained
First, it’s important to note that quantum computing is not going to take the place of classical computing because they are two fundamentally different things that serve different purposes. Imagine them like horses and jumbo jets. It is, of course, much easier to cross oceans in a jet than on a horse, but much harder to herd cattle. Similarly, quantum computing will provide all kinds of new opportunities the way that planes have, but it would be overkill and not really that much faster to use them for many of the things for which we use classical computers.
In classical computing, bits work in binary language (1s and 0s) and can be imagined as on/off switches. For a particular input, each bit becomes either 1 or 0, on or off, which then generates a corresponding output. In quantum computing, each quantum bit, or qubit, is some measure of both 1 and 0 simultaneously (in a state dubbed “superposition”).
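The difference can be sketched in a few lines of ordinary code – a toy classical simulation, not real quantum hardware, and keeping the amplitudes real (rather than complex) is a simplification:

```python
import random

# A qubit's state is a pair of amplitudes (alpha, beta); the squares of
# the amplitudes give the probabilities of measuring 0 or 1, and must sum to 1.
def measure(alpha, beta):
    """Collapse a qubit with amplitudes (alpha, beta) to a classical 0 or 1."""
    return 0 if random.random() < alpha ** 2 else 1

# A classical bit is definitely 0 or 1. A qubit in equal superposition has
# amplitude 1/sqrt(2) on each outcome, so measuring it is a 50/50 coin flip.
half = 2 ** -0.5
samples = [measure(half, half) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 0.5
```

Until it is measured, the qubit really is "some measure of both" – the amplitudes, not a hidden definite bit, are the state.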
On an individual level this may just sound mildly perplexing and not too impressive, but qubits work together exponentially. Since each qubit exists in two states at once, a system of 50 qubits (the largest currently in existence, created by IBM) could theoretically perform 2^50 – or over 1 quadrillion – operations, and in less time while using less energy than a system of 50 bits. Following this logic, a system of just 300 qubits could represent more states than there are atoms in the known universe. Keep in mind that eight bits equal one byte, and even though a standard smartphone has 16 billion of those, it runs out of space whenever you try to take a new video of your dog.
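The exponential bookkeeping is easy to check for yourself: describing an n-qubit superposition classically takes 2^n amplitudes, one per possible bit string.

```python
# Number of amplitudes needed to describe an n-qubit state classically: 2**n.
# 50 qubits already exceed a quadrillion; 300 qubits are beyond astronomical.
for n in (8, 50, 300):
    print(f"{n} qubits -> {2 ** n:.3e} amplitudes")
```

This is why simulating even a few dozen qubits strains the largest classical supercomputers.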
Quantum computing will solve problems in a matter of seconds that would take today’s most powerful supercomputers centuries. Currently, the best-known example of quantum computing capability is that it can quickly factor enormous numbers, which poses a big problem for IT security. Much of today’s encryption (RSA, for example) rests on the idea that it would take a classical computer practically forever to find the prime factors of a huge number without the key, but a quantum computer running Shor’s algorithm could do it in minutes thanks to its exponentially sped-up processing power.
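The asymmetry that this kind of encryption relies on can be felt even at toy scale: multiplying two primes is instant, but recovering them by brute force means scanning every candidate factor. (The number 3233 = 53 × 61 is an invented toy example; real RSA moduli run to hundreds of digits, putting this scan far out of classical reach.)

```python
def trial_division(n):
    """Find the smallest prime factor of n by brute-force scanning."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f  # found a factor and its cofactor
        f += 1
    return n, 1  # n itself is prime

# Easy to build, hard to reverse: 53 * 61 is instant, but recovering
# the factors of 3233 requires testing candidates one by one.
print(trial_division(3233))  # (53, 61)
```

The scan grows with the size of the smallest factor, which for well-chosen RSA keys is astronomically large; Shor’s algorithm sidesteps the scan entirely.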
This could be bad news for everyone’s personal data; researchers are keenly aware of the dangers of mass decryption and are working on solutions, such as creating new kinds of codes that don’t involve factorization.
How will quantum computing affect artificial intelligence?
Once it becomes viable, quantum artificial intelligence will be light-years ahead of our own intelligence and have applications in everything. It could detect financial market instabilities, model medical proteins, plan military missions, and much more. Machine learning in a quantum system will be able to recognize patterns in huge amounts of data that would be extremely difficult and time-consuming for classical computers to process, and it would function more like an actual human brain, taking inspiration from the basic tenets of biological neural pathways.
The primary goal of neural networks, quantum or otherwise, is to recognize patterns. Each neuron, like a bit, can be a simple on/off switch: it tracks multiple other neurons and switches on if enough of them are also on. The first layer of neurons takes in input (like image pixels), then a secondary layer analyzes the input in multiple combinations (finding edges, geometric shapes, and so on), and a final layer produces output (a description of the image). An ongoing process of trial and error constantly adjusts the neural connections, so that after around 10,000 example photos of lions, tigers, and bears, the system knows what’s what. Such systems can even move beyond recognition to tasks like reconstructing a missing ear in a photo.
Current limitations of quantum computing
Just to temper excitement (or fear) with reality, quantum computers currently resemble computers from the 1950s that took up entire rooms. They are also hugely expensive, extremely unstable (so far, the longest sustained quantum state lasted 90 microseconds), and the flipside to exponential processing power is that there are exponential ways for things to go wrong. Also, putting classical data into a quantum state and then retrieving the output are both fantastically difficult and tend to collapse the system before anything useful happens.
However, Google engineers successfully simulated a hydrogen molecule in 2016, and the company ambitiously aims to commercialize quantum technologies by 2022. Whether they manage that or not, quantum computing will likely be a big deal soon.