Computing at the Edge of Reality – Sponsor Content

Quantum computers are machines that calculate by exploiting quantum mechanics, the branch of physics that describes reality at its most fundamental level. In the quantum realm, nature operates according to principles that have no analogue in our daily experience: Particles behave as waves, occupy many positions at once, and can remain correlated with particles on the other side of the universe. For humans used to living in a mechanistic world of cause and effect, the quantum world is strange and unsettling. Even Albert Einstein, famously, could never come to terms with the weirdness of quantum mechanics.

Although quantum physics has been studied by some of the greatest minds in science for more than a century, it wasn’t until the 1980s that anyone began seriously thinking about how to apply its insights to computing. The basic idea is that rather than trying to translate inherently quantum aspects of reality into the binary logic of classical computers, we can directly harness the quantum mechanical properties of matter to do computations. This new breed of computer would leverage phenomena like superposition (the ability of particles to be in two states simultaneously) and entanglement (the ability of particles to remain correlated with one another regardless of physical proximity) to do computations that would be practically impossible for a conventional computer.

It’s an ambitious dream, and one that is still in the making. A decade passed between the early 1980s, when the famed physicist Richard Feynman proposed the idea of quantum computing, and the mid-1990s, when the mathematician Peter Shor described a useful quantum algorithm that could outperform a classical computer—at least in theory. Shor’s algorithm described a way to use quantum computers to factor integers, which could, in principle, be used to break the 2048-bit encryption standards that the modern internet depends on. It was a major moment in the history of quantum computing. But 30 years later, quantum computers still don’t have nearly enough qubits to make that happen. In fact, it took nearly another decade after Shor published his algorithm to experimentally implement it on a quantum computer, which was only able to factor the number 15 into its prime factors—a calculation so simple that most 10-year-old children could do it by hand.
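To see what Shor’s insight looks like in code, here is a minimal classical sketch in Python of the number-theoretic reduction at the heart of his algorithm, applied to that same number 15. The quantum speedup comes entirely from finding the order r efficiently via the quantum Fourier transform; this toy version finds r by brute force, which is only feasible for tiny numbers.

```python
# Classical sketch of the reduction behind Shor's algorithm: factoring N
# reduces to finding the order r of a number a modulo N. A quantum computer
# finds r exponentially faster; here we brute-force it for illustration.
from math import gcd

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 such that a**r % n == 1 (classical brute force)."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_reduction(n: int, a: int) -> tuple[int, int]:
    """Turn an order-finding result into factors of n (some choices of a fail)."""
    assert gcd(a, n) == 1, "a must be coprime to n"
    r = find_order(a, n)
    assert r % 2 == 0, "odd order: retry with a different a"
    x = pow(a, r // 2, n)
    return gcd(x - 1, n), gcd(x + 1, n)

print(shor_reduction(15, 7))  # -> (3, 5), the prime factors of 15
```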

Since then, however, progress toward a universal quantum computer has been accelerating, and researchers are increasingly thinking about how these machines might be usefully applied in fields ranging from theoretical physics to the development of pharmaceutical drugs. Creating realistic quantum mechanical models of all but the simplest molecules remains challenging or even impossible for classical computers, which makes it difficult to study and develop new classes of potentially life-saving drugs. At the molecular and sub-molecular levels, these compounds are subject to quantum mechanical effects that are well beyond the simulation capabilities of today’s most powerful supercomputers, but should—in principle—be a breeze for a computer that uses quantum phenomena to do its calculations.

“There are elements of nature that are beyond even the best supercomputers,” says Charina Chou, the chief operating officer of Google’s Quantum AI lab. “Nature isn’t classical, and you’re never going to be able to compute exactly with a classical computer, for example, how every molecule behaves—and our entire world is made up of molecules. If we could fully understand them and use these insights to design new molecules, that is an enormous advantage of a quantum computer.”

The same is true for the development of advanced materials, which also requires a deep understanding of molecular and subatomic properties. AI running on classical computers is already helping accelerate the discovery of new materials for a broad range of applications in agriculture, aerospace, and industrial manufacturing. The hope is that quantum computers could advance this capability by providing increasingly high-fidelity subatomic models of these materials.

“The simulation of systems where quantum effects are important is of rather significant economic relevance because many systems fall into this category,” says Hartmut Neven, vice president of engineering for Google’s Quantum AI. “Want to design a better fusion reactor? There’s plenty of quantum problems there. Want to make lighter, faster, and more robust batteries? There’s plenty of quantum chemistry problems there, too. Whenever engineering involves quantum effects, there is an application for a quantum computer.”

Realizing this vision will require tackling staggering technical challenges, including the construction of massive ultracold refrigerators for quantum hardware and the near-perfect isolation of quantum computers from the outside world, since environmental disturbances destroy fragile quantum states in a process known as decoherence. For now, most of the promises of quantum computing—such as accelerating the discovery of new drugs and materials or unlocking new insights into physics, biology, and chemistry—are still theoretical. But by bridging the gap between the fields of quantum computing and artificial intelligence, it may be possible to shorten the timeline to building a bona fide universal quantum computer that will open new frontiers in biology, physics, chemistry, and more.

Alien Math

The past century of research on quantum mechanics has shown that if Newtonian physics—the world of billiard balls and planetary orbits—operates like a clock, quantum mechanics prefers dice. Countless experiments have demonstrated that it’s impossible to predict quantum phenomena, such as how a particle will scatter or when a radioactive atom will decay, with perfect accuracy. We can only give probabilities for a certain outcome.

“A lot of people think that quantum mechanics is really complicated and involves waves being particles, particles being waves, spooky action at a distance, and all that,” says Scott Aaronson, a theoretical computer scientist and the founding director of the Quantum Information Center at the University of Texas at Austin. “But really quantum mechanics is just one change to the rules of probability that we have no experience with. But once we learn that rule, everything else is just a logical consequence of that change.”

Probability is ultimately an exercise in quantifying uncertainty, and there are well-established rules for adapting probabilities to new information. All probabilities exist on a spectrum from zero to one, where zero is complete certainty that something won’t happen and one is complete certainty that something will happen. But unlike the probabilities that determine your fortunes at the casino, the quantities that govern quantum outcomes—called amplitudes—are complex numbers: they can be negative, and it is their squared magnitudes that give the probabilities of measurement outcomes. If you strip away all the jargon, the fundamental insight of quantum mechanics is that nature operates according to alien rules of probability at the base layer of reality.

This small tweak in the rules of probability has profound implications for our understanding of reality—and our ability to harness it for computing. While classical computers use bits (0 or 1), quantum computers use qubits. In addition to zeros and ones, qubits can exist as some combination of zero and one simultaneously—a phenomenon known as superposition. This allows quantum computers to represent and process exponentially more information than classical computers with the same number of bits.

This exponential growth in computing power is the reason quantum computers should, in principle, be able to dramatically speed up the time it takes to compute the answer to certain types of problems. But harnessing this power presents significant challenges.
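To make these ideas concrete, here is a minimal statevector sketch in plain NumPy (an illustrative toy, not any vendor’s quantum SDK) showing a qubit placed in superposition, amplitudes canceling through interference, and the exponential cost of tracking qubits classically.

```python
# A toy single-qubit statevector, simulated with plain NumPy.
import numpy as np

ket0 = np.array([1.0, 0.0])                   # |0>: all amplitude on outcome 0
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

psi = H @ ket0                 # equal superposition of 0 and 1
print(psi)                     # amplitudes: [0.707..., 0.707...]
print(np.abs(psi) ** 2)        # outcome probabilities: [0.5, 0.5]

# Unlike ordinary probabilities, amplitudes can be negative and cancel:
# a second Hadamard interferes the branches and returns the qubit to |0>.
print(H @ psi)                 # ~[1.0, 0.0]

# Tracking n qubits classically takes 2**n amplitudes, which is why even
# modest qubit counts overwhelm conventional memory.
for n in (10, 30, 50):
    print(f"{n} qubits -> {2 ** n:,} amplitudes")
```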

Superposition is crucial to a quantum computer’s power, but it’s fragile. Measuring a qubit collapses its superposition, making it behave like a classical bit (i.e., it is either a 0 or a 1). This challenge requires careful isolation of qubits from their environment until computation is complete. “Superposition,” says Aaronson, “is something that particles like to do in private when no one is watching.”

It’s a phenomenon that the physicist Erwin Schrödinger famously captured in a thought experiment in which he imagined putting a cat in a box that contains poison and shutting the lid. Until the lid is opened and the cat is observed, it is impossible to determine whether the cat is still alive or has eaten the poison and died. The cat is in a superposition of dead and alive; the only way to know for sure is to look in the box and observe the cat’s state, at which point the cat is definitely in one of the two states: dead or alive.

The problem is that for a quantum computer to be useful, researchers need to be able to measure its output; they need to open the box and look at the cat. But measuring qubits directly will destroy their superposition and any advantages offered by a quantum computer. The key is to ensure the measurement happens only when the computation is finished. In the meantime, the qubits need to remain as isolated as possible from their external environment so it doesn’t destroy their superposition and entanglement.
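What “opening the box” amounts to mathematically can be shown in a few lines. The snippet below (again plain NumPy, purely illustrative) samples a measurement outcome according to the Born rule and then collapses the state to match, after which the superposition is gone for good.

```python
# Measuring a qubit: sample an outcome, then collapse the state.
import numpy as np

rng = np.random.default_rng(0)
psi = np.array([1.0, 1.0]) / np.sqrt(2)   # qubit in equal superposition

probs = np.abs(psi) ** 2                  # Born rule: probability = |amplitude|^2
outcome = rng.choice([0, 1], p=probs)     # the act of looking in the box

# After measurement the state is a definite 0 or 1; repeating the
# measurement now returns the same answer every time.
psi = np.eye(2)[outcome]
print("measured:", outcome, "-> post-measurement state:", psi)
```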

Implementing this in practice is tricky. Many quantum computers, such as Google’s Sycamore, operate at near absolute-zero temperatures to achieve superconductivity and shield qubits from external interference. However, perfect isolation remains elusive, and noise-induced errors persist as a major hurdle in quantum computing.

Today, quantum computing is considered to be in its “noisy intermediate-scale quantum” (NISQ) era. Intermediate-scale refers to the fact that most existing quantum computers have on the order of 100 qubits, orders of magnitude fewer than most researchers estimate will be required to make a quantum computer useful. Even at this intermediate scale, these systems are still plagued by error-inducing noise.

Solving the noise problem is arguably the most important and daunting problem facing quantum computing, in a field of research overflowing with important and daunting problems. A variety of approaches are being explored, and generally speaking they fall into two main categories: approaches that try to limit the amount of noise introduced to the system and approaches that attempt to correct the errors once they occur.

“Every quantum bit has error associated with it, which means as you bring together more qubits to do more computation, you’re also introducing more error into your system,” says Chou. “The whole idea behind quantum error correction is using qubits to protect against new errors introduced to the system so that, as you add more qubits into a system, the amount of error actually decreases.”
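Chou’s point can be illustrated with the simplest error-correcting code there is: a classical three-bit repetition code decoded by majority vote. Real quantum error correction (surface codes, for example) is far more involved, since quantum states can’t simply be copied, but this Monte Carlo sketch shows the same qualitative behavior she describes: below a certain physical error rate, adding redundancy drives the logical error rate down rather than up.

```python
# Three-bit repetition code: encode one bit as three copies, decode by
# majority vote, and estimate the logical error rate by simulation.
import random

def logical_error_rate(p: float, trials: int = 100_000) -> float:
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))
        if flips >= 2:            # two or more flips defeat the majority vote
            failures += 1
    return failures / trials

for p in (0.01, 0.05, 0.10):
    print(f"physical error {p:.2f} -> logical error {logical_error_rate(p):.4f}")
# For small p the logical rate is roughly 3*p**2, far below p itself:
# redundancy suppresses errors instead of compounding them.
```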

Chou estimates that a universal quantum computer will require at least 1 million qubits to do useful calculations for molecules and materials. Overcoming errors even at today’s far smaller scale remains a formidable challenge, and getting to 1 million qubits will likely require some mix of enhanced noise resistance and improved error correction. The question is how to get there. Increasingly, researchers are turning to AI to help make it happen.

The Rise of Quantum AI

The history of science and technology is, in many respects, a history of serendipity. From apocryphal eureka moments like Newton’s apple to the discovery of penicillin in a contaminated Petri dish, the flashes of insight that have profoundly changed the world have often come from the most unexpected places. For Neven of Quantum AI, it was the decision to listen to a public radio station on his way home from the office one evening that changed the trajectory of his career—and possibly all of computing.

At the time, Neven had already made a name for himself as one of the world’s leading researchers on machine vision. In the early 2000s, he had been tapped by Google to lead its visual search team. At Google, he developed the visual recognition technologies that are foundational for Image Search, Google Photos, YouTube, and Street View, and he was nearing completion of the first prototype of the augmented reality–enabled glasses that would become Google Glass.

Meanwhile, Neven had also been carving out a niche for himself in the burgeoning field of quantum computing. He was intrigued by how this new technology might be applied to machine learning (ML) to usher in a new computing paradigm that could accomplish tasks neither technology could on its own. He had already made significant progress toward this goal by becoming the first to implement an ML and image recognition algorithm on a quantum computer in 2007, but it was the public radio broadcast during that fateful commute that convinced him to go all in.

“I had heard a story on NPR about quantum computing, and it sounded to me that a quantum computer would be a good tool to do certain image transformations, like Fourier transforms,” says Neven, referring to a technique that decomposes an image into frequencies so that its features can be more efficiently processed by a computer. “That kindled my interest, but I was semi-mistaken about quantum computers being a good tool for it. That application may come one day, but it won’t be one of the first applications.”
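For readers curious what that decomposition looks like, here is a small classical sketch using NumPy’s fast Fourier transform on a synthetic image. (A quantum Fourier transform acts on a state’s amplitudes rather than on a pixel array, which is part of why, as Neven notes, this isn’t likely to be an early quantum application.)

```python
# Decompose a tiny synthetic image into spatial frequencies with the FFT.
import numpy as np

x = np.arange(64)
stripes = np.sin(2 * np.pi * 4 * x / 64)      # a stripe pattern, frequency 4
image = np.tile(stripes, (64, 1))             # repeat it across 64 rows

spectrum = np.fft.fft2(image)                 # 2-D discrete Fourier transform
magnitudes = np.abs(spectrum)

# Nearly all the energy lands in a couple of frequency components, which is
# what makes frequency space convenient for compression and feature analysis.
peak = np.unravel_index(np.argmax(magnitudes), magnitudes.shape)
print("dominant frequency component at index:", peak)   # (0, 4)
```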

Nevertheless, as Neven continued to explore the relationship between quantum computing and machine learning, it became apparent that there were some very promising ways to bridge these two worlds, particularly when it came to optimizing how ML systems are trained. So in 2012, Neven and his team at Google launched the Quantum Artificial Intelligence lab, in partnership with researchers at NASA Ames and the Universities Space Research Association, with the goal of building a quantum computer and finding impactful ways to use it—including advancing machine learning.

As Neven wrote in a blog post announcing the lab, the way machine learning improves is by creating better models of the world to enable more accurate predictions. But the world is a complex place, and some aspects of nature are effectively impossible to model with binary code. Classical computers operate in the world of ones and zeros, presence and absence, on and off. But if you probe nature at a deep enough level, you’ll encounter phenomena that can’t be fully captured in binary. Sometimes, when nature poses an either/or question, the answer is simply yes.

Unknown Unknowns

By the time Neven started the Quantum AI lab in 2012, he and several other researchers had already demonstrated that ML algorithms could be implemented on research-grade quantum systems that were designed to solve specific and narrow tasks. Implementing ML algorithms on modern “general purpose” quantum computers remains a significant obstacle and an active area of research for Neven and his collaborators.

So far, quantum computers have struggled to show that they provide superior performance vis-à-vis classical computers in any context that is useful in the real world. Part of the reason is that they still struggle with errors and so are neither accurate nor large enough to implement many quantum algorithms; the other reason is that not every problem has an obvious—and, more importantly, provable—quantum advantage. Most benchmarks for quantum advantage involve computing solutions to esoteric mathematical problems with no obvious real-world relevance. Even then, quantum computers have struggled to demonstrate that they are faster at solving these problems than the most advanced classical computers.

In 2019, Neven’s team at Google’s Quantum AI lab achieved “quantum supremacy” for the first time in history. Their quantum computer, Sycamore, took roughly three and a half minutes to find the answer to a technical problem in quantum computing called random circuit sampling that would’ve taken the most capable classical supercomputer at the time 10,000 years to solve. It was an important scientific achievement and benchmark, though the problem has no obvious real-world application.
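Random circuit sampling is easier to show than to describe: apply layers of randomly chosen gates to a register of qubits, then sample bitstrings from the resulting output distribution. The toy statevector simulation below (plain NumPy, four qubits rather than Sycamore’s 53, and purely illustrative) does the task trivially; the benchmark is hard classically only because the memory and time required grow exponentially with the number of qubits.

```python
# Toy random circuit sampling on a 4-qubit statevector.
import numpy as np

rng = np.random.default_rng(42)
n = 4                                  # qubits; the state has 2**n amplitudes
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                         # start in |0000>

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of the statevector."""
    psi = np.moveaxis(state.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=1)
    return np.moveaxis(psi, 0, q).reshape(-1)

CZ = np.diag([1, 1, 1, -1]).astype(complex)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z gate between qubits q1 and q2."""
    psi = np.moveaxis(state.reshape([2] * n), (q1, q2), (0, 1))
    psi = (CZ @ psi.reshape(4, -1)).reshape([2, 2] + [2] * (n - 2))
    return np.moveaxis(psi, (0, 1), (q1, q2)).reshape(-1)

for layer in range(8):                 # eight layers of random gates
    for q in range(n):                 # random single-qubit rotations
        t = rng.uniform(0, 2 * np.pi)
        gate = np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]], dtype=complex)
        state = apply_1q(state, gate, q)
    for q in range(layer % 2, n - 1, 2):   # alternating entangling pairs
        state = apply_cz(state, q, q + 1)

probs = np.abs(state) ** 2             # Born-rule output distribution
samples = rng.choice(2 ** n, size=5, p=probs)
print([format(s, f"0{n}b") for s in samples])
```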

Part of the challenge with quantum computing is that it’s difficult to prove no classical algorithm can find a solution as efficiently as, if not more efficiently than, the quantum computer. Most of the time, this highfalutin mathematical debate plays out at research conferences and in the annals of scientific journals. But in this case, Google didn’t have to wait long to see its claims to quantum supremacy challenged by new classical techniques. In 2024, a group of Chinese researchers published data showing that they had outperformed the 2019 Sycamore processor on the same challenge using hundreds of conventional chips. Soon after, Google published a follow-up paper demonstrating that an updated Sycamore processor could outperform 2024’s most powerful supercomputer.

This uncertainty around quantum supremacy, however, is just the nature of the game in quantum computing. There is still broad consensus among researchers that Neven’s team at Google is leading the pack. The team’s work over the past decade is why it no longer seems quite so far-fetched that the world could have a functioning quantum computer doing useful work within the next 10 to 20 years.

Neven is the first to admit that the road to a general-purpose quantum computer that can unambiguously outperform advanced classical computers will be long and winding. The technical challenges are immense, but so, too, are the stakes. The emergence of a bona fide “universal quantum computer” would likely change the course of human history and unlock new frontiers in mathematics, physics, biology, and everything in between. Such a computer would allow us to model the physical world in all its dynamism, a richness that can’t be captured in the comparatively flat language of binary. The more accurately we can model nature, the faster we can find answers to our biggest scientific mysteries. In biology, for example, cells embody a multiplicity of identities and potential; with classical computing, we’re forced to flatten them into a single instant in time or a ranked list of identities: skin cell, cancer cell, dying cell, growing cell; on the arm, in the bloodstream, in the brain. In reality, as the body moves, changes, and shifts, these cells are everything, everywhere, at once.

This is the type of scientific challenge that’s made for a universal quantum computer, but a quantum computer that is up to the task is neither guaranteed nor imminent. However, in the past few years alone, there has been an increasing number of signs that we are at least on the right path to a universal quantum computer and that the intersection of quantum computing and AI will be an important part of the puzzle—including both the use of AI to accelerate quantum computing and, eventually, AI applications for quantum computing.

At Google, Neven, Chou, and their colleagues are studying ways to both apply AI to better design quantum computers and use quantum computers to build enhanced AI systems. For example, Chou points to how Google engineers are using AI to improve quantum-chip fabrication processes with image recognition systems that streamline qubit quality assessment; developing ML tools to automate coding tasks for quantum systems; and building transformer models that are helping enhance quantum error correction.

Reciprocally, Neven highlights how quantum computing promises to dramatically reduce the sample complexity of machine learning, potentially leading to AI systems that can learn from exponentially fewer examples than their classical counterparts. This hints at a future in which quantum-enhanced optimization could solve more sophisticated training problems, even discerning and discarding mislabeled data in large datasets.

The future of quantum computing is full of promise and uncertainty. Staggering advances are being made every day in both quantum computing and AI, but when we look at the history of computing, we see that time and time again, the best forecasts about the future of technology are laid to waste. More than 55 years ago, when the first basic terminals were connected via ARPANET—the progenitor of the modern web—no one could have predicted the rise of the major platforms we know today. But if history teaches us anything, it’s that the future is always weirder, and often more wonderful, than we could have ever imagined.
