Daniel Gottesman stands by a whiteboard filled with equations.

One of the key barriers to practical quantum computing is the fragility of quantum systems. Quantum computers rely on entangled particles to work, but entanglement is easily disrupted by ‘noise’: interactions with the outside world.

Because of this, scientists are pouring their energy into devising error-correcting codes that can keep quantum computers from making mistakes. It’s challenging: unlike classical computing, you can’t just make a copy of a quantum system to ‘back it up’. You must instead use the unique properties of entanglement to ‘hide’ the quantum information, protecting it from interference.

Plenty of effort has gone into studying exact quantum error-correcting codes since the mid-1990s, and this research has produced several promising results, including a powerful family of codes known as stabilizer codes. But exact codes are narrow in scope, and experts sometimes find it useful to examine more general schemes that allow approximate quantum error-correction (AQEC), which offer richer possibilities in many scenarios.
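To make the idea of a stabilizer code concrete, here is a minimal sketch of the simplest textbook example, the three-qubit bit-flip code (a standard illustration, not one of the codes studied in the new paper; the helper functions below are hypothetical and written only for this sketch). It spreads one qubit’s worth of information across three entangled qubits, so a single bit flip can be detected by parity checks and undone without ever copying the state:

```python
# Minimal sketch (standard textbook example, not code from the paper):
# the three-qubit bit-flip code, one of the simplest stabilizer codes.
# It encodes a|0> + b|1> as a|000> + b|111>, detects a single bit flip
# by measuring the stabilizers Z0Z1 and Z1Z2, and undoes the flip.

import numpy as np

def encode(alpha: complex, beta: complex) -> np.ndarray:
    """Encode one logical qubit into three physical qubits."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = alpha   # |000>
    state[0b111] = beta    # |111>
    return state

def apply_x(state: np.ndarray, qubit: int) -> np.ndarray:
    """Flip one qubit (qubit 0 is the leftmost bit of the index)."""
    out = np.zeros_like(state)
    for idx, amp in enumerate(state):
        out[idx ^ (1 << (2 - qubit))] = amp
    return out

def syndrome(state: np.ndarray) -> tuple[int, int]:
    """Read off the Z0Z1 and Z1Z2 parities (deterministic for bit-flip errors)."""
    idx = next(i for i, amp in enumerate(state) if abs(amp) > 1e-12)
    b = [(idx >> (2 - q)) & 1 for q in range(3)]
    return (b[0] ^ b[1], b[1] ^ b[2])

def correct(state: np.ndarray) -> np.ndarray:
    """Use the syndrome to identify and undo a single bit flip."""
    lookup = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
    flipped = lookup[syndrome(state)]
    return state if flipped is None else apply_x(state, flipped)

if __name__ == "__main__":
    alpha, beta = 0.6, 0.8
    encoded = encode(alpha, beta)
    noisy = apply_x(encoded, 1)             # a bit-flip error on qubit 1
    recovered = correct(noisy)
    print(np.allclose(recovered, encoded))  # True: the state is restored
```

Real stabilizer codes follow the same recipe with more qubits and more parity checks, protecting against phase errors as well as bit flips.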

As the name implies, these approximate codes can help a quantum computer get almost, but not necessarily exactly, back to an intended state. AQEC is not just a ‘second best’ option—it is a tool that could eventually be made sufficiently precise for practical purposes in quantum computing, and it can even outperform exact quantum error-correction in some contexts.

Until now, the weakness of AQEC codes has been that they are rather loosely understood. Any encoding you can imagine can be thought of as a quantum error-correcting code, but some encodings are ‘trivial’: they will not succeed at correcting errors. For instance, ‘encoding’ a qubit by simply accompanying it with some additional random qubits is not helpful. Unfortunately, scientists have lacked a fundamental way to distinguish which codes should be regarded as trivial and which as nontrivial.

This situation dissatisfied quantum information expert Daniel Gottesman, a senior investigator in the NSF Quantum Leap Challenge Institute for Robust Quantum Simulation.

“Existing research didn't really put any constraints on how good approximate codes had to be. It seemed to me that there ought to be some kind of limit to how bad an approximation you could have, and still meaningfully call it an approximate quantum error-correcting code,” he says.  

Gottesman is the Brin Family Endowed Professor in Theoretical Computer Science at the University of Maryland, a co-director of the Joint Center for Quantum Information and Computer Science and a Distinguished Visiting Research Chair at Perimeter Institute.

To find a solution, Gottesman teamed up with some current and former Perimeter scientists.

Zi-Wen Liu, assistant professor at Tsinghua University and a former Perimeter postdoctoral fellow, shared Gottesman’s concerns.

They were joined in their efforts by Jinmin Yi, a Ph.D. student at Perimeter, and Weicheng Ye, a former Perimeter Ph.D. student and current postdoctoral fellow at the University of British Columbia.

The quartet of scientists published a paper today (September 3, 2024) in Nature Physics that establishes a rigorous new AQEC framework, setting a limit on how inaccurate these approximations can be in order to weed out codes that are inadequate.

One of the paper’s key ideas is a new code parameter the authors dub ‘subsystem variance’. It draws a connection between the degree of approximation of a quantum code and quantum circuit complexity, a concept of profound importance in both computer science and physics.

What is circuit complexity? Gottesman explains:

“If you start off with a bunch of separate qubits (quantum bits) that have no relation to each other, and you want to make a code space or encode it, how much time do you need to do that? Does the time depend on the size of the system? If it grows with the size of the system, how fast does it grow? The new bound that we're using is essentially the minimum time needed for all the qubits to talk to each other. If you have less time than that, it means it's still kind of disjointed, because not all the qubits have had time to connect up.”
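The counting argument behind that intuition is simple: if gates act on two qubits at a time, each layer of a circuit can at most double the number of qubits that have interacted, directly or indirectly, with any given qubit, so reaching all n qubits takes at least about log2(n) layers. The short sketch below is an illustration of this general lightcone-style counting, not code from the paper, and min_layers_to_connect is a hypothetical helper written only for this example:

```python
# Illustration (not from the paper) of why the "time for all qubits to talk
# to each other" grows with system size. With two-qubit gates, each circuit
# layer can at most double the set of qubits that have interacted with any
# given qubit, so reaching all n qubits needs at least ceil(log2(n)) layers.

import math

def min_layers_to_connect(n: int) -> int:
    """Lower bound on the number of two-qubit-gate layers needed before
    one qubit can have influenced all n qubits."""
    layers = 0
    reached = 1          # the starting qubit has only 'talked to' itself
    while reached < n:
        reached *= 2     # each layer can at most double the connected set
        layers += 1
    return layers

if __name__ == "__main__":
    for n in (2, 8, 64, 1024):
        print(n, min_layers_to_connect(n), math.ceil(math.log2(n)))
```

For example, 1,024 qubits need at least 10 layers of two-qubit gates before every qubit can have been influenced by every other; with more restricted connectivity the required depth only grows.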

In their new paper, the four scientists establish that when an AQEC code’s error, its subsystem variance, is sufficiently small, the code is subject to a lower bound on the time needed for all the qubits to connect up. In other words, it puts a constraint on the circuit complexity.

This insight has given researchers a way to fundamentally characterize the ‘components’ of a nontrivial AQEC code for the first time. It’s an important step forward for quantum information theory. But that isn’t all.

“The exciting part about this paper has two components,” says Liu. “First, it gives us a fundamental theory for separating a somewhat nontrivial approximate code from trivial ones, offering a versatile framework for understanding all kinds of generalized notions of quantum codes. But second, when applied to a bunch of physical scenarios, including topological order and conformal field theories, we found some interesting implications.”

This new bound, in other words, seems to be telling physicists something fundamental about the behavior of quantum systems in the real world.

Beyond quantum computing: the implications for condensed matter and quantum gravity

The first of these real physical scenarios relates to topological order.

Topological order is an essential concept in quantum matter research. It offers a particular way of describing how particles are organized within a material, and it can help describe phase transitions and emergent behavior in quantum materials like superconductors.

In topological order research, there are several ways to define the properties of a system. One is through entanglement conditions; two ‘definitions’ fall under this banner: long-range entanglement, which is in fact a circuit-complexity condition, and topological entanglement entropy, both of which describe the entanglement in the system. The other is through a mathematically rigorous code property that accompanies topological order.

Until now, the relationship between these two kinds of definitions was not fully understood.

“The entanglement conditions and the code property conditions are often lumped together in the community, and there has been little understanding about their relationship,” says Yi. “So, our work provides a quantitative understanding of why these two notions are actually different – we gave a mathematically rigorous way of defining it.”

It’s an important insight, but their research also has compelling implications for quantum gravity, through the lens of conformal field theories (CFTs).

Conformal field theories arise from critical quantum systems. ‘Critical’ in this instance refers to the point where a phase transition occurs, like the exact moment a metal reaches a cool enough temperature to become a superconductor. Many quantum materials show unique behaviors at these critical points, and CFTs help describe some of those behaviors.

CFTs are incredibly valuable in quantum gravity research. They are most famously used in the AdS/CFT correspondence, a ‘toy model universe’ in which a description of gravity is equivalent to a quantum field theory in one lower dimension. You may have heard the sci-fi-sounding idea that the universe might be a hologram; AdS/CFT is among the most prominent concrete realizations of that idea. It’s what physicists call a ‘holographic duality’ because it describes an equivalence between a quantum field theory and a gravity theory in one higher dimension: one acts like a holographic projection of the other.

Physicists are fascinated by AdS/CFT because it provides an intriguing, though limited, way to unify quantum mechanics with Einstein’s theory of gravity, two theories which seem to be incompatible otherwise.

“There has been extensive research into how to use quantum error-correcting code to study holographic systems, but mostly people are just playing with toy models based on exact codes that fail to capture key CFT properties. Our work first studies the code properties of CFT systems, and we give a rigorous description of such CFT codes in relation to intrinsic features of CFT, revealing their approximate nature. The analysis potentially gives new insights about what kind of CFT systems are accompanied by a gravity dual,” says Yi.

The research team discovered that the CFT systems whose code properties pass their new AQEC test seem more likely to be compatible with a gravitational description. More work is necessary to understand this, but it is a tantalizing result.

Liu believes that their work gives more impetus to studying AQEC in various contexts.

“Physicists have been applying ideas from the field of quantum information to study physical scenarios for a long time,” he says. “They have been thinking about topological order as fundamentally a quantum error-correcting code, or in the field of holography, people talk about a ‘holography code’. In most of these cases, the model they are studying is intrinsically an exact code. But in real life, there are all kinds of imperfections, deviations, and more fundamental reasons, such as symmetries or physical nature, that render AQEC codes.”

What intrigued the team most is that AQEC codes emerge from nature – they aren’t arbitrary constructs.

The bottom line, says Gottesman, “is that this new dividing line that we're proposing between acceptable and unacceptable codes corresponds to something physically meaningful. For instance, in the case of conformal field theories, there exists a separate reason why it’s an important dividing line. That's quite nice. It shows that this was not just some random thing that we came up with, but that it's connected to something fundamental.”

—This story was written by Scott Johnston and was republished from the Perimeter Institute site.

Jinmin Yi, Weicheng Ye, Daniel Gottesman, and Zi-Wen Liu, “Complexity and order in approximate quantum error-correcting codes,” Nature Physics (2024).
