Google tries out error correction on its quantum processor


Google’s Sycamore processor. (Image: Google)

The current generation of quantum hardware has been termed “NISQ”: noisy, intermediate-scale quantum processors. “Intermediate-scale” refers to a qubit count that is typically in the dozens, while “noisy” references the fact that current qubits frequently produce errors. These errors can be caused by problems setting or reading the qubits or by the qubit losing its state during calculations.

Long-term, however, most experts expect that some form of error correction will be essential. Most of the error-correction schemes involve distributing a qubit’s logical information across several qubits and using additional qubits to track that information in order to identify and correct errors.

Back when we visited the folks from Google’s quantum computing group, they mentioned that the layout of their processor was chosen because it simplifies implementing error correction. Now, the team is running two different error-correction schemes on the processor. The results show that error correction clearly works, but we’ll need a lot more qubits and a lower inherent error rate before correction is useful.

Variable geometry

In all quantum processors, the qubits are arranged with connections to their neighbors. There are many ways to arrange these connections, with limits imposed by the qubits that have to sit at the edge of the network and thus have fewer connections. (Most processors with a higher qubit count tend to have one or more inactivated connections, either due to a manufacturing problem or a high error rate.)

The connections among the qubits on a Sycamore chip. The real chip has far more qubits, but they’re all in this pattern. (Image: John Timmer)

Google chose a geometry in which all internal qubits are connected to four neighbors, while the ones on the edge have only a pair of connections. You can see this basic layout in the image above.

The two error-correction schemes are diagrammed below. In both diagrams, the data (a single logical qubit) is spread across the qubits represented by the red dots. The blue dots are qubits that can be measured to check for errors and manipulated to correct them. To make an analogy to standard bits, you can think of the blue qubits as a way of checking the parity of the neighboring data qubits and, if something has gone wrong, identifying the qubit most likely to have suffered the problem.

In the first setup, at left, the measurement and data qubits alternate along a linear chain, with the length of the chain limited only by the number of qubits in the processor (which has more qubits than the diagram here shows). Each measurement qubit tracks both of its neighbors; if either suffers a single error, measuring that qubit will detect it. (These being qubits, there is more than one possible type of error, and this scheme will fail if two different types of error occur simultaneously.)

The data (red) and measurement (blue) qubits are connected in two different ways: as a single chain (left) and an interconnected unit (right). (Image: John Timmer)

The second scheme, on the right, requires a more specific geometry, so it is harder to spread across larger portions of the processor. Determining which of the data qubits is at fault when an error is detected is more difficult. Calculations must be discarded rather than corrected when problems are found. The scheme’s advantage, however, is that it can detect both types of error simultaneously, so it provides more robust protection.
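
For the linear scheme, the parity-checking idea maps neatly onto classical bits. The sketch below is a toy classical analogue, not Google's implementation: it ignores phase errors entirely and simply shows how checking the parity of adjacent pairs can pinpoint a single flipped bit.

```python
# Toy classical analogue of the linear (repetition-code) scheme:
# data bits alternate with "parity checks" that each compare two neighbors.
# A single flipped data bit shows up in the one or two checks next to it.
# (Real qubits also suffer phase errors, which this classical sketch ignores.)

def measure_syndrome(data_bits):
    """Return the parity of each adjacent pair of data bits."""
    return [data_bits[i] ^ data_bits[i + 1] for i in range(len(data_bits) - 1)]

def locate_error(reference_syndrome, new_syndrome):
    """Guess which data bit flipped by seeing which checks changed."""
    changed = [i for i, (a, b) in enumerate(zip(reference_syndrome, new_syndrome)) if a != b]
    if not changed:
        return None                       # no error detected
    if len(changed) == 1:
        # only one check changed: an end bit flipped (first or last data bit)
        return 0 if changed[0] == 0 else len(new_syndrome)
    return changed[1]                     # two adjacent checks changed: the bit between them

# Example: start with five 0s, flip bit 2, and find it again.
data = [0, 0, 0, 0, 0]
reference = measure_syndrome(data)        # all checks agree initially
data[2] ^= 1                              # a single bit-flip error
print(locate_error(reference, measure_syndrome(data)))   # -> 2
```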

Did it work?

In general, it worked. In what’s probably the clearest demonstration, the researchers started the linear error correction system with a chain of five qubits, and they progressively added more until the chain reached 21 qubits. As the chain gained more and more qubits, it became progressively more robust, with the error rate falling by a factor of 100 between the chain of five and the chain of 21. Errors still occurred, though, so the error correction isn’t flawless. Performance remained stable out to 50 rounds of error checks.
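
To get a feel for why longer chains help, here is a toy calculation (again, not the paper's analysis): if each data bit independently flips with some assumed probability, a majority vote across the chain only fails when more than half the bits flip, so the logical error rate drops steeply as bits are added. The 1 percent per-bit error rate below is purely illustrative.

```python
# Toy illustration of error suppression with chain length (assumed numbers,
# not measured ones): independent bit-flips at probability p, majority vote.
from math import comb

def logical_error_rate(n, p):
    """Probability that more than half of n independent bits flip."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (3, 5, 11, 21):
    print(n, f"{logical_error_rate(n, 0.01):.2e}")
```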

For the second error-correction configuration, errors also occurred, but most were caught, and it was generally possible to infer their precise nature. But because the setup requires a more specific geometry to work, the team didn’t expand it beyond a limited number of qubits.

The error-correction system’s failures happened in part because the system is being asked to do so much. For the linear system, the researchers determined that 11 percent of the checks ended up detecting an error, which is substantial. That is clearly a function of the “noisy” aspect of our current NISQ processors, but it also means that the error correction has to be incredibly effective if it’s to catch them all. And since the error-correction machinery runs on the same hardware, the measurement qubits are subject to the same potential for errors as the data qubits.

Another problem the researchers saw is a product of the chain-like nature of the first system. Because the chain loops through the processor, qubits that are far from each other in the chain can actually end up physically adjacent to each other. That physical proximity allows the qubits to influence each other, creating correlated errors in measurements.

Finally, the whole system sometimes experienced extremely poor performance. The researchers ascribe performance issues to the impacts of cosmic rays or local radiation sources hitting the chip. While the issues aren’t especially frequent, they happen enough to be a problem and will scale up as the number of qubits continues to grow, simply because the processors will present an ever-growing target.

Practicality

In the end, we’re not there yet. For the second scheme, where the detection of errors caused the calculation to be thrown out, the research team found that the system was throwing out over a quarter of the operations. “We find that the overall performance of Sycamore [processors] must be improved to observe error suppression in [this] code,” the researchers concede.

Even with a 21-qubit-long chain, the error rate ended up being about one in every 100,000 operations. That’s certainly enough to expect that a calculation can proceed, with errors being caught and corrected. But remember: all 21 of these qubits were used to encode a single logical qubit. Even the largest of the current processors could only hold a couple of logical qubits using these schemes.

None of this will be a surprise to anyone involved in the world of quantum computing, where it is generally accepted that we’ll need roughly a million qubits before we can error correct enough logical qubits to perform useful calculations. That’s not to say NISQ processors won’t be useful before then. If there’s an important calculation that would otherwise require a billion years of supercomputing time, running it a few thousand times on a quantum processor is still reasonable, since at least some of those runs should come through error-free. But that useful error correction will clearly have to wait.

Nature, 2021. DOI: 10.1038/s41586-021-03588-y


