Ordinary computers can beat Google’s quantum computer after all | Science

When the age of quantum computing dawned 3 years ago, its rising sun may have been hiding behind a cloud. In 2019, Google researchers claimed they had achieved a milestone known as quantum supremacy when their Sycamore quantum computer performed in 200 seconds an abstruse calculation that they said would tie up a supercomputer for 10,000 years. Now scientists in China have done the calculation in a few hours using ordinary processors. A supercomputer, they say, could easily beat Sycamore.

“I think they’re right that they could have simulated the . . . task in seconds if they had access to a large enough supercomputer,” says Scott Aaronson, a computer scientist at the University of Texas, Austin. The advance takes some of the shine off Google’s claim, says Greg Kuperberg, a mathematician at the University of California, Davis. “Getting 300 feet from the summit is less exciting than getting to the top.”

Still, the promise of quantum computers remains untarnished, say Kuperberg and others. And Sergio Boixo, chief scientist for Google Quantum AI, said in an email the Google team knows its advantage won’t last very long. “In our 2019 paper, we said that classical algorithms would improve,” he said. But “we don’t think this classical approach can compete with quantum circuits in 2022 and beyond.”

The “problem” that Sycamore solved was designed to be difficult for a conventional computer but as easy as possible for a quantum computer, which manipulates qubits that can be set to 0, 1, or, thanks to quantum mechanics, any combination of 0 and 1 at the same time. Together, Sycamore’s 53 qubits, tiny electrical resonant circuits made of superconducting metal, can encode any number from 0 to 2^53 (roughly 9 quadrillion), or even all of them at once.
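To make the counting concrete, here is a minimal NumPy sketch (a toy, assuming nothing about Sycamore’s hardware) of how n qubits encode 2^n amplitudes at once: the register is a vector of complex amplitudes, and a Hadamard gate on every qubit spreads it over all 2^n numbers simultaneously.

```python
import numpy as np

n = 5                      # 5 qubits for illustration; Sycamore has 53
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0             # all qubits start in |00000>

# A Hadamard gate on every qubit puts the register into an equal
# superposition of all 2**n basis states at once.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):
    t = state.reshape([2] * n)            # one axis per qubit
    t = np.tensordot(H, t, axes=([1], [q]))  # apply H to qubit q
    state = np.moveaxis(t, 0, q).reshape(-1)

probs = np.abs(state) ** 2
# each of the 32 outcomes now has probability 1/32
```

The key point mirrored from the text: the classical memory cost doubles with each qubit, which is what makes 53 qubits so expensive to simulate directly.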

Starting with all qubits set to 0, Google researchers applied a random but fixed set of logic operations or gates over 20 cycles to individual qubits and pairs, and then read out the qubits. Roughly speaking, quantum waves representing all possible outputs slopped between the qubits, and the gates created interference that boosted some outputs and canceled others. So some should be more likely to show up than others. A spiky output pattern emerged over millions of attempts.
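The procedure above can be sketched in miniature (an illustrative toy, not Google’s circuit: random two-qubit gates stand in for Sycamore’s specific gate set). Applying random gates in alternating "brickwork" layers over many cycles produces, through interference, exactly the spiky output distribution the text describes: some bit strings come up far more often than the uniform average.

```python
import numpy as np

rng = np.random.default_rng(1)
n, cycles = 6, 20
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0

def haar_unitary(dim):
    # random unitary via QR of a complex Gaussian matrix, with phase fix
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def apply_2q(state, U, a, b):
    # contract a 4x4 gate into qubits a and b of the state vector
    t = np.moveaxis(state.reshape([2] * n), (a, b), (0, 1)).reshape(4, -1)
    t = (U @ t).reshape([2, 2] + [2] * (n - 2))
    return np.moveaxis(t, (0, 1), (a, b)).reshape(-1)

for c in range(cycles):
    # alternate gate placement each cycle, brickwork-style
    for a in range(c % 2, n - 1, 2):
        state = apply_2q(state, haar_unitary(4), a, a + 1)

probs = np.abs(state) ** 2
# interference makes some outputs far likelier than the uniform 1/64
spikiness = probs.max() / (1 / 2**n)
```

Sampling bit strings from `probs` many times would reproduce, at toy scale, the spiky fingerprint that emerged from Sycamore’s millions of runs.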

The Google researchers argued that simulating these interference effects would overwhelm even Summit, a supercomputer at Oak Ridge National Laboratory that has 9,216 central processing units and 27,648 faster graphics processing units (GPUs). IBM researchers who developed Summit quickly countered that if the computer used all of its available hard drive storage, it could do the computation in a matter of days. Now, Pan Zhang, a statistical physicist at the Institute of Theoretical Physics of the Chinese Academy of Sciences, and colleagues have shown how to beat Sycamore in a paper in press at Physical Review Letters.

Among other things, Zhang and colleagues recast the problem as a 3D mathematical array called a tensor network. It consisted of 20 layers, one for each gate cycle, with each layer containing 53 points, one for each qubit. Lines connecting the points represented the gates, with each gate encoded in a tensor, a 2D or 4D grid of complex numbers. Running the simulation then reduced essentially to multiplying all the tensors. “The advantage of the tensor network method is that we can use many GPUs to do the computations in parallel,” says Zhang.
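As a toy illustration of that idea (not the team’s code), a tiny two-qubit circuit — a Hadamard on each qubit followed by a controlled-Z — can be written as tensors and contracted in a single `einsum` call. This is the same kind of tensor multiplication the paper distributes across many GPUs, just at trivial scale.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # 2D tensor: one-qubit gate
CZ = np.diag([1.0, 1, 1, -1]).reshape(2, 2, 2, 2)  # 4D tensor: two-qubit gate
zero = np.array([1.0, 0.0])                        # each qubit starts in |0>

# Contract inputs -> H gates -> CZ.  Indices: a,b inputs; i,j after the
# Hadamards; k,l circuit outputs.  CZ[k,l,i,j] maps (i,j) to (k,l).
amps = np.einsum('a,b,ia,jb,klij->kl', zero, zero, H, H, CZ)
# amplitudes: +1/2 on |00>, |01>, |10>; -1/2 on |11>
```

The order in which a big network like this is contracted determines the cost, and finding a good contraction order (and splitting it across GPUs) is where much of the classical speedup comes from.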

Zhang and colleagues also relied on a key insight: Sycamore’s calculation was far from exact, so theirs didn’t have to be either. Sycamore calculated the distribution of outputs with an estimated fidelity of 0.2%, just enough to distinguish the fingerprint spikes from the noise in the circuit. So Zhang’s team traded accuracy for speed, cutting some lines in its network and eliminating the corresponding gates. Cutting just eight lines made the calculation 256 times faster while maintaining a fidelity of 0.37%.

The researchers calculated the output pattern for 1 million of the 9 quadrillion possible number sequences, using an innovation of their own to obtain a truly random, representative set. The calculation took 15 hours on 512 GPUs and yielded the telltale spiky output. “It’s fair to say that the Google experiment has been simulated on a conventional computer,” says Dominik Hangleiter, a quantum computer scientist at the University of Maryland, College Park. On a supercomputer, the calculation would take a few dozen seconds, Zhang says: 10 billion times faster than the Google team estimated.

The advance underscores the pitfalls of pitting a quantum computer against a conventional one, researchers say. “There is an urgent need for better quantum supremacy experiments,” says Aaronson. Zhang suggests a more practical approach: “We should find some real-world applications to demonstrate the quantum advantage.”

Still, the Google demonstration wasn’t just hype, researchers say. Sycamore required far fewer operations and less power than a supercomputer, Zhang notes. And if Sycamore had had slightly higher fidelity, his team’s simulation couldn’t have kept up, he says. As Hangleiter puts it, “The Google experiment did what it was supposed to do, start this race.”
