Is Google’s quantum supremacy milestone now really obsolete?

In August 2022, news spread through the media that a team of researchers from the Chinese Academy of Sciences had managed to implement Google's quantum supremacy task on a classical supercomputer. Given that Google's 2019 milestone plays a prominent role for the quantum computing community, and also for its communication with the public, I think the community needs to deal with the details of this new paper.

 

The Sycamore chip (credit: Google, CC BY 3.0, via Wikimedia Commons)

Google's quantum supremacy milestone

You know the story: in October 2019, Google published a seminal milestone in quantum computing. In a paper which Google's Quantum AI team managed to keep secret almost until the end, they announced their new quantum computer Sycamore with 53 qubits i. But most importantly, they also described a quantum sampling algorithm that ran in 200 seconds, while they estimated that even the largest classical supercomputer would need 10,000 years for the same task.

Despite some criticism, the work has been regarded as one of the major milestones for quantum computing in recent history. Over time, the criticism grew louder, and in August 2022 news spread through the media that a team of researchers from the Chinese Academy of Sciences had finally managed to match the performance on the same task using a classical supercomputer. By the way, the preprint dates back to November of last year.

In October 2019, I myself wrote a pretty enthusiastic article about quantum supremacy in my German online book, and now I have looked into the paper „Solving the sampling problem of the Sycamore quantum circuits“ by Pan, Chen and Zhang ii.

Quantum Random Circuit Sampling

The idea of Google's supremacy experiment is simple and pretty brilliant. Once a quantum circuit is complex enough, no classical supercomputer should be able to simulate it, due to the exponentially large Hilbert space of the resulting state. Google achieved just this by randomly assembling a 53-qubit circuit from a set of one- and two-qubit quantum gates. The two-qubit gates are iSWAP-like gates, which are especially hard for a classical computer to simulate at large scale. The complexity of the circuit grows exponentially with circuit depth, and at 20 cycles Google hit the supremacy regime. The result of the algorithm is a distribution of random-looking 53-bit strings, like a „speckled intensity pattern produced by light interference in laser scatter“, as the paper states.
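
To get a feeling for what such a circuit looks like, here is a minimal toy sketch (my own illustration in plain NumPy, not Google's code, with toy parameters far below the supremacy regime): random single-qubit rotations interleaved with plain iSWAP gates standing in for Google's iSWAP-like gates, applied in random pairings instead of Sycamore's 2D grid pattern, followed by a brute-force state vector simulation and a few samples from the resulting „speckled“ distribution.

```python
# Toy random circuit sampling: random single-qubit gates + iSWAP layers,
# brute-force state vector simulation, then sampling of output bit strings.
import numpy as np

rng = np.random.default_rng(0)
n = 8                                  # toy size; Sycamore has 53 qubits
dim = 2 ** n

def random_su2():
    """Random single-qubit unitary via QR of a random complex matrix."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

ISWAP = np.array([[1, 0, 0, 0],
                  [0, 0, 1j, 0],
                  [0, 1j, 0, 0],
                  [0, 0, 0, 1]], dtype=complex)

def apply_1q(state, gate, q):
    psi = np.tensordot(gate, state.reshape([2] * n), axes=([1], [q]))
    return np.moveaxis(psi, 0, q).reshape(dim)

def apply_2q(state, gate, q1, q2):
    psi = np.tensordot(gate.reshape(2, 2, 2, 2), state.reshape([2] * n),
                       axes=([2, 3], [q1, q2]))
    return np.moveaxis(psi, [0, 1], [q1, q2]).reshape(dim)

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                         # start in |0...0>

cycles = 10                            # Google's supremacy circuits used 20 cycles
for _ in range(cycles):
    for q in range(n):
        state = apply_1q(state, random_su2(), q)
    for q1, q2 in rng.permutation(n).reshape(-1, 2):
        state = apply_2q(state, ISWAP, int(q1), int(q2))

probs = np.abs(state) ** 2             # the "speckled" output distribution
probs /= probs.sum()
samples = rng.choice(dim, size=5, p=probs)
print([format(int(s), f"0{n}b") for s in samples])
```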

A central feature of the paper is the answer to the following question: how do you actually prove that the quantum computer did what it was supposed to do and didn't just produce random-looking trash?

For one thing, they reproduced the results of their quantum hardware for less complex circuits of the same style on classical hardware; this is how Google's team arrived at the 10,000-year estimate. But most of all, they used a reliable benchmark, which they had already introduced in 2017 iii, to measure the correctness of the resulting distribution …

The key point: Google's cross entropy benchmark

The basic idea of the cross entropy benchmarking fidelity is very clever: if we measure the quantum circuit, we will most likely obtain a bit string with a large amplitude in the resulting quantum state of the circuit. If we now calculate the ideal probability of just this bit string on a classical computer, we should get a value that is larger than average. Now we collect a large number of measurement results and their bit strings. If we sum up all their calculated ideal probabilities, we should get a value that is „suspiciously large“ (as Scott Aaronson calls it).

BTW 1: Calculating „a few“ amplitudes of the ideal circuit is nothing compared to calculating the full, exponentially large state vector, as you will see later in this text.

BTW 2: Could you „spoof“ the cross entropy fidelity? Let's say we don't measure a quantum circuit but just select a series of random bit strings. What would the fidelity look like?

In detail, the fidelity is defined as

$$
F_{XEB} = \frac{2^n}{k} \left( \sum^k_{i=1} |\langle x_i | C | 0^n \rangle|^2 \right) - 1
$$

Here, $ n $ is the number of qubits, $ k $ is the number of samples, $ C $ is the quantum circuit and $ x_i $ is the i-th sampled bit string, so $ |\langle x_i | C | 0^n \rangle|^2 $ is the ideal probability of the circuit producing $ x_i $.

Now, $ \tfrac{1}{k} \big( \sum … \big) $ is like a Monte Carlo integration: if the $ x_i $ were drawn from the uniform distribution, this average would converge to the uniform summation over the full computational basis, giving $ \tfrac{1}{2^n} \sum_x |\langle x | C | 0^n \rangle|^2 = \tfrac{1}{2^n} \, \| C |0^n\rangle \|^2 = \tfrac{1}{2^n} $, since $ C $ is unitary. So for randomly guessed $ x_i $ we have

$$
F_{XEB} = 0
$$

Question: But could you improve this by guessing better?

Answer: This is exactly what Pan, Chen and Zhang did!
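
To make the benchmark concrete, here is a toy numerical check (my own sketch with made-up sizes, not the actual experiment): for an „ideal“ distribution with the statistics of a random circuit (the Porter-Thomas distribution, see footnote iv), uniformly guessed bit strings score $ F_{XEB} \approx 0 $, samples drawn from the ideal distribution score $ \approx 1 $, and a guesser that keeps only strings with large ideal probability already scores well above 0.

```python
# Toy check of the cross entropy fidelity for three kinds of "samplers".
import numpy as np

rng = np.random.default_rng(1)
n = 12                                 # toy number of qubits
N = 2 ** n

# Porter-Thomas-like ideal probabilities of a random circuit: p ~ Exp(1/N)
p_ideal = rng.exponential(1.0 / N, size=N)
p_ideal /= p_ideal.sum()

def f_xeb(samples):
    """F_XEB = 2^n * (mean ideal probability of the sampled strings) - 1."""
    return N * p_ideal[samples].mean() - 1.0

k = 200_000
uniform_samples = rng.integers(N, size=k)            # pure random guessing
ideal_samples = rng.choice(N, size=k, p=p_ideal)     # a perfect quantum device

# "Guessing better": keep only guesses that land in the top 10% of p_ideal
threshold = np.quantile(p_ideal, 0.9)
biased_samples = uniform_samples[p_ideal[uniform_samples] > threshold]

print("uniform guessing:", round(f_xeb(uniform_samples), 3))   # close to 0
print("ideal sampling:  ", round(f_xeb(ideal_samples), 3))     # close to 1
print("biased guessing: ", round(f_xeb(biased_samples), 3))    # well above 0
```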

An important fact about Google's result: as Sycamore is a NISQ device, it produces imperfect results, and the measured distribution is not the true distribution. A perfect quantum device would actually achieve a fidelity of 1 (the argument for this is much more subtle iv). But Google scored a fidelity of

$$
F_{XEB} = 0.002
$$

So to rival Google's result, you do not need a perfect result either.

Interpretations of „Solving the sampling problem“

The 10,000-year estimate by Google was also a consequence of the type of simulation they used. As there is no way to store a state of the $ 2^{53} \approx 10^{16} $-dimensional Hilbert space in RAM, they used a hybrid Schrödinger-Feynman algorithm, which breaks the circuit into patches and recombines them via path integrals at the end. This is memory efficient but becomes exponentially more computationally expensive than a pure Schrödinger simulation. But Google's team also mentions in the paper: „We expect that lower simulation costs than reported here will eventually be achieved, but we also expect they will be consistently outpaced by hardware improvements on larger quantum processors.“
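
For intuition, here is a tiny sketch of the Schrödinger-Feynman idea (my own toy example with a 2-qubit circuit and a CZ as the gate crossing the cut, not Google's implementation): decompose the cross gate into a sum of product terms, simulate each patch separately per term, and recombine only at the end. Note that CZ splits into just two terms, while the iSWAP-like gates Google uses need more terms per cut gate, which is why the path sum becomes so expensive.

```python
# Schrödinger-Feynman toy: two 1-qubit "patches" connected by one CZ gate.
import numpy as np

zero = np.array([1, 0], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
P0 = np.diag([1.0, 0.0]).astype(complex)   # |0><0|
P1 = np.diag([0.0, 1.0]).astype(complex)   # |1><1|

rng = np.random.default_rng(3)

def random_u():
    """Random single-qubit unitary (QR trick)."""
    q, r = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

A, B, C, D = (random_u() for _ in range(4))

# The cross gate written as a sum of product terms ("paths" across the cut):
# CZ = |0><0| (x) I  +  |1><1| (x) Z
paths = [(P0, I2), (P1, Z)]

# Schrödinger-Feynman: one small simulation per patch and per path,
# recombined only at the very end.
state_sf = sum(np.kron(C @ M @ A @ zero, D @ N @ B @ zero) for M, N in paths)

# Reference: a plain Schrödinger simulation of the full 2-qubit circuit.
CZ = np.diag([1.0, 1.0, 1.0, -1.0]).astype(complex)
state_full = np.kron(C, D) @ CZ @ np.kron(A, B) @ np.kron(zero, zero)

print(np.allclose(state_sf, state_full))   # True
```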

Just a few days after the release, IBM's quantum team proposed exactly such a cheaper simulation: an alternative approach that aimed at using the massive hard drives of the Summit supercomputer instead of RAM. They estimated a runtime of about 2.5 days. Now, this approach still consumes exponential resources, and, probably for practical reasons, they never executed it. So trying to solve the problem by running a full amplitude simulation and sampling from it seems unrealistic, which is what the Chinese team now emphasizes as well.

We all know that a quantum computer itself does just this: a full amplitude generation. But of course, we are never able to see the full amplitude result; all we see are repeated measurement results. So in this sense: if you are able to somehow generate a series of bit strings for a given quantum circuit which scores at least 0.002 in the cross entropy benchmark, what should this be called? Is this „solving the sampling problem“ or is this just „spoofing the Sycamore test“? I think both terms are fair.

The classical sampling algorithm by Pan, Chen and Zhang

So a full amplitude simulation won't work. But if we just did a single amplitude simulation of one bit string, the computational cost would be vastly reduced. Just think of a simple example like $ \langle 0^n | H^{\otimes n} | 1^n \rangle $. You never need to write down the full state vector; you just focus on, or project onto, the given bit strings. Pan, Chen and Zhang view the supremacy circuit as a gigantic three-dimensional tensor network with certain boundary conditions for the initial and the final state. To request a single amplitude for a certain computational basis state / bit string, both boundaries are fixed.
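
As a minimal illustration of why a single amplitude is cheap (my own toy example, matching the $ \langle 0^n | H^{\otimes n} | 1^n \rangle $ case above): the amplitude factorizes into a product of $ n $ matrix elements, so the $ 2^{53} $-dimensional state vector is never built. For circuits with entangling gates, the same idea generalizes to contracting a tensor network with both boundaries fixed, which is what Pan, Chen and Zhang do at scale.

```python
# Single amplitude <0...0| H^(x)n |1...1> without any 2^n-dimensional vector.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

n = 53
# <0|H|1> for every single qubit; the full amplitude is just the product of
# these n matrix elements.
amplitude = np.prod([H[0, 1]] * n)
print(amplitude, "vs expected", 2.0 ** (-n / 2))
```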

The case in their work is a little different: they choose their 53-bit strings in the following way:

      1. 6 bits are variable.
      2. For the remaining 47 bits, they generate $ 2^{20} $ (about one million) randomly chosen strings.

Altogether this covers about $ 10^8 $ of all $ 2^{53} \approx 10^{16} $ possible bit strings, i.e. only about one string in a hundred million. It is remarkable that this proved to be enough samples. They call the collection the „sparse state“. Then, for each of the random 47-bit strings, they finally keep only the most significant pattern of the 6 variable bits, i.e. the one that generates the highest score on the cross entropy fidelity. This leaves about 1 million uncorrelated bit strings, which is comparable to the figures from Google's experiment.
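
Here is a schematic sketch of this construction (entirely my own, with toy sizes and stand-in random numbers instead of the real tensor network output): generate the random 47-bit strings, combine them with all $ 2^6 $ patterns of the variable bits, and keep per group only the pattern with the largest ideal probability.

```python
# Schematic "sparse state" construction with toy sizes.
import numpy as np

rng = np.random.default_rng(2)
n_var, n_fixed = 6, 47
n_groups = 1000                    # the paper uses 2**20, i.e. about a million

# Random strings for the 47 "fixed" bits; together with all 2**6 patterns of
# the variable bits this spans n_groups * 2**6 candidate bit strings
# (about 10**8 in the paper).
fixed_parts = rng.integers(0, 2 ** n_fixed, size=n_groups, dtype=np.int64)

# Stand-in for the amplitudes that the sparse tensor network contraction would
# deliver: one complex amplitude per (random 47-bit string, variable pattern).
amps = (rng.normal(size=(n_groups, 2 ** n_var))
        + 1j * rng.normal(size=(n_groups, 2 ** n_var)))

# Keep, for each random 47-bit string, the variable pattern with the largest
# probability |amplitude|^2, i.e. the one contributing most to F_XEB.
best_var = np.argmax(np.abs(amps) ** 2, axis=1)

# Reassemble full 53-bit strings (putting the variable bits in the lowest
# positions is just a convention chosen for this illustration).
samples = (fixed_parts << n_var) | best_var
print(format(int(samples[0]), "053b"))
```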

In a sense, this is similar to what Ewin Tang does in her quantum-inspired algorithms: sample first and then calculate (quantum algorithms do the opposite).

There is another trick that the researchers use to simplify the calculation: they drill K holes into the tensor network, which makes the computation far easier but decreases the fidelity by a factor of $ 2^{-K} $. I think this is also a fair alteration of the sampling problem, since Sycamore introduces errors as well, although unintentionally, of course.

The remaining problem is still highly difficult. The team manages to calculate all $ 10^8 $ amplitudes with a single tensor network contraction: the basic purpose of the sparse state is to multiply each gate, or contract each subtensor, starting from the bottom right of the circuit (qubit no. 53 of the final state) and then iteratively work upwards and towards the initial state. Each iteration step projects the subtensor onto only those product states / bit combinations that are included in the sparse state. For instance, if we contract a two-qubit gate on qubits no. 53 and 52, and in the sparse state only the combinations 01 and 10 exist for bits no. 53 and 52, then the remaining two combinations are dropped from the computation. This decreases the complexity of the sparse state calculation enormously compared to a full amplitude simulation, roughly by the ratio between the full $ 2^{53} $ strings and the $ \sim 10^8 $ strings of the sparse state.
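
A tiny illustration of this projection step (my own sketch, not the paper's code): if the sparse state contains only the combinations 01 and 10 for the two output bits of a gate, the other two rows of that gate simply never enter the contraction.

```python
# Projecting a two-qubit gate onto the bit combinations of the sparse state.
import numpy as np

ISWAP = np.array([[1, 0, 0, 0],
                  [0, 0, 1j, 0],
                  [0, 1j, 0, 0],
                  [0, 0, 0, 1]], dtype=complex)

# Suppose the sparse state contains only the combinations 01 and 10 for the
# two output bits of this gate (rows 1 and 2 in the |q52 q53> ordering).
allowed = [0b01, 0b10]
projected = ISWAP[allowed, :]          # shape (2, 4) instead of (4, 4)

# Whatever tensor sits "below" this gate in the network is now contracted
# against 2 output rows instead of 4; repeated over the whole circuit, this
# is where the large savings over a full amplitude simulation come from.
below = np.arange(4, dtype=complex)    # stand-in for the rest of the network
print(projected @ below, "instead of", ISWAP @ below)
```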

All in all, the Chinese team arrives at a fidelity of

$$
F_{XEB} = 0.0037
$$

in about 15 hours using 512 GPUs. They argue that using even more advanced hardware would finally beat Google's 200 seconds.

The future of quantum supremacy experiments

The paper by Pan, Chen and Zhang dissolves the 10,000-year estimate of Google's work for good. You could argue that there remains a quantum advantage regarding computing cycles or carbon footprint (as Scott Aaronson notes).

In my opinion, Google's result is still a milestone, maybe a less spectacular one by now. The current development was more or less expected by the team and the community. Also, we should get used to the fact that newly detected quantum advantages might be caught up by improvements on the classical side.

But what about the other part of the Google quote from above: “… but we also expect they [the classical simulations] will be consistently outpaced by hardware improvements on larger quantum processors.”?

The quantum algorithm could be improved in various ways by hardware advances. Most of all: better gate fidelity, which would allow greater circuit depth and better cross entropy fidelity scores. At some point, enhanced random guessing and drilling holes would probably no longer be good enough on the classical side. Of course, as always in quantum computing, more qubits would also help quite a bit.

Might there be some news in preparation? Alongside a preprint about surface code implementations, Google Quantum AI recently mentioned, almost in passing, a planar 72-qubit quantum processor, Sycamore's big brother v. I very much wonder whether the team is still on the quantum supremacy track …

Yet there is an intrinsic problem with random quantum circuit sampling. At some point, the heart of all these arguments, the cross entropy benchmark, will hit the quantum supremacy regime itself, making it impossible to verify the measured distribution on classical hardware. So at some point, conceptually new quantum supremacy experiments will probably be needed. This is related to a much more interesting question:

For which tasks will we be able to observe exponential quantum advantages in the near future?

I will deal with this question in a future post. So stay tuned …

Footnotes

i https://www.nature.com/articles/s41586-019-1666-5: „Quantum supremacy using a programmable superconducting processor“ by Google Quantum AI

ii https://arxiv.org/abs/2111.03011: „Solving the sampling problem of the Sycamore quantum circuits“ by Pan, Chen and Zhang from the Chinese Academy of Sciences

iii https://arxiv.org/abs/1608.00263: „Characterizing Quantum Supremacy in Near-Term Devices“ by Google Quantum AI

iv https://ion.nechita.net/wp-content/uploads/2021/01/Nechita-Qsup-jan-2021.pdf: „Mathematical aspects of Google’s quantum supremacy experiment“ by Ion Nechita. Since I could not come up with the explanation myself for why $ F_{XEB} = 1 $ for ideal circuits, this is the best / easiest explanation I have found. For a large Hilbert space dimension $ N = 2^n $, the ideal output probabilities follow the „Porter-Thomas distribution“ $ N e^{-Np} $, which leads to an easy integral and the result from above. If you have a simpler explanation, feel free to tell me.
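
Spelled out, the „easy integral“ goes roughly like this (my own paraphrase of that argument, with $ N = 2^n $): a measured bit string $ x $ occurs with probability $ p(x) $, so the expected ideal probability of a sampled string is $ \sum_x p(x)^2 $, and with the Porter-Thomas density this becomes

$$
\sum_x p(x)^2 \approx N \int_0^\infty p^2 \, N e^{-Np} \, dp = \frac{2}{N},
\qquad \text{hence} \qquad
F_{XEB} \approx N \cdot \frac{2}{N} - 1 = 1.
$$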

v https://arxiv.org/abs/2207.06431: „Suppressing quantum errors by scaling a surface code logical qubit“ by Google Quantum AI
