Does Quadratic Quantum Speedup Have a Problem?

As quantum computing evolves, the community is focusing on the next steps towards fault tolerance. Recent estimates of the overhead of error-correcting codes, under reasonable hardware assumptions, come to unfavorable conclusions for quadratic quantum speedups. As this might affect important use cases for quantum computing, we should take a closer look at these results.


Google and the problem with quadratic quantum speedups

As quantum computing evolves, the community is starting to focus on the next steps towards fault tolerance. First trial implementations of error-correcting codes, such as the surface code, have been executed successfully, and companies are starting to work out industrially relevant use cases beyond NISQ in detail. These developments highlight an aspect that is often overlooked: the computational overhead of fault tolerance. In this light, Google’s Quantum AI team published a 2021 study, “Focus beyond quadratic speedups for error-corrected quantum advantage” i, in which they provide some “back of the envelope” calculations and come to unfavorable conclusions for the quantum case. They roughly estimate the runtime of quantum algorithms with a small polynomial speedup and ask: how long would such an algorithm need to run in order to beat the classical runtime? They cast the problem in the following generic form:

Consider a quantum algorithm that repeatedly executes a quantum primitive M times (e.g. an oracle for optimization). If the runtime of each round is $ t_Q $, the total runtime is:

$$
T_{total} = M \cdot t_Q
$$

If the classical algorithm needs to execute the same primitive $ M^d $ times (e.g. $ d = 2 $ for a quadratic speedup as in Grover search), the total classical runtime is:

$$
T_{total} = M^d \cdot t_C
$$
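
The breakeven point follows directly from these two expressions: setting them equal, $ M \cdot t_Q = M^d \cdot t_C $, and solving for $ M $ gives

$$
M^* = \left( \frac{t_Q}{t_C} \right)^{\frac{1}{d-1}}, \qquad T_{breakeven} = M^* \cdot t_Q = t_Q \left( \frac{t_Q}{t_C} \right)^{\frac{1}{d-1}}
$$

For a quadratic speedup ($ d = 2 $) this reduces to $ T_{breakeven} = t_Q^2 / t_C $: the quantum computer only wins on computations that run longer than this.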

The central question is: how large are $ t_Q $ and $ t_C $? They argue that the quantum runtime of such algorithms is mostly governed by costly, error-corrected Toffoli gates, the reversible quantum analog of classical NAND gates. Toffoli gates are not “native” to the surface code and require the generation, or “distillation”, of so-called magic states from a number of noisy states.
Considering state-of-the-art surface code implementations and reasonable assumptions for superconducting quantum computers, they predict an execution time of 170 µs per Toffoli gate (they also discuss the case of trapped ions). How many Toffoli executions can we expect until we achieve a quantum advantage? For a Grover search implementation for optimization, Google’s team reasons that a typical oracle requires on the order of $ N $ Toffolis ($ N $ being the number of qubits), and that $ N = 100 $ is roughly the smallest problem size interesting for quantum advantage. Thus

$$
t_Q = 100 \cdot 170\ \mu s = 17\ ms
$$

On the classical side they give an estimate of

$$
t_C = 33\ ns
$$

Such a quantum algorithm would have to run for about 2.4 hours in order to break even with the classical algorithm. Of course, a Grover search over just 100 items would terminate after about $ \sqrt{100} = 10 $ iterations, but these are just estimates to give an impression. Google’s team also explains that considering even larger $ N $ (and more interesting problems) would be even less favorable for the quantum side. I guess you could also argue for sequentially executing many $ N = 100 $ problem instances until you reach the breakeven time.

Now, 2.4 hours seems like good news – until you consider multiple CPU cores and a classical algorithm that exploits parallelization (which is easy for optimization). Even worse: in order to run the surface code that error-corrects the quantum computer, you need classical processing as well. And you need lots of it: Google’s team speaks of “thousands of classical CPUs”. If you grant the classical side an additional 3000 CPUs too, the breakeven time increases to one year!
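
To make these numbers tangible, here is a minimal Python sketch of the back-of-the-envelope model above. It is my own illustration, not code from the paper; only the 170 µs, 33 ns and 3000-core figures come from the discussion above, the rest is plain arithmetic.

```python
# Back-of-the-envelope breakeven estimate (illustration, not code from the paper).
# A quantum algorithm makes M calls at t_q seconds each; the classical
# competitor makes M**d calls at t_c seconds each (d = 2: quadratic speedup).

def breakeven_seconds(t_q, t_c, d=2, classical_cores=1):
    """Runtime at which M * t_q equals M**d * (t_c / classical_cores)."""
    t_c_eff = t_c / classical_cores            # assumes perfect classical parallelization
    m_star = (t_q / t_c_eff) ** (1 / (d - 1))  # number of calls needed to break even
    return m_star * t_q

T_Q = 100 * 170e-6  # ~17 ms per quantum call: 100 Toffolis at 170 microseconds each
T_C = 33e-9         # ~33 ns per classical call

print(breakeven_seconds(T_Q, T_C) / 3600)              # -> ~2.4 (hours)
print(breakeven_seconds(T_Q, T_C, classical_cores=3000)
      / (3600 * 24 * 365))                             # -> ~0.8 (years)
```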

BTW, they also compare annealing algorithms on instances of the Sherrington-Kirkpatrick model, with even worse results for the quantum computer. All this looks very disappointing for quadratic speedups, so let’s hope for significantly better error-correcting codes and quantum hardware. Google’s team also considers the cases $ d = 3, 4 $ and obtains drastically better results for the quantum algorithms, especially for quartic speedups. Unfortunately, there aren’t that many such algorithms, and most known speedups for optimization are at most quadratic ii.
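
Plugging cubic and quartic speedups into the toy helper from the sketch above illustrates just how dramatic the difference is (again an illustration under the same assumptions, not the paper’s own numbers):

```python
print(breakeven_seconds(T_Q, T_C, d=3))  # -> ~12 (seconds)
print(breakeven_seconds(T_Q, T_C, d=4))  # -> ~1.4 (seconds)
```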

Google’s team states that their results, and similar ones, are already “folk wisdom” in the small community that studies error correction. Indeed, Microsoft’s Matthias Troyer, for instance, has formulated similar concerns in the past iii.

A detailed study by Montanaro et al.

Already in 2019, Montanaro et al. published a much more detailed resource estimation on the same subject, “Applying quantum algorithms to constraint satisfaction problems” iv. Again, at first sight, the researchers calculate very promising prospects for best-case random instances of the classes 12-SAT, 14-SAT and graph coloring, which are among the hardest combinatorial optimization problems out there. Montanaro et al. even compared the quantum algorithms against leading classical algorithms for the same task. Notably, they achieved their performance results in part by trading space for time, which results in a very large overhead in physical qubits. You may have noticed that Google’s estimates assume the Toffoli gates are executed sequentially. Google’s team does mention the option of performing magic state distillation for several Toffoli gates in parallel, but argues that, due to the runtime of the involved Clifford gates and the routing overhead, this can only accelerate the quantum algorithm by a factor of 10 to 100.

But even Montanaro et al. conclude, and again they work this out in great detail: once the classical processing overhead of the surface code is taken into account, this quantum advantage vanishes. BTW, Google’s team states that error correction needs to improve by roughly three orders of magnitude to enable favorable results, which would require significant breakthroughs. Let’s hope that this is not the end of the story. Nevertheless, it once more emphasizes the role of exponential quantum speedups.

Exponential Quantum Advantage: Aaronson’s state of the union

If you wonder what Scott Aaronson, one of quantum computing’s most renowned scientists, has to say about the current state of exponential quantum speedups, you just need to look at the transcript of his talk at the Solvay Physics Conference in Brussels, May 2022. It has been among the top five quantum computing papers on SciRate for the past year and is highly informative v. In it, Aaronson also mentions Google’s work discussed above and the importance of exponential asymptotics for reaching the breakeven time quickly compared to classical algorithms. In some sense it reminds me of his “QML – Read the fine print” paper, because it is very readable and kind of … well, relentless. But you should really read it yourself.

In this light, it is reasonable to ask how far the boundaries of known, super-fast quantum algorithms can be pushed. In my next article, I will discuss this in more detail for quantum computing’s arguably most central algorithm – Shor’s algorithm:

From Shor to the Quantum Hidden Subgroup Problem – A jumpstart

Footnotes

i https://arxiv.org/abs/2011.04149: “Focus beyond quadratic speedups for error-corrected quantum advantage” paper by R. Babbush, J. McClean, M. Newman, C. Gidney, S. Boixo, and H. Neven

ii https://www.youtube.com/watch?v=1-2LIopvNIk: “Quantum Algorithms for Optimization”, talk by Ronald de Wolf at the Quantum Colloquium of the Simons Institute. He introduces various strategies for fault-tolerant quantum computers. In the following panel discussion, with Edward Farhi present, the case of QAOA is also discussed. BTW, de Wolf also mentions a simpler kind of argument for the topic of this section, the problem with quadratic speedups.

iii https://www.youtube.com/watch?v=WY3htdKUGsA: “Towards Practical Quantum Advantage”, talk by Matthias Troyer at the Quantum Colloquium of the Simons Institute, in which he roughly estimates the overhead of quantum error correction to be a constant factor of 10 to 12 orders of magnitude.

iv https://arxiv.org/abs/1810.05582: “Applying quantum algorithms to constraint satisfaction problems”, paper by E. Campbell, A. Khurana and A. Montanaro, which calculates detailed resource estimates for fault-tolerant computation in comparison with the best classical algorithms.

v https://arxiv.org/abs/2209.06930: “How Much Structure Is Needed for Huge Quantum Speedups?” Transcript of Scott Aaronson’s talk at the Solvay Physics Conference in Brussels, May 2022.
