Challenging the “Exponential Quantum Advantage Hypothesis” for Quantum Chemistry

Ever since Feynman’s statement “Nature isn’t classical, dammit”, there has been general agreement in the quantum computing community that fault-tolerant quantum computers will generically provide an exponential quantum advantage in quantum chemistry for problems of practical interest. A well-known quantum chemist from Caltech now challenges this “Exponential Quantum Advantage Hypothesis” for a most important class of problems – we would be well advised to pay attention to his insights.

[Image: symbolic sketch for quantum chemistry – image by Irene from Pixabay]

Quantum computing and the “Exponential Quantum Advantage Hypothesis” (EQA) for chemistry

Quantum computing has several great use cases of tremendous public interest. Yet the predicted benefit is not well established for all of them. But there is one field for which an exponential quantum advantage seemed certain, no matter what – the purpose for which quantum computers were proposed in the first place: the simulation of nature at its fundamental level, and its very prominent industrial version, quantum chemistry. Granted, we are not sure whether this advantage can already be achieved in the current NISQ era. But even if algorithms like the variational quantum eigensolver will not do the job, then surely fault-tolerant quantum computing will.

This is not just my expectation. It is a general agreement that has almost the status of a fact in the quantum computing community – rock solid and absolutely hype-free. And come on, why not? We know that nature at this level mathematically lives in an exponentially large Hilbert space. We have known since Seth Lloyd’s seminal paper “Universal Quantum Simulators” that quantum dynamics is of polynomial complexity on a quantum computer and thus efficiently solvable. And even for calculating the energy levels of quantum systems, we know a generic algorithm with polynomial complexity: quantum phase estimation.

A Quantum Chemist from Caltech

Recently, I was very surprised to hear a different view on this matter in the Quantum Colloquium of the Simons Institute (organized by Umesh Vazirani, one of quantum computing’s “fathers”). In his talk, Garnet Chan from Caltech discusses the somewhat bold question “Is There Evidence of Exponential Quantum Advantage in Quantum Chemistry?” i. As often in this series, the talk is followed by a panel discussion with other leading scientists ii – but more about this later. And of course there is also a preprint accompanying Chan’s talk iii.

Now, Chan is a respected scientist at a leading university, and he knows both worlds very well: his main domains are theoretical quantum chemistry and quantum many-body physics, but he has also worked on some interesting quantum computing papers together with Google’s and IBM’s quantum teams (e.g. iv, v). As Vazirani states in his introduction: “I am particularly looking forward to Garnet’s talk … he has been the one who has been most clear in expressing the big picture as well as the subtleties of the problems of simulating quantum chemistry”.

I find it extremely promising that such experts are starting to get really serious about quantum computing. It drives the research in an industrially relevant direction and forces quantum computing to start proving itself in practice. What I also find remarkable about his preprint on the EQA hypothesis is its list of co-authors: among them are leading quantum computing scientists like John Preskill himself and Ryan Babbush from Google.

So we would be well advised to pay attention to his insights and his opinion.

Quantum Chemistry: The ground state energy problem

As Chan explains, among the various questions one can ask in quantum chemistry, probably the most central one is that of the ground state energy of an electronic Hamiltonian. The reason is that one can deduce many things from it:

      • If you calculate the energy for two different configurations of molecules, the difference will be the reaction energy

      • If you do the same for two different phases of matter, you can study thermodynamic stability

      • If you calculate the energy as a function of the nuclear coordinates, you get the potential energy surface for analyzing catalysis and reaction cascades

      • If you study the energy in terms of externally generated perturbations you can analyze aspects of spectroscopy

The textbook algorithm for calculating this ground state energy is quantum phase estimation (QPE). It efficiently decomposes any trial state into a sum over eigenstate/eigenvalue pairs of the Hamiltonian, weighted by the overlap of the trial state with each eigenstate. A measurement of the eigenvalue register collapses the superposition onto a single pair. The probability that this is the ground state depends on the mentioned overlap.

In this procedure the exact Hamiltonian is discretized with a “basis set”, a more or less generic set of states / orbitals of size L, which relates to the system size. The general expectation is that this yields a generic method to calculate the ground state energy of interesting chemical systems with an exponential advantage in L over classical methods.
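The measurement statistics of idealized QPE can be mimicked classically for a tiny system: a random Hermitian matrix stands in for the (discretized) Hamiltonian, and the probability of collapsing onto eigenstate k is the squared overlap of the trial state with that eigenstate. This is only a minimal sketch of the sampling behavior, not of the QPE circuit itself; the matrix and trial state are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for a discretized electronic Hamiltonian
# (a real problem would live in a 2^L-dimensional space).
dim = 16
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2  # symmetrize -> Hermitian

# Arbitrary normalized trial state.
psi = rng.normal(size=dim)
psi /= np.linalg.norm(psi)

# Idealized QPE samples eigenvalue E_k with probability |<k|psi>|^2.
energies, states = np.linalg.eigh(H)       # eigenvalues in ascending order
probs = np.abs(states.T @ psi) ** 2        # measurement distribution

p_ground = probs[0]  # chance that the register collapses onto the ground state
print(f"squared ground-state overlap: {p_ground:.4f}")
print(f"expected repetitions ~ 1/p:   {1 / p_ground:.1f}")
```

The run count needed to hit the ground state scales inversely with this probability, which is exactly where Chan's argument attacks.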

Quantum Advantage: The problem with the initial overlap

Chan points out that the weakness of QPE is certainly not its polynomial circuit size. The problem is the overlap S of the initial trial state with the true, unknown ground state. This overlap controls how often we need to execute the circuit in order to hit the ground state rather than some other eigenstate of the system. Thus, the complexity of the quantum algorithm, including all necessary repetitions, scales like:

poly(1 / S)

(Here I leave out other dependencies, as they are irrelevant for this article.)

By the way, according to Chan, post-QPE methods exist, but they show the same behavior.

If we choose some arbitrary quantum state, we should expect an overlap that shrinks as the size of the Hilbert space increases. Since that size scales exponentially with the system size L, we are left with an exponentially small overlap. This destroys the efficiency of the QPE strategy.

More specifically: if the initial state is chosen as a product state of L non-interacting subsystems and each subsystem has a local overlap of $ s < 1 $ with the corresponding part of the ground state, then the global overlap becomes exponentially small:

$ S = s^L \rightarrow 0 $ for large $ L $
Thus, in order to achieve efficient, polynomial quantum performance, we need an initial state with good overlap, which nevertheless mismatches the ground state in terms of its energy. If it matched in both respects, it would already be a good approximation of the ground state in the first place, and we would not need to bother calculating.
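The product-state argument is easy to make quantitative: even an optimistic local overlap per subsystem gets crushed once it is raised to the power L. The value s = 0.95 below is a hypothetical choice for illustration.

```python
# Orthogonality-catastrophe toy calculation behind Chan's argument:
# a product trial state with local overlap s < 1 per subsystem has
# global overlap S = s**L, which vanishes exponentially in L.
s = 0.95  # hypothetical, rather optimistic local overlap
for L in (10, 50, 100, 200):
    S = s ** L
    print(f"L = {L:3d}:  S = {S:.3e},  1/S = {1 / S:.3e}")
```

Since the repetition count scales like poly(1/S), even a 95 % local overlap renders QPE useless well before L = 200.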

So the big question for Chan is: Can we generically construct such initial states for systems where classical methods fail (estimating the ground state energy efficiently)?

The search for the right initial states in quantum chemistry

According to Chan, there is currently no indication that we can construct such states. The argument above about initial product states also applies to single Slater determinant states. These are more or less the electronic, fermionic version of generic product states, respecting the fermionic anti-commutation relations of multi-electron systems (hence the trick with the determinant).
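For real orbitals, the overlap of two Slater determinants reduces to a determinant of the orbital overlap matrix, ⟨Φ₁|Φ₂⟩ = det(C₁ᵀC₂). The sketch below uses random orthonormal orbitals (an assumption for illustration, not Chan's actual Fe–S data) to show how such overlaps tend to shrink as the electron count grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orbitals(n_basis, n_elec):
    """Random orthonormal orbital coefficients (one column per electron)."""
    q, _ = np.linalg.qr(rng.normal(size=(n_basis, n_elec)))
    return q

def slater_overlap(C1, C2):
    """Overlap of two Slater determinants with real orbital sets C1, C2."""
    return np.linalg.det(C1.T @ C2)

n_basis = 40
for n_elec in (2, 5, 10, 20):
    C1 = random_orbitals(n_basis, n_elec)
    C2 = random_orbitals(n_basis, n_elec)
    print(f"{n_elec:2d} electrons: |<Phi1|Phi2>| = {abs(slater_overlap(C1, C2)):.2e}")
```

Each electron contributes one more factor ≤ 1 inside the determinant, which is the determinant-flavored version of the s**L decay above.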

Another candidate is adiabatic state preparation: slowly adding a perturbation to some initial Hamiltonian and its trial ground state. This method seems promising, but according to Chan it relies on heuristics and thus fails the original requirement of being a generic method.

Now, you can improve any type of state preparation by exploiting known properties of the specific system (like its symmetries) and physical intuition. But, as he argues, the same applies to any classical method, leading to improvements on the classical side as well.

Chan backs his claims with a few numerical simulations. Specifically, he calculates the overlap of the “best” Slater determinant with a classically estimated or extrapolated ground state for different iron–sulfur (Fe–S) clusters, including the famous FeMo cofactor of nature’s nitrogenase. The qubit count for mapping the basis onto a quantum register ranges from 40 to 154 qubits. In his simulations the overlap indeed drops sharply as this count increases, and improving the Slater determinant ansatz does not change this significantly.

Evidence for classical hardness for a Quantum Advantage?

As for the other side of the EQA hypothesis, he asks whether there is indeed evidence of exponential hardness for classical methods, and gives some counterexamples. The first class of such systems has “single-reference” character, and he gives some reasoning why these are representative of large parts of chemical space (after all, the preprint runs to 77 pages). In single-reference systems, a single Slater determinant qualitatively describes the ground state locally, and the energy can be estimated well by classical methods such as coupled cluster.

The other class of systems he discusses is the Heisenberg model. This is a model for quantum magnets, but according to Chan it also serves as a representative model for correlated materials. He names several efficient classical methods that can address these systems (such as tensor networks / PEPS, sign-free quantum Monte Carlo and, to some extent, DMRG).
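To make the model concrete: a spin-1/2 Heisenberg chain can be built and solved exactly for small sizes with nothing but dense linear algebra. This brute-force diagonalization is of course not one of the efficient methods Chan names (those scale to large systems); it is just a minimal sketch of what the Hamiltonian looks like.

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators (hbar = 1).
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    ops = [I2] * n
    ops[i] = op
    return reduce(np.kron, ops)

def heisenberg(n, J=1.0):
    """Open Heisenberg chain H = J * sum_i S_i . S_{i+1}."""
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for i in range(n - 1):
        for s in (sx, sy, sz):
            H += J * site_op(s, i, n) @ site_op(s, i + 1, n)
    return H

n = 8
E0 = np.linalg.eigvalsh(heisenberg(n))[0]
print(f"antiferromagnetic ground-state energy per site (n={n}): {E0 / n:.4f}")
```

Exact diagonalization dies at a few dozen spins because the matrix dimension is 2**n, which is precisely where DMRG, PEPS and sign-free Monte Carlo take over.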

The Quantum Computing Colloquium of the Simons Institute

If you haven’t done so already, I definitely recommend checking out the YouTube channel of the Quantum Colloquium vi. It started in the middle of the pandemic and is the source of many highly interesting talks by well-known speakers, with a high-class audience, usually followed by a panel discussion with top scientists. I am myself a co-organizer of a meetup group on quantum computing, and I have to admit: if your name is Umesh Vazirani, you have a few more options for how to organize these things.

What I found very remarkable in the discussion afterwards was the degree of agreement among the panelists with Chan’s talk. I also learned that if you are a computer scientist, even a top one, you argue very cautiously when it comes to quantum chemistry. I guess this is just very respectable. The panelists agree with Chan that a solution to the overlap problem in state preparation is completely unknown.

Nathan Wiebe agrees with all the data while coming to different conclusions, because the requirements for the quantum and classical calculations need to be specified further. Indeed, as Chan and co-author Lin Lin also point out, classical heuristics run in polynomial time without necessarily achieving a specific accuracy, while the calculation error in quantum computing can be efficiently controlled.

Wiebe also doubts that conclusions about the asymptotics can be drawn from small-size examples.

Birgitta Whaley, a chemist and Vazirani’s colleague from Berkeley, points out that multi-reference systems (for which multiple Slater determinants play a role) and conical intersection problems of potential surfaces definitely show exponential hardness for classical methods. Also, for “non-orthogonal reference states” (I guess meaning: reference states expanded in non-orthogonal orbitals), coupled cluster expansions tend to continue forever.

She and Isaac Chuang advocate looking for other problems in quantum chemistry that are better suited to quantum computers. Whaley specifically names spin ordering, which may be formulated as a decision problem (though not a binary one).

What about Quantum Advantage for quantum computing?

What is the conclusion from all of this? Despite everything, even Chan expects that quantum computers will open the door to answering many interesting questions, and he can’t wait to get fault-tolerant quantum computing into his hands. While he questions a generic exponential advantage, he assumes that there will be plenty of room for drastic polynomial advantages. After all, every improvement in classical quantum chemistry has been a polynomial one. Among the panelists there are also doubts whether asymptotic complexity is the right tool to study quantum advantage.

In the end, I find this reassuring. Indeed, even for the ground state energy problem, there have been detailed resource estimations of the related quantum circuits in the past which predict a quantum advantage – in wall-clock time. One example is the famous FeMo cofactor of nitrogenase (by the way: the initial paper and some follow-up work were co-authored by panelist Nathan Wiebe) vii. Another nice example is a recent thorough investigation by Google Quantum AI and Boehringer Ingelheim of the class of cytochrome P450 enzymes viii.

So I think there is no reason to doubt quantum advantage – especially for quantum chemistry. But perhaps we should be more careful with statements about exponential speedups – even for quantum chemistry.


i “Is There Evidence of Exponential Quantum Advantage in Quantum Chemistry?” – talk by Garnet Chan (Caltech) at the Quantum Colloquium of the Simons Institute.

ii Panel discussion on Garnet Chan’s talk with various leading scientists.

iii “Is there evidence for exponential quantum advantage in quantum chemistry?” – preprint by Garnet Chan et al.

iv “Simulating challenging correlated molecules and materials on the Sycamore quantum processor” – paper by Chan and Google AI.

v “Quantum algorithms for quantum chemistry and quantum materials science” – paper by Chan and IBM’s quantum group.

vi YouTube channel of the Quantum Colloquium / Simons Institute.

vii “Elucidating Reaction Mechanisms on Quantum Computers” – the seminal FeMoCo paper by ETH Zürich and Microsoft, which started extensive research. A more recent advance regarding the molecule was published by Google in “Even more efficient quantum computations of chemistry through tensor hypercontraction”.

viii “Reliably assessing the electronic structure of cytochrome P450 on today’s classical computers and tomorrow’s quantum computers” – paper by Google Quantum AI and Boehringer Ingelheim.