The title of this post is taken from a recent interesting lecture (judging from the slides) by Scott Aaronson at Columbia University. The lecture explored a wide range of topics at the intersection of physics, computation, and philosophy. In this post, I will discuss, from my perspective, several of the ideas raised in Scott’s talk—beginning with the role of computational complexity reasoning to restrict physical theories.
I will then turn to the meaning of probability in our physical world and to various interpretations of quantum mechanics, including my own. I will also discuss Craig Gidney’s view on what it would take to prove me wrong, quote David Deutsch’s 1997 challenge about the relationship between the Many-Worlds Interpretation and Shor’s algorithm, and touch on a few other related themes.
Complexity as Armor for Penalizing Physical Hypotheses
Let me offer a few comments on one of the items in Scott’s lecture:
Scott asks: Should we penalize physical hypotheses not only for high descriptive complexity, but also for computational complexity? His view is:
“[We] can’t be too dogmatic about this, or we’d rule out quantum computing—and arguably quantum mechanics itself! (As indeed many quantum computing skeptics do.) On the other hand, ‘no fast solution to NP-complete problems’ feels not that dissimilar to ‘no superluminal signaling’ or ‘no perpetual motion.’”
Four computational-complexity insights offered to rule out quantum computation
My general view on this matter is quite similar. Regarding skepticism about quantum computing, I am aware of four computational-complexity insights that have been offered to rule out scalable quantum computation (or certain fragments of it):
- Efficient factoring is such science fiction that the rational conclusion is that quantum computation cannot work. (Levin, Goldreich, and others since the mid-1990s.) A common response is that factoring is not such a big deal—it is low in the computational-complexity hierarchy, and some even believe it lies in P.
- Precise sampling according to the values of permanents—a task beyond the polynomial hierarchy—is such science fiction that, even if one accepts efficient factoring (classically or quantumly), the rational conclusion remains that quantum computation cannot work. (This idea appeared around 2010.)
- NISQ computers are such primitive classical computational devices that they cannot be used to achieve “quantum supremacy.” (Kalai–Kindler, 2014; and subsequent works by me.) This argument relies on standard noise modeling with constant noise levels. Even under a wide range of subconstant noise levels, the resulting samples consist of inherently noise-sensitive components combined with computationally trivial ones. Guy Kindler and I described a complexity class, LDP, which captures the sampling power of NISQ devices; a schematic version of the underlying damping mechanism is sketched after this list.
- NISQ computers (and the complexity class LDP) are also too primitive to produce high-quality quantum error-correcting codes, thereby breaking the **chain reaction** needed for quantum fault tolerance. (Kalai, following item 3.)
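To make the mechanism behind items 3 and 4 slightly more concrete, here is a schematic sketch of how standard noise damps the Fourier expansion of an ideal output distribution. This is my simplified rendering of a textbook fact from the analysis of Boolean functions, not the precise Kalai–Kindler statement. For a function $f:\{-1,1\}^n \to \mathbb{R}$ with Fourier expansion $f = \sum_{S \subseteq [n]} \hat{f}(S)\chi_S$, applying the standard noise operator $T_\rho$ (each coordinate is independently rerandomized with probability $1-\rho$) gives

$T_\rho f = \sum_{S \subseteq [n]} \rho^{|S|}\,\hat{f}(S)\,\chi_S .$

A constant noise level therefore suppresses the degree-$k$ part of $f$ by a factor $\rho^{k}$: the high-degree, noise-sensitive part is washed out, and what survives is approximated by low-degree polynomials. This is the heuristic picture behind the class LDP mentioned in item 3.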
I think all four arguments are fairly strong—though surely not ironclad. The advantage of the last two (closely related) points is that they exclude not only far-future possibilities but also some near-term experimental hopes and even certain current experimental claims. Another advantage of items 3 and 4 is that (if correct) they have physical consequences that are much closer to physics than integer factorization and permanent sampling, including consequences for the interpretation of quantum mechanics, which is another item in Scott’s lecture.
The first two items assert that the theoretical model of (noiseless) quantum computation represents computational miracles that are hard to believe. The last two items assert that in the intermediate regime, before quantum fault tolerance kicks in, noisy quantum computers are no more than a primitive kind of classical computer, incapable of any computational miracles.
Complexity and Noise Sensitivity as Armor for Penalizing Physical Hypotheses
Note that in items 3 and 4, we rely not only on computational complexity (that is, on the assumption of no computational miracles) as an armor for penalizing physics hypotheses, but also on the notion of noise stability—the idea that effective theories must be robust under small perturbations or noise. Related guiding principles have also been proposed in biology.
The Meaning of Probability and My Interpretation of Quantum Mechanics
Another item in Scott’s lecture concerned interpretations of quantum mechanics, particularly the Many-Worlds Interpretation. Coming from mathematics (and having been educated by Greg Kuperberg), I have always felt quite comfortable with the formalism of Hilbert spaces and unitary evolutions—and I have never fully understood the need for additional interpretations.
A question that I have found exciting is the meaning of probability in physics. Does probability merely represent human uncertainty? Is it just an emergent mathematical concept that is convenient for modeling? Do matters change when we move from classical to quantum mechanics? We have discussed this question in several earlier posts (here, here, and here), and it is related to the question of the interpretation of quantum mechanics. Here are two prevalent views, followed by my own:
- (Copenhagen, MWI) Probability is inherent: the outcomes of the experiment are inherently probabilistic.
- (Bohm) Probability is not inherent: the outcomes of the experiment are determined and are encoded somewhere in the universe.
- (GK) Probability is inherent, and noise sensitivity is inherent: the outcomes of measurements arising from complex quantum processes are both inherently probabilistic and inherently noise sensitive.
Inherent noise sensitivity means that the outcomes cannot be described or approximated by any fixed probability distribution.
It is commonly accepted that determinism in the Newtonian view of the physical world has been replaced by “probabilistic determinism,” in which future events are described precisely by a probability distribution conditioned on past events. (Some interpretations of quantum mechanics even claim that the exact outcomes themselves—not just their probabilities—are encoded in the physical universe.)
In contrast, I propose that the outcomes of complex quantum processes are inherently noise sensitive—that is, they cannot be described or even approximated by a probability distribution. This applies to complex quantum evolutions and, in fact, the required complexity need not be extreme: even the outcomes of random quantum circuits on 12 qubits, of the type used by Google, are inherently noise sensitive.
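One way to make “cannot be approximated by any fixed probability distribution” a bit more formal (this is my own paraphrase, offered only as a sketch) is the following: the distribution $D_\theta$ of outcomes depends on uncontrolled noise parameters $\theta$ that fluctuate from run to run, and there is a constant $c>0$ such that for every fixed distribution $Q$ on the sample space, $d_{\mathrm{TV}}(D_\theta, Q) > c$ for typical $\theta$. In other words, the samples do not stabilize around any single target distribution; different runs, governed by slightly different effective noise, land on substantially different distributions.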
Of course, the hypothesis of inherent noise sensitivity would be refuted by a successful realization of quantum fault tolerance.
In my paper Quantum Computers, Predictability, and Free Will I attempt to connect inherent noise sensitivity with philosophical questions of determinism, predictability, and free will. (See also this post.)
Here are three interesting posts on “Shtetl Optimized” about QM interpretations: The Zen Anti-Interpretation of Quantum Mechanics (2021); Interpretive cards (MWI, Bohm, Copenhagen: collect ’em all) (2018); Why Many-Worlds is not like Copernicanism (2012). Scott often says that he agrees with any given interpretation when it comes to its criticism of the other interpretations.
What Would It Take to Prove Me Wrong? — Craig Gidney’s Take
In the Columbia lecture and in many other lectures over the past two decades, Scott expressed his hope to prove the skeptics of quantum computation wrong, a task regarded by him as the most important application of quantum computers. So, what would it take to prove me wrong?
This was the topic of a 2023 question by Tristan Nemoz on a quantum computing Q&A forum, and Craig Gidney offered the following response:
“…personally I’ll be waiting for one-in-a-trillion error rates on complex states before saying the physical-noise-is-conspiring position has become indefensible.”
Craig’s proposed threshold sounds rather fantastic—and remains utterly remote from current experimental reality. Yet it seems roughly in the ballpark of what would be required to implement Shor’s algorithm for factoring large integers. (Indeed, Gidney—of the Google Quantum AI team—is among the world’s leading experts on what Shor’s algorithm demands in practice. His answer above reflects his personal view.) I added a comment to Craig’s answer.
From Scott Aaronson’s Columbia (top) and FOCS 2025 (bottom) lectures.
Other Meeting Points Between Computational Complexity and Physics, and the Interface Between Theory and Practice
Several other topics from Scott’s lecture also invite rich discussion—for example, the Harrow–Hayden proposal for resolving the firewall paradox, and the broader connections between thermodynamics, computation, and complexity. (Topics not mentioned this time—such as links to quantum field theory and to computational aspects of high-energy physics—are also closely related to this general theme.)
Many of these topics touch on questions central to the possible reality of a “world devoid of quantum computation”—a theme I have explored extensively over the years.
I discussed these and other places where physics meets computational complexity in
- Section 8.2 of my 2016 paper The Quantum Computer Puzzle (Expanded Version), and
- Section 8 of my 2024 paper The Argument against Quantum Computers, the Quantum Laws of Nature, and Google’s Supremacy Claims.
The discussion of quantum computation and physics also connects to a broader issue: the relationship between theory and practice across disciplines. In my 2018 ICM paper, Three puzzles on mathematics, computation, and games, I examined three areas where theoretical computer science meets practical reality—and where deep mathematical ideas illuminate that interface.
Tensions between computational complexity and physics—or even between different insights within physics—are often related to fascinating mathematical questions. This is also the case for tensions between theory and practical reality in other areas of computer science. (See this post for a discussion based on Moshe Vardi’s perspective.)
Questions about the gap between theory and reality arise in many other domains as well: Is it really impossible for two rational agents to agree to disagree? Does game theory tell us what policy to choose in cases of hostages held by terrorists? Does economic theory give a clear choice between the economic system of the US and the social-democratic systems of Western Europe?
Complexity as Armor for Protecting Physical Hypotheses
Our discussion began with the question of whether miraculous computational-complexity powers could limit or even invalidate physical hypotheses. Scott also raised a kind of dual question:
Philosophical Problem: Would a contradiction in physics be OK if it took exponential time to uncover it?
David Deutsch’s 1997 Challenge
In his lecture, Scott presented an intriguing 1997 quote from David Deutsch, offering a philosophical argument for why the Many-Worlds Interpretation (MWI) is forced by Shor’s algorithm.
A small anecdote: a few years ago, I attended a very interesting MWI conference at Tel Aviv University. At one point, I asked Lev Vaidman—a leading advocate of the MWI—why we cannot simply be satisfied with the mathematical formalism of quantum mechanics, without adding an interpretation on top of it. Lev’s answer was that the mathematical formalism is sufficient—and that this is precisely what the MWI teaches us. 🙂
The following quote is from Deutsch’s 1997 book “The Fabric of Reality”.
“To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?”
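A quick back-of-the-envelope reading of Deutsch’s numbers (my own arithmetic, not part of the quote): since $\log_2 10^{500} = 500/\log_{10} 2 \approx 1661$, a superposition of $10^{500}$ computational paths corresponds to roughly $1660$ qubits. In Shor’s algorithm the period-finding register for an $L$-bit integer uses about $2L$ qubits, so on one natural reading Deutsch’s figure corresponds to factoring an integer of roughly $800$ bits (about $250$ decimal digits).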
Mark Spinelli gave a thoughtful comment on Scott’s blog about the evolution of Deutsch’s point of view in the late 90s.
David Deutsch first asked his “Where does the calculation occur?” question even back in his 1985 paper, where he ponders how a quantum computer could sometimes be used to more efficiently predict the stock market. His later 1997 question posed in *The Fabric of Reality* changed the game from the rather cute Deutsch’s algorithm to the much, much more dramatic Shor’s algorithm.
It’s interesting to observe this development. His 1985 posting seems like an earnest and humble rhetorical question, that is, as an invitation to debate. His later, post-Shor ’94 question seems more assertive and dispositive in shifting the burden to those that challenge MWI.
But even still, is not the answer to the question of “where was the work done”, in Deutsch’s and Shor’s algorithm, “in Hilbert space!”?
In my view, it is a serious possibility that implementing Shor’s algorithm is fundamentally impossible, and it is interesting if this possibility indeed weakens the case for the Many-Worlds Interpretation.
Two More Items
The interpretation of “interpretation”, Preskill’s 2013 Summary of What Could Cause Scalability to Fail as a Matter of Principle — and My 2025 View
Summarizing the state of the quantum computing debate in 2013, John Preskill wrote:
For scalability to fail as a matter of principle then, either quantum mechanics must fail for complex highly entangled systems (as ’t Hooft [8] has suggested), or else either the Hamiltonian or the quantum state of the world must impose noise correlations that overwhelm fault-tolerant quantum protocols (as Alicki et al. [9, 10, 11] and Kalai [12, 13, 14, 15, 16] have suggested).
Adopting John’s framework, my current (2025) position is the following:
The Hamiltonian or the quantum state of the world impose a rate of noise that prevents high-quality quantum error correction at the NISQ scale—and therefore also prevents quantum fault tolerance (and Shor’s algorithm) at larger scales. (Noise correlation is important, but a too-large noise rate is the main phenomenon.) This constitutes a law of physics derived from computational complexity considerations.
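To put this position in slightly more quantitative terms (using a standard heuristic formula for surface-code-style fault tolerance, not anything taken from Preskill’s text): logical error rates are expected to be suppressed roughly as

$p_L \approx A\,(p/p_{\mathrm{th}})^{\lfloor (d+1)/2 \rfloor},$

where $p$ is the effective physical error rate, $p_{\mathrm{th}}$ the fault-tolerance threshold, $d$ the code distance, and $A$ a constant. My assertion amounts to saying that for complex, highly entangled systems at the NISQ scale, the effective $p$ remains above (or too close to) $p_{\mathrm{th}}$, so that increasing $d$ does not drive $p_L$ down and the chain reaction of item 4 never starts.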
Perhaps one way to understand the notion of an “interpretation” of quantum mechanics is as referring not only to the basic mathematical formalism of Hilbert spaces and unitary operators, but also to certain fundamental physical principles that govern the Hamiltonian and the quantum state of the world, and are necessary for understanding the foundation of quantum physics.
Three Additional Computational Complexity Limitations for Physical Theories
Returning to the starting point of this discussion, here are three further principles connecting computational complexity and physics:
1. NP-hard is hard. No physical computer can efficiently solve NP-complete problems.
2. Random states are out of reach. No physical computer can efficiently approximate a random state in a large Hilbert space.
3. Pseudorandom states are out of reach. No physical computer can efficiently approximate a pseudorandom state in a large Hilbert space.
On the first two items, I believe Scott and I are largely in agreement. Scott wrote (cautiously) that “no fast solution to NP-complete problems feels not that dissimilar to ‘no superluminal signaling’ or ‘no perpetual motion’,” and I agree. The second principle, “no fast way to achieve a random state in a large Hilbert space,” is also widely accepted. It may appear to be a straightforward mathematical consequence of quantum mechanics, but it also depends on an additional physical assumption—locality. The meaning, source, and modeling of locality form another important topic.
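For the second principle, here is the standard counting sketch (folklore reasoning; locality enters through the restriction to gates acting on a bounded number of qubits at a time): the space of $n$-qubit pure states has real dimension $2(2^n - 1)$, while each local gate contributes only $O(1)$ real parameters, so the states reachable by circuits of $g$ local gates form a union of sets of dimension $O(g)$. Reaching, or even approximating, a Haar-random state therefore requires $g = \Omega(2^n/\mathrm{poly}(n))$ gates for all but a vanishing fraction of states; the rigorous approximation statement uses a volume or covering argument, but the upshot is the same: exponentially many local operations are needed, so no local physical process can produce such states efficiently.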
In the third item, a “pseudorandom state” is one achieved by a random circuit. The third item brings us back to disagreement: here, Scott and I have a sharp disagreement on what can be achieved, and we also disagree on what has been achieved.