
Quantum Computing: What It Actually Is, Where It Actually Stands

A clear-eyed look at quantum computing in 2026—what the physics means, what the breakthroughs really proved, and why the useful machine is still years away.

Jensen Huang, CEO of Nvidia, said in January 2025 that useful quantum computers are “15 to 30 years away.”1 The quantum computing industry spent the rest of the year trying to prove him wrong. They didn’t quite manage it—but they made the gap look a lot smaller than it did before.

Here is an honest account of where we are.

The Basic Idea

Classical computers work with bits: 0 or 1. Everything—your email, a movie, a nuclear simulation—reduces to sequences of these.

Quantum computers work with qubits, which exploit two properties of quantum mechanics:

Superposition. A qubit can exist in a combination of 0 and 1 simultaneously, until you measure it.2 This is not the same as saying “it’s secretly one or the other and we don’t know which”—it genuinely is both, in a mathematically precise sense. The combination collapses to a definite value only when observed.

Entanglement. Two qubits can be correlated in a way that has no classical analogue. Measuring one instantly determines the state of the other, regardless of distance. Einstein called this “spooky action at a distance” and found it deeply troubling. It is real, experimentally verified beyond any doubt.3
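
The correlation described above can be sketched numerically. This is a toy statevector simulation in Python with numpy (my own illustration, not a real quantum device): the Bell state assigns amplitude only to the outcomes 00 and 11, so the two qubits always agree even though each one, taken alone, is a fair coin.

```python
import numpy as np

# Two-qubit state: four complex amplitudes, for |00>, |01>, |10>, |11>
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

# Born rule: measurement probabilities are squared amplitude magnitudes
probs = np.abs(bell) ** 2  # [0.5, 0.0, 0.0, 0.5]

# Only 00 and 11 ever occur: each qubit alone looks random,
# but the two measurement outcomes are perfectly correlated.
```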

These properties mean that a quantum computer with n qubits can, in some sense, represent 2ⁿ states at once. With 300 qubits, that’s more states than there are atoms in the observable universe. The trick—and it is a very hard trick—is to structure your computation so that the wrong answers cancel out (via interference) and the right answer amplifies, so that when you finally measure, you get what you wanted.
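
The cancellation trick can be seen in the smallest possible example, again as a numpy sketch of my own (not from any real hardware): a Hadamard gate puts a qubit into equal superposition, and a second Hadamard makes the two paths to |1⟩ interfere destructively, returning the qubit to |0⟩ with certainty.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # the definite state |0>

# Hadamard gate: the simplest superposition-creating operation
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

superposed = H @ ket0            # amplitudes (0.707..., 0.707...): both at once
probs = np.abs(superposed) ** 2  # measuring now would give 50/50

# A second Hadamard makes the two paths to |1> cancel (interference)
# while the two paths to |0> reinforce:
back = H @ superposed            # amplitudes (1, 0): |0> with certainty
```

An n-qubit state is a vector of 2ⁿ such amplitudes, which is where the exponential bookkeeping above comes from, and why brute-force classical simulation becomes infeasible at roughly 50 qubits.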

This only works for certain problem structures. Quantum computers are not universally faster. They are specifically faster for problems with particular mathematical symmetries: factoring large numbers, simulating quantum systems, certain optimization problems. For most things you do on a laptop, a quantum computer would be slower.4

The Error Problem

Here is why we don’t have useful quantum computers yet: qubits are extraordinarily fragile.

Any interaction with the environment—heat, vibration, stray electromagnetic fields—collapses the superposition and ruins the computation. This is called decoherence. Today’s best systems can maintain coherence for at most a few milliseconds.5 Useful computations would need far longer, running billions of quantum operations with error rates around 0.1% or less per operation.

The solution is quantum error correction: spread a single logical qubit across many physical qubits, so that errors can be detected and corrected without ever measuring the underlying quantum state (which would collapse it). The mathematics is elegant.6 The overhead is brutal. Current estimates suggest you might need 1,000 to 10,000 physical qubits per logical qubit for a fault-tolerant machine.7 A useful quantum computer for, say, breaking RSA encryption or simulating proteins might need millions of logical qubits—meaning billions of physical qubits, compared to the thousand or so we have today.
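
The redundancy idea has a classical analogue that is easy to sketch (plain Python, my own loose analogy, not actual quantum error correction): a three-bit repetition code corrects any single flipped bit by majority vote. Real quantum codes are far subtler, because they must detect errors without measuring the encoded state, but the spread-one-logical-bit-across-many-physical-bits logic is the same.

```python
def encode(bit):
    """Spread one logical bit across three physical bits."""
    return [bit, bit, bit]

def correct(bits):
    """Majority vote recovers the logical bit despite any single flip."""
    return 1 if sum(bits) >= 2 else 0

logical = 1
physical = encode(logical)
physical[0] ^= 1                      # one "physical" error strikes
assert correct(physical) == logical   # the logical bit survives
```

Multiply that redundancy by the 1,000–10,000× quantum overhead quoted above and the billions-of-physical-qubits arithmetic follows.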

What Willow Actually Proved

In December 2024, Google announced its Willow chip—105 qubits—and the headlines were explosive. “Google’s quantum chip performed a calculation that would take supercomputers 10 septillion years.” That number (10²⁵ years, many orders of magnitude longer than the age of the universe) is real. It is also carefully chosen.

The calculation in question is a benchmark—specifically, Random Circuit Sampling—designed to be hard for classical computers and easy to demonstrate quantum advantage on. It has no known practical application. No drug was discovered. No encryption was broken. No optimization problem was solved. It was a proof of capability, like running 100 meters in world-record time on a purpose-built rubber track—impressive, but not the same as running it in a race.8

What Willow actually demonstrated, and this is the part that genuinely matters: error rates decreased as the system scaled up. For almost 30 years, adding more qubits made things worse—more components meant more opportunities for error. Willow showed for the first time that a quantum system can cross what engineers call the threshold: a regime where adding more error-correcting qubits reduces the total error rate faster than it introduces new ones.9 This is the prerequisite for building large-scale fault-tolerant machines. It is a genuine milestone, even if it is not what the headlines suggested.

The Other Players

Google is not alone.

IBM has published a roadmap targeting fault-tolerant quantum computing by 2029. Their current systems have over 1,000 qubits but with error rates that still make sustained useful computation difficult. IBM’s parallel strategy involves “utility-scale” hybrid quantum-classical systems now, while building toward fully error-corrected machines.10

Quantinuum, a spin-out from Honeywell, uses trapped-ion qubits. Their systems have far fewer qubits (dozens) but significantly higher gate fidelity. Their Helios system, launched commercially in November 2025, claims the highest accuracy of any commercial quantum computer available—early testers include JPMorgan Chase, Amgen, and BMW. A recent funding round valued the company at $10 billion.11

Microsoft has pursued the most exotic path: topological qubits based on Majorana fermions—quasiparticles that are theoretically far more resistant to decoherence.12 In early 2025, they published results claiming to have fabricated a working Majorana-based qubit device. The physics community remains cautious—similar claims were retracted in 2021—but if confirmed, topological qubits could dramatically reduce the physical qubit overhead needed for error correction.

PsiQuantum is betting on photonic qubits—using photons instead of electrons—and raised $1 billion in 2025, the largest single quantum raise in history.13 Their argument: photonic systems can be manufactured using existing semiconductor fabs (TSMC is a partner), giving a path to scale that superconducting qubits, which require exotic materials and near-absolute-zero temperatures, may not have.

The Cryptography Problem

The application that gets the most attention outside the research community is breaking encryption. RSA encryption—which secures most internet traffic—relies on the fact that factoring very large numbers is computationally infeasible for classical computers.14 Shor’s algorithm, discovered in 1994, can factor large numbers exponentially faster on a quantum computer.

The threat is real but not immediate. Running Shor’s algorithm on 2048-bit RSA keys would require millions of physical qubits operating with very low error rates—far beyond anything that exists today. Most serious estimates place this capability no earlier than 2030–2035, and many say later.15
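
The number theory underneath Shor’s algorithm can be shown classically (a toy Python sketch of my own, feasible only for tiny N): factoring reduces to finding the period r of aˣ mod N, and the entire quantum speedup lies in finding that period, which this version simply brute-forces.

```python
from math import gcd

def shor_classical_toy(N, a):
    """Factor N via the order of a mod N -- the reduction Shor's algorithm uses.
    The quantum speedup lies entirely in finding r; here we brute-force it."""
    assert gcd(a, N) == 1
    # Find the period: smallest r > 0 with a^r = 1 (mod N)
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    if r % 2 == 1:
        return None                    # odd period: retry with a different a
    # gcd(a^(r/2) +/- 1, N) usually yields nontrivial factors
    x = pow(a, r // 2, N)
    p, q = gcd(x - 1, N), gcd(x + 1, N)
    return (p, q) if 1 < p < N else None

print(shor_classical_toy(15, 7))  # -> (3, 5)
```

The brute-force loop takes time exponential in the bit-length of N; the quantum Fourier transform finds r in polynomial time, which is the whole threat.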

The US NIST finalized its first post-quantum cryptographic standards in 2024—algorithms designed to resist attacks from both classical and quantum computers. Migration has begun. The concern driving this migration isn’t that quantum computers exist now—it’s “harvest now, decrypt later”: adversaries can capture encrypted traffic today and store it until quantum computers arrive to decrypt it. Long-lived secrets, government communications, medical records—these are already at risk under this threat model.16

The Honest Timeline

The most rigorous analysis estimates that commercially useful quantum computers require:17

  • Several million physical qubits (we have ~1,000 today)

  • Gate error rates below ~0.1% (current: ~0.5–1%)

  • Sustained coherence across billions of operations

Assuming exponential improvement in both qubit count and fidelity, the first genuinely useful quantum applications might arrive around 2035–2040. IBM’s roadmap is more optimistic, targeting fault-tolerant machines by 2029—but “fault-tolerant” is a capability milestone, not the same as “solving problems classical computers can’t.” Forrester Research revised their estimate in 2026, calling business-relevant quantum computing “likely by 2030,” citing Willow and accelerating investment—a shift forward from their 2024 prediction.

My own reading: these estimates have historically been optimistic. The engineering challenges between “demonstrated below-threshold error correction” (where we are now) and “millions of logical qubits doing useful chemistry” are enormous and not fully mapped. I would expect the first narrow useful applications—specific quantum chemistry problems, certain optimization tasks—around 2033–2037, with broader utility following later.

Why This Matters

The applications worth actually caring about are not consumer applications. They are deep scientific and industrial ones:

Drug discovery. Simulating how molecules interact at the quantum level is, appropriately, a quantum problem. Classical computers can only approximate these simulations. A large-scale quantum computer could model chemical reactions and protein interactions with exact accuracy—potentially compressing drug development timelines from decades to years.18

Materials science. High-temperature superconductors, better batteries, more efficient solar cells—these all involve quantum mechanical phenomena that classical simulation handles poorly. Room-temperature superconductivity alone would be a civilization-altering discovery.

Cryptography. Already discussed above—both as threat (to current encryption) and opportunity (quantum key distribution offers theoretically unbreakable communication).

Optimization. Supply chains, financial portfolios, traffic routing, drug trial design—problems where finding the optimal solution over a huge search space is classically intractable. Whether quantum computers offer practical advantages here remains contested; the theoretical speedups don’t always survive contact with real problem structure.19

What to Watch

The milestones that actually matter over the next few years:

  • Logical qubit demonstrations: showing a single error-corrected logical qubit that outperforms its physical components in a sustained way

  • Algorithm benchmarks on real problems: quantum advantage on a problem with genuine practical value, not a contrived benchmark

  • Microsoft Majorana confirmation: independent replication of their 2025 topological qubit results

  • Qubit count scaling: whether IBM’s roadmap (projected 100,000+ qubits by 2033) tracks reality

  • Post-quantum migration pace: how quickly government and financial infrastructure migrates to NIST’s new standards

The technology is real. The physics is verified. The engineering challenge remaining is enormous. The honest framing is not “will this ever work” but “how long, and at what cost, and for which problems first.”

Somewhere between Jensen Huang’s “30 years” and the industry’s breathless press releases is the truth. The groundwork being laid right now—in error correction, qubit fabrication, and algorithm design—determines where exactly that truth lands.


Written March 2026. I am not a physicist; corrections from people who are welcome.