Quantum Computing’s Roadblocks: The 3 Barriers Holding Back the Revolution

Why Quantum Isn’t Mainstream—Yet
Quantum computing promises to revolutionize industries—from drug discovery to AI—by solving problems conventional computers can’t touch. Yet despite the buzz, practical quantum computing is not widely adopted. The reason? The field still faces three major barriers—technical, societal, and infrastructural—that must be overcome before it can fulfill its transformative potential.

The Three Major Barriers to Adoption

1. Technical Complexity
- Qubit Stability: Qubits are highly sensitive to their environment and can lose coherence (i.e., stability) after mere milliseconds.
- Error Rates: Even short computations often introduce significant errors, making output unreliable.
- Scalability: While small-scale quantum devices exist, scaling them to thousands or millions of qubits with sufficient fidelity is a massive engineering challenge.

2. Security and Privacy Risks
- Quantum Threat to Encryption: Once quantum computers are powerful enough, they could break today’s encryption standards—posing risks to global cybersecurity.
- Need for Quantum-Safe Protocols: Organizations must invest now in post-quantum cryptography to protect long-term sensitive data.

3. Societal and Economic Integration
- Workforce Gap: Few engineers and scientists are trained in quantum computing, creating a bottleneck for growth.
- Infrastructure and Cost: Quantum computers often require ultra-low temperatures and specialized environments, making them expensive to develop and maintain.
- Ethical and Regulatory Uncertainty: Societal impacts—such as AI acceleration and surveillance—raise questions that lack regulatory clarity.

Why It Matters: Timing the Leap
For businesses and governments, the quantum era is not a question of “if,” but “when.” The race is on to develop applications and frameworks that will thrive once the barriers fall.
Early movers who understand these challenges—and prepare accordingly—stand to gain outsized competitive advantages. Moreover, investments in workforce training, secure infrastructure, and ethical frameworks now will pay dividends as quantum breakthroughs emerge. The companies and countries best prepared for the coming quantum shift will define the future of technology, economics, and geopolitics. https://lnkd.in/gEmHdXZy
Challenges Facing Early Quantum Computing
Explore top LinkedIn content from expert professionals.
Summary
Quantum computing is a new technology that harnesses the strange laws of quantum physics to solve problems that are impossible for traditional computers. However, moving from experimental prototypes to practical, everyday use is extremely challenging due to technical, economic, and engineering obstacles.
- Strengthen reliability: Invest in technology and workflows that reduce errors and improve the stability of quantum systems, since even tiny fluctuations can derail results.
- Address workforce gaps: Support education and training programs so more people have the skills to work with quantum computing, helping to close the talent shortage in this emerging field.
- Plan for integration: Start building secure infrastructure and ethical guidelines now, as quantum computing will require new standards for cybersecurity, privacy, and collaboration across industries.
-
Doing nothing on a quantum computer is very challenging. One of the biggest sources of errors in quantum computing is "idle errors": the errors that build up when qubits are not doing anything, waiting around for operations on other qubits before they are used. This is why it's important to compile algorithms down to minimise the time that qubits are left waiting, and to make the speed of operations fast enough to get ahead of the build-up of errors. On a large scale, the solution is to use quantum error correction. In fact, hardware demos of quantum error correction typically start with "memory experiments", where the task is to maintain a quantum state. In other words, to demonstrate a quantum computer that can reliably do... nothing!
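The scheduling trade-off described above can be sketched with a toy model. The sketch below is plain Python, not any real compiler's API; the circuit, gate durations, and helper names are all hypothetical. It schedules the same gate list as-soon-as-possible (ASAP) and as-late-as-possible (ALAP) and counts how long qubits sit idle between their own gates:

```python
# Toy illustration of why gate scheduling matters for idle errors.
# Assumptions: unit-free integer gate durations, a qubit is "initialized"
# at its first gate, and idle error accrues in the gaps between its gates.

def asap_schedule(circuit, durations):
    """Start each gate as soon as all its qubits are free."""
    free = {}                       # qubit -> time it becomes free
    times = []
    for gate, qubits in circuit:
        start = max((free.get(q, 0) for q in qubits), default=0)
        times.append(start)
        for q in qubits:
            free[q] = start + durations[gate]
    return times

def alap_schedule(circuit, durations):
    """Start each gate as late as possible without growing the makespan."""
    asap = asap_schedule(circuit, durations)
    makespan = max(t + durations[g] for t, (g, _) in zip(asap, circuit))
    deadline = {}                   # qubit -> latest time its next gate starts
    times = [0] * len(circuit)
    for i in range(len(circuit) - 1, -1, -1):
        gate, qubits = circuit[i]
        end = min(deadline.get(q, makespan) for q in qubits)
        times[i] = end - durations[gate]
        for q in qubits:
            deadline[q] = times[i]
    return times

def idle_time(circuit, durations, times):
    """Total time qubits spend waiting between their own gates."""
    last_end, idle = {}, 0
    for i in sorted(range(len(circuit)), key=lambda i: times[i]):
        gate, qubits = circuit[i]
        for q in qubits:
            if q in last_end:
                idle += times[i] - last_end[q]
            last_end[q] = times[i] + durations[gate]
    return idle

# q0 runs a long chain of gates; q1 only interacts at the end.  ALAP
# delays q1's first gate so it is not left waiting for q0's chain.
durations = {"h": 1, "t": 1, "cx": 2}
circuit = [("h", ["q0"]), ("t", ["q0"]), ("t", ["q0"]), ("t", ["q0"]),
           ("h", ["q1"]), ("cx", ["q0", "q1"])]

for name, sched in (("ASAP", asap_schedule), ("ALAP", alap_schedule)):
    print(name, "idle time:", idle_time(circuit, durations, sched(circuit, durations)))
```

On this toy circuit, ALAP scheduling removes all the idle time that ASAP leaves on the spectator qubit, which is exactly the kind of compilation win described above. Real transpilers (Qiskit exposes ASAP/ALAP scheduling passes, for instance) apply the same principle with hardware-calibrated gate durations.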
-
To understand real momentum in #quantum, compare the last 2–3 years, not the last 2–3 months. The progress is inspiring, but are we close to any kind of inflection point?

𝗪𝗵𝗲𝗻 Richard Givhan 𝗮𝗻𝗱 𝗜 𝘀𝘁𝗮𝗿𝘁𝗲𝗱 𝗛𝗮𝗶𝗾𝘂 𝗶𝗻 𝗲𝗮𝗿𝗹𝘆 𝟮𝟬𝟮𝟯 𝗶𝗻 Creative Destruction Lab, 𝗾𝘂𝗮𝗻𝘁𝘂𝗺 𝗰𝗼𝗺𝗽𝘂𝘁𝗶𝗻𝗴 𝘄𝗮𝘀 𝗹𝗮𝗿𝗴𝗲𝗹𝘆 𝗮 𝘄𝗼𝗿𝗹𝗱 𝗼𝗳 𝘁𝗼𝘆-𝘀𝗰𝗮𝗹𝗲 𝗽𝗿𝗼𝗼𝗳𝘀 𝗼𝗳 𝗰𝗼𝗻𝗰𝗲𝗽𝘁: few-qubit algorithms on simulators, and “real hardware” demos (often limited by tens-of-qubits devices and unstable performance) where the goal was to confirm the ability to extract any signal in the noise rather than solving anything practical.

𝗔 𝗳𝗲𝘄 𝗿𝗲𝗮𝗹-𝗹𝗶𝗳𝗲 𝗮𝗻𝗲𝗰𝗱𝗼𝘁𝗲𝘀 𝗼𝗳 𝘁𝗵𝗮𝘁 𝘁𝗶𝗺𝗲. In one of our early benchmarks, a publicly available QPU produced nearly random noise, nowhere near its declared performance specs. As we later learned, the device's cooling system was broken, causing significant thermal noise. On another public device, the algorithm's fidelity fluctuated 2x between calibration cycles.

𝗧𝗵𝗲 𝗼𝘂𝘁𝗹𝗼𝗼𝗸 𝗳𝗼𝗿 𝗿𝘂𝗻𝗻𝗶𝗻𝗴 𝘀𝗼𝗺𝗲𝘁𝗵𝗶𝗻𝗴 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝗶𝗻 𝘁𝗵𝗶𝘀 𝘀𝗲𝘁𝘁𝗶𝗻𝗴 𝗳𝗲𝗹𝘁... 𝗱𝗶𝘀𝘁𝗮𝗻𝘁. At the same time, some of the one-off “quantum supremacy” experiments were already hinting at a different path. Even on these noisy QPUs, very shallow circuits can create entangled states that are hard to reproduce classically. The obvious question is: can such states be utilised for any useful computation, without the need for handcrafted deep circuits that hardware noise destroys?

This reminds me of early #perception #AI systems: millions of lines of handcrafted logic in computer vision or signal processing were replaced by comparatively “shallow” neural nets - once the right training infrastructure and software stack emerged.

⏩ 𝗜𝗻 𝗷𝘂𝘀𝘁 𝗮 𝗰𝗼𝘂𝗽𝗹𝗲 𝗼𝗳 𝘆𝗲𝗮𝗿𝘀: 𝟭𝟬𝟬+ 𝗾𝘂𝗯𝗶𝘁 𝗱𝗲𝘃𝗶𝗰𝗲𝘀 𝗮𝗿𝗲 𝗻𝗼𝘄 𝗿𝗼𝘂𝘁𝗶𝗻𝗲𝗹𝘆 𝗮𝗰𝗰𝗲𝘀𝘀𝗶𝗯𝗹𝗲 (big thanks to IBM Quantum for this move), and 𝘄𝗲’𝘃𝗲 𝘀𝗲𝗲𝗻 𝗮 𝘀𝘂𝗿𝗴𝗲 𝗼𝗳 𝗹𝗮𝗿𝗴𝗲-𝘀𝗰𝗮𝗹𝗲 𝗾𝘂𝗮𝗻𝘁𝘂𝗺 𝗲𝘅𝗽𝗲𝗿𝗶𝗺𝗲𝗻𝘁𝘀 𝗶𝗻 𝗽𝗵𝘆𝘀𝗶𝗰𝘀, 𝗰𝗵𝗲𝗺𝗶𝘀𝘁𝗿𝘆, 𝗼𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻, etc. Many of these applications are heuristic and shallow-circuit by design.
However, running a reliable experiment at utility scale is still hard. Reproducibility, noise, calibration, and cost still limit quantum runs at that scale. That’s the gap we’re closing at Haiqu - 𝘁𝘂𝗿𝗻𝗶𝗻𝗴 𝗲𝘅𝗲𝗰𝘂𝘁𝗶𝗼𝗻 𝗼𝗳 𝗮𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺𝘀 𝗼𝗻 𝗤𝗣𝗨𝘀 𝗶𝗻𝘁𝗼 𝗮 𝗿𝗲𝗽𝗲𝗮𝘁𝗮𝗯𝗹𝗲, 𝗯𝘂𝗱𝗴𝗲𝘁𝗮𝗯𝗹𝗲 𝘄𝗼𝗿𝗸𝗳𝗹𝗼𝘄 𝘄𝗶𝘁𝗵 𝗽𝗿𝗲𝗱𝗶𝗰𝘁𝗮𝗯𝗹𝗲 𝗵𝗶𝗴𝗵 𝗽𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲. We’re doubling down on making this capability accessible to many more researchers and engineers. Even if reliable quantum hardware appeared tomorrow, applications for broad commercial adoption would still need to be discovered. The inflection point is when prototyping becomes fast and cheap enough to validate practical use cases at scale.
-
I am pleased to highlight some recent work from the team that further evolves our understanding of building practical quantum computing architectures with bivariate bicycle codes and that addresses one of the fundamental challenges to real-time decoding.

Our Nature paper from 2024 [https://lnkd.in/eS26sKx6] showed that a quantum memory using bivariate bicycle codes requires roughly 10x fewer physical qubits compared to the surface code. An important question to answer was whether this advantage is retained not only while storing information in memory but also during computations. To answer that question, our team designed fault-tolerant logical instruction sets for the codes and developed a strategy to compile circuits to these instructions. Using these tools, they performed end-to-end resource estimates demonstrating that bicycle architectures retain an order-of-magnitude qubit advantage over surface code architectures when implementing large logical circuits. The pre-print can be found here [https://lnkd.in/e7k7gYs7].

One of the central doubts about the practicality of quantum low-density parity check (qLDPC) codes such as the bivariate bicycle codes has been the difficulty of real-time decoding. The second preprint [https://lnkd.in/eFbWNFeU] we posted this week hopefully puts those doubts to rest. A large challenge in decoding qLDPC codes arises from the perceived need for two-stage decoding solutions such as belief propagation (BP) followed by ordered statistics decoding (OSD). In particular, real-time implementation of OSD appears very challenging, which has spawned efforts to reduce the cost of OSD. Our team took a different approach. This new result shows that one can eliminate the need for a second-stage decoder altogether through a suitable modification of the BP algorithm. Our modified algorithm, called Relay-BP, enhances the traditional method by incorporating spatially disordered memory terms.
This dampens oscillations and breaks symmetries that trap traditional BP algorithms. The result is an algorithm that outperforms the current state-of-the-art approach while still being amenable to implementation in an FPGA. Congratulations to the team for these exciting advancements, which validate our strategy and move us one step closer to realizing a fault-tolerant quantum system.
-
Dear Prof Feynman,

Since your 1982 paper “Simulating Physics with Computers,” quantum computing has developed from speculation into experimental reality. Here’s where we stand in June 2025. Your insight that classical computers cannot efficiently simulate quantum systems proved correct - this became the foundation for building quantum computers.

Ion trapping techniques developed in the 1980s now control dozens of trapped ions as quantum bits, enabling high accuracy in single quantum operations and extended coherence times. Josephson junctions became artificial atoms: superconducting circuits that manipulate quantum states at millikelvin temperatures. Current superconducting processors include Google’s Willow chip and IBM’s advanced systems. Two-qubit gate accuracies approach 99%, though environmental noise still limits algorithmic applications to dozens of useful qubits working together.

Shor’s factoring algorithm works on small numbers but would need millions of error-corrected quantum bits for practical cryptography. Google’s 2019 quantum demonstration solved a sampling problem faster than classical computers, though the practical advantage is close to nil.

Scientists have built logical quantum bits that actually last longer and make fewer errors than the physical quantum bits they’re made from. However, fault-tolerant computation requires significant overhead, necessitating many physical quantum bits per logical quantum bit. IBM plans to develop 200-logical-qubit systems by 2029, utilizing advanced error correction codes.

Your original challenge persists. Quantum many-body systems remain exponentially hard to simulate classically, yet building quantum simulators requires controlling thousands of quantum components with extraordinary precision.
-
The quantum threat to encryption is real, but the engineering 'valley of death' between 133 qubits and a million qubits is wider than most realize. Here is what happens when you actually try to run Shor’s algorithm on existing hardware.

Researchers from armasuisse Wissenschaft und Technologie, ETH Zürich and PSI Paul Scherrer Institut ran Shor's algorithm on IBM Quantum's 133-qubit system. Their findings aren't just about whether it worked. They're about 𝗪𝗛𝗬 𝗶𝘁 𝗯𝗮𝗿𝗲𝗹𝘆 𝘄𝗼𝗿𝗸𝗲𝗱. Here are the challenges that stood out:

𝗖𝗶𝗿𝗰𝘂𝗶𝘁 𝗦𝗽𝗲𝗰𝗶𝗳𝗶𝗰𝗶𝘁𝘆: You can't just load up Shor's algorithm and factor any number you want. In this study, the modulus N was embedded directly into the gate patterns. This means even if you had the qubits, you would need to custom-design, optimize, and validate a circuit for each integer you want to factor.

𝗠𝗮𝗰𝗵𝗶𝗻𝗲 𝗜𝗻𝘀𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆: Day-to-day calibration drifts made some experimental runs unusable. One day your system works. The next day, the same circuit fails because error rates shifted.

𝗣𝗹𝗮𝘁𝗳𝗼𝗿𝗺 𝗟𝗼𝗰𝗸-𝗜𝗻: Circuits that worked on IBM hardware failed to even transpile to IonQ and Quantinuum systems using generic pipelines. Cross-platform quantum computing is still aspirational; it currently requires hardware-specific design.

𝗘𝗿𝗿𝗼𝗿 𝗔𝗰𝗰𝘂𝗺𝘂𝗹𝗮𝘁𝗶𝗼𝗻: For N = 35, they hit practical limits because circuit depth exceeded what the hardware could reliably execute. In fact, when using an "unfriendly" base, the noise was so high that the experiment failed to detect a statistically significant signal.

These aren't problems you solve by just adding more qubits. You solve them by implementing QEC, new 'online' calibration routines, improving gate fidelities, and by developing platform-agnostic compilation toolchains. What is your take on how big this 'algorithmic' gap actually is?

📸 Credits: Paul BAGOURD, Julian Jang-Jaccard, IBM / The New York Times
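The "circuit specificity" point is easiest to see in the classical half of Shor's algorithm. The sketch below is pure Python, not drawn from the study: the quantum order-finding step is replaced by brute force, and `order` and `shor_classical` are hypothetical helper names. It shows how both the modulus N and the base a enter the computation, and why some bases are "unfriendly":

```python
# Toy classical illustration of the number theory behind Shor's
# algorithm: factoring N reduces to finding the order r of a base a
# modulo N.  On a quantum computer the order finding is done by a
# circuit with N and a baked into its gates; here we brute-force it.
from math import gcd

def order(a, N):
    """Smallest r > 0 with a**r = 1 (mod N); requires gcd(a, N) == 1."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    """Recover a nontrivial factor of N from the order of a, if possible."""
    if gcd(a, N) != 1:
        return gcd(a, N)          # lucky guess: a already shares a factor
    r = order(a, N)
    if r % 2 == 1:
        return None               # "unfriendly" base: odd order, no factor
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None               # another unfriendly case: trivial root
    return gcd(y - 1, N)

print(shor_classical(15, 7))      # order of 7 mod 15 is 4 -> factor 3
print(shor_classical(35, 6))      # order of 6 mod 35 is 2 -> factor 5
print(shor_classical(35, 11))     # order of 11 mod 35 is 3 (odd) -> None
```

A base whose order is odd, or whose half-power lands on N - 1, yields no factor at all, which is the classical analogue of the failed "unfriendly base" run for N = 35. On hardware there is the additional cost the study highlights: every new (N, a) pair needs its own custom-designed, optimized, and validated circuit.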
-
While it was initially thought that we would not see reliable quantum computers until the late 2030s, recent breakthroughs have led many experts to believe that early fault-tolerant machines will be a reality sooner than expected – we're now looking at years, not decades.

The key to unlocking that reality – and one of our biggest challenges in the quantum community – is quantum error correction (QEC). Present-day qubits are fragile and susceptible to quantum noise, which causes high rates of error and prevents today’s intermediate-scale quantum computers from achieving practical advantage.

Microsoft’s qubit-virtualization system combines advanced runtime error diagnostics with computational error correction to significantly reduce the noise of physical qubits and enable the creation of reliable logical qubits – which are fundamental to resilient quantum computing. Think of it like noise-cancelling headphones, but for quantum disruption! Just love that visual!

In April, we applied our qubit-virtualization system and Quantinuum’s ion-trap hardware to achieve an 800x improvement on the error rate of physical qubits, demonstrating the most reliable logical qubits on record. As we continue this groundbreaking work, we are getting closer to the era of fault-tolerant quantum computing and our goal of building a scalable hybrid supercomputer. What’s next? Stay tuned! #QuantumComputing #QEC #AzureQuantum
-
𝗤𝘂𝗮𝗻𝘁𝘂𝗺 𝗟𝗲𝗮𝗽 𝗼𝗿 𝗗𝗶𝘀𝘁𝗮𝗻𝘁 𝗗𝗿𝗲𝗮𝗺?

Just like the excitement many felt in 2019 with Google's announcement of the Sycamore quantum chip, the recent unveiling of the Willow chip has both ignited hope for a quantum future and raised the question about how it may impact AI’s electricity demand.

While these advancements are groundbreaking, practical applications are still a long way away. Challenges like quantum error correction (methods to protect information from losing its quantum state) and scaling the number of qubits (quantum bits) in a quantum computer need to be addressed before quantum computers can rival classical systems for AI and other workloads. Additional challenges, such as energy-intensive cooling for certain quantum systems, could also hamper adoption.

National Institute of Standards and Technology (NIST)'s recommendation for organizations to transition to post-quantum cryptography by 2035 provides a clearer timeline for the potential impact of quantum computing. This suggests that while we're on the cusp of a quantum revolution, it's a journey that will unfold over the next decade.

Much like fusion technology, quantum computing is still in its experimental phase. While the underlying physics is understood, engineering a practical and scalable system remains a significant challenge. Experts estimate that fusion energy could be commercially viable by 2045.

As we anticipate the future of powering AI, it's crucial to balance optimism with realism. How do you think quantum computing will impact the future of AI and data centers? Share your thoughts and predictions in the comments below. #quantumcomputing #AI #machinelearning #datacenter #technology #futureoftech
-
Everyone seems to have a #HotTake on #quantum stocks and which CEO said what. So naturally, I feel inclined to add to that noise with my own two cents… When will a quantum computer become “useful”? The short answer: nobody knows. The long answer comes down to a discussion of noise and scale.

Conventional computers are quite robust to noise. Modern CPUs with their billions of transistors are so robust that you can run one nonstop for millions of hours (at least) before expecting to see even a single transient bit fault occurring. Consequently, most CPUs don’t require any error correction and treat their bits as “essentially perfect.”

Quantum operations, on the other hand, are error prone. Engineers & physicists continue to work miracles to drive down noise, but we will never get close to the “essentially perfect” operations we see classically. However, we can encode a “qubit of information” across many physical qubits to create a “logical qubit.” As long as the physical error rates are low enough, error correction techniques on these logical qubits can drive down noise to ultimately create “essentially perfect” logical qubits and gates.

Currently, many quantum companies are racing to improve logical qubits and run logical gates on them. Despite the media focus around “demonstrating evidence of the multiverse,” the biggest breakthrough on Google’s new Willow chip was to demonstrate unequivocally that their physical qubits were “good enough” for quantum error correction to take care of the rest. The community now knows with certainty that, with enough physical qubits, it is indeed possible to create “essentially perfect” logical qubits.

The other challenge is scale. We are still a long way from a quantum chip with enough qubits to compete with the billions of transistors on a classical chip.
Furthermore, if a single logical qubit requires hundreds or even thousands of physical qubits, then treating logical qubits as the quantum analog of transistors in a CPU requires that much more overhead. In our current quantum computing landscape, there are architectures with a few hundred to a few thousand physical qubits.

The challenges with scale and noise thus introduce an interesting question for quantum practitioners: Do I work on a larger problem with noisy qubits, or a smaller problem with “perfect” logical qubits? Some companies are focusing heavily on the first option, and believe we are more likely to demonstrate utility-scale advantages in quantum computing sooner this way. I would argue that more quantum practitioners are of the opinion that logical qubits are the way to go, even if it means we need to wait longer to work on larger problems.

Scaling up a logical-qubit quantum computer remains a massive challenge, and there are a lot of “known unknowns” and undoubtedly many “unknown unknowns” to be discovered. As for whether scaling takes 5 years, 15 years, or 30 years… If we knew that answer, then quantum stocks would look very different!
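The logical-qubit idea in the post above can be made concrete with the simplest possible code. This sketch is plain Python and uses a classical repetition code with majority-vote decoding, deliberately ignoring the phase errors that real QEC must also handle; the function name is hypothetical. It shows the logical error rate falling as more physical copies are added, provided the physical error rate is low enough:

```python
# Minimal sketch of the "logical qubit" idea: encode one bit across n
# noisy copies and decode by majority vote (a classical repetition
# code).  Each copy flips independently with probability p; decoding
# fails exactly when more than half of the copies flip.
from math import comb

def logical_error_rate(p, n):
    """Probability that a majority of n copies flip (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.05  # physical error rate per copy
for n in (1, 3, 7, 15):
    print(f"n={n:2d}  logical error rate = {logical_error_rate(p, n):.2e}")
```

Below threshold (here p < 0.5) every added pair of copies suppresses the logical error rate further; above it, encoding makes things worse. That mirrors the post's point about Willow: the significance was demonstrating physical qubits "good enough" that adding more of them helps rather than hurts.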
-
Google's Willow chip shows that quantum error correction is starting to work. Just "starting", because while the ~1e-3 error rate reached by Willow is good, it has been achieved by others without error correction. So, how do we get error rates we couldn't reach with physical qubits alone? Easy: you "just" add more qubits in your logical qubit.

But because quantum computing has two kinds of errors to handle (bit flips and phase flips), a 2D structure (the surface code) is usually required to correct them. This means that increasing protection against errors causes the number of qubits to grow quickly. With a surface code, protecting against 1 error at a time during an error correction cycle requires 17 qubits. 2 errors at a time? 49 qubits. 3 errors at a time? 97 qubits. This is the max Willow could achieve. This quadratic scaling leads Google to expect that reaching a 1e-6 error rate on a Willow-like chip will require some 1457 physical qubits (protecting against 13 errors at a time).

And this is the reason why Alice & Bob is going for cat qubits instead. By reducing error correction from a 2D to a 1D problem, cat qubits make the scaling of error rates much more favorable. Even with the simplest error correction code (a repetition code), correcting one error at a time only requires 5 qubits. 2 errors? 9 qubits. 3 errors? 13 qubits. 13 errors? That's just 53 qubits instead of 1457!

This situation is summarized in the graph below. It is taken from our white paper (link in the 1st comment), and I added a point corresponding to the biggest Willow experiment. Now, to be fair, Alice & Bob still needs to release the results of even a 5-qubit experiment. But when this is done, there is a fair chance the error rates will quickly catch up with those achieved by Google or others, because so few additional qubits are required to improve error rates. There are big challenges on both sides. Mastering cat qubits is hard. Scaling chips is hard. But consistent progress is being made on both sides too.
Anyway, I can't wait for the moment when I can add the Alice & Bob equivalent of the Willow experiment on the chart below. And for once, I hope it will be up and to the left!
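The qubit counts quoted in the post follow from two standard formulas, assuming the usual conventions: a distance-d surface code uses 2d^2 - 1 physical qubits (data plus ancilla), a distance-d repetition code uses 2d - 1, and distance d corrects t = (d - 1)/2 errors per correction cycle. A quick sketch reproducing the numbers (function names are mine):

```python
# Qubit-count scaling behind the post's numbers, under the standard
# conventions: distance d = 2t + 1 corrects t errors per cycle; a
# surface code uses d**2 data + d**2 - 1 ancilla qubits, a repetition
# code uses d data + d - 1 ancilla qubits.

def surface_code_qubits(t):
    """Physical qubits to correct t errors at a time (2D surface code)."""
    d = 2 * t + 1
    return 2 * d**2 - 1

def repetition_code_qubits(t):
    """Physical qubits to correct t errors at a time (1D repetition code)."""
    d = 2 * t + 1
    return 2 * d - 1

for t in (1, 2, 3, 13):
    print(f"t={t:2d}: surface {surface_code_qubits(t):4d}  "
          f"repetition {repetition_code_qubits(t):2d}")
```

This reproduces the post's surface-code counts (17, 49, 97, 1457) and repetition-code counts (5, 9, 13, 53), making the quadratic-versus-linear gap between the two approaches explicit.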