Reliable and scalable quantum computing: opportunities and challenges for tech innovation
- Marc Griffith

- Dec 4, 2025
- 3 min read

In the fast-evolving tech landscape, reliable and scalable quantum computing has emerged as one of the key challenges for startups and innovators. While practical use seemed distant until recently, the business models and infrastructure that will support this technology are now starting to take shape.
IBM recently updated its roadmap with a firm date: 2029. By then, researchers aim to deliver the first fault-tolerant quantum computer, a milestone that would open a new phase of applications beyond pure theoretical research.
An epochal shift on the horizon
Fault tolerance is a system's ability to keep computing correctly despite the errors that inevitably affect its qubits. Today's quantum computers can perform about 5,000 operations with an acceptable level of reliability; achieving fault tolerance is expected to push that to around 100 million, a leap that upends the current state of play.
Alessandro Curioni, director of IBM Research in Zurich, explains that the practical impact will be enormous: “Current quantum computers can perform 5,000 operations with a good level of reliability. Achieving fault tolerance will enable them to execute 100 million.”
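To get a feel for the scale of that jump, here is a minimal back-of-the-envelope sketch in Python. The per-operation error rate (about 2×10⁻⁴, which yields roughly 5,000 reliable operations), the threshold, and the surface-code-style scaling used here are illustrative assumptions, not IBM figures.

```python
# Back-of-the-envelope: how error correction stretches the usable number of operations.
# All parameters below are illustrative assumptions, not IBM figures.

def ops_before_failure(error_rate: float) -> float:
    """Expected number of operations before the first error (~1 / error rate)."""
    return 1.0 / error_rate

def logical_error_rate(p_phys: float, p_threshold: float, distance: int) -> float:
    """Rough surface-code-style scaling: p_L ~ 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2)."""
    return 0.1 * (p_phys / p_threshold) ** ((distance + 1) / 2)

# A hypothetical physical error rate of ~2e-4 yields on the order of 5,000 reliable operations.
p_phys = 2e-4
print(f"without error correction: ~{ops_before_failure(p_phys):,.0f} operations")

# Below the threshold, each increase in code distance suppresses the logical error rate further.
for d in (5, 9, 13):
    p_log = logical_error_rate(p_phys, p_threshold=1e-2, distance=d)
    print(f"code distance {d}: logical error rate {p_log:.1e} -> "
          f"~{ops_before_failure(p_log):,.0f} operations")
```

The point is qualitative: once the physical error rate sits below the code's threshold, increasing the code distance suppresses logical errors exponentially, which is what turns thousands of reliable operations into many millions.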
A new fusion between AI and quantum
According to Curioni, the significance of this milestone is amplified by its synergy with advances in AI. “For the first time we are facing a scenario in which two deep technological innovations have emerged simultaneously,” he explains. “The impact of new AI models and the availability of quantum computers opens entirely new horizons.”
A shift in how we approach problems
The change is not just quantitative: an n-qubit quantum register can hold a superposition of exponentially many states at once, going beyond the limits of traditional binary logic. Until now the typical approach has been linear: take a “physical” problem, abstract it into an equation, and solve it. AI makes it possible to build large models and, from them, specialized models capable of tackling problems that were previously out of reach. This is one of the reasons AI and quantum computing could reinforce each other.
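To see where traditional binary representation hits a wall, here is a tiny illustrative sketch that counts the complex amplitudes a classical machine would need in order to store an n-qubit state exactly, assuming 16 bytes per amplitude; the figures are rough and only meant to show the exponential growth.

```python
# Classical cost of representing an n-qubit state exactly: 2**n complex amplitudes.
# Sizes are illustrative, assuming 16 bytes per double-precision complex amplitude.

BYTES_PER_AMPLITUDE = 16

for n_qubits in (10, 30, 50, 100):
    amplitudes = 2 ** n_qubits
    gigabytes = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n_qubits:>3} qubits -> {amplitudes:.3e} amplitudes (~{gigabytes:.3e} GB classically)")
```

A quantum register holds such a state natively, which is why exact classical simulation becomes impractical at around fifty qubits.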
This effort, the researchers involved explain, underscores the importance of data-driven research and of collaboration among universities, public agencies, and companies, aimed at building the infrastructure and skills needed for broad future adoption.
Towards a new era of applications
In practical terms, combined AI and quantum analysis could lead to new discoveries in areas such as materials design, the optimization of complex processes, and the simulation of physical systems impossible to reproduce with classical computers. Experts recognize that the potential is broad, but point out that technological challenges remain, along with costs, the need for common standards, and widespread training to make the skills accessible.
Critical reflections and open challenges
Many experts believe that fully fault-tolerant systems will require substantial investments in hardware, controllers, and error-correcting codes. At the same time, more cautious voices urge against overestimating adoption timelines, noting that AI integration requires specific algorithms, data governance, and robust security management. Others argue that the startup ecosystem would benefit from a clear roadmap, with pilot projects and public-private collaborations to test and validate applications in realistic contexts.
Conclusion: a trajectory to watch
The path toward reliable and scalable quantum computing is not just about hardware: it's a challenge of models, data, partnerships, and governance. For startups and innovators, it means defining concrete use cases, investing in cross-disciplinary skills, and building collaborative environments that turn theory into practical applications capable of competing on a global scale.