Credit: Unsplash/CC0 Public Domain
Imagine a future where quantum computers supercharge machine learning—training models in seconds, extracting insights from massive datasets and powering next-gen AI. That future might be closer than you think, thanks to a breakthrough from researchers at Australia’s national research agency, CSIRO, and The University of Melbourne.
Until now, one big roadblock stood in the way: errors. Quantum processors are noisy, and quantum machine learning (QML) models need deep circuits with hundreds of gates. Even tiny errors pile up fast, wrecking accuracy. The usual fix, quantum error correction, works in principle, but it's expensive: we're talking millions of qubits just to run a single model, far beyond today's hardware.
So, what’s the game-changer? The team discovered that you don’t need to correct everything.
The research is published in the journal Quantum Science and Technology.
In QML models, more than half the gates are trainable, meaning their parameters adjust during learning. By skipping error correction for these gates, the model can ‘self-correct’ as it trains. The result? Accuracy almost as good as full error correction, but with only a few thousand qubits instead of millions.
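To make that intuition concrete, here is a toy, purely illustrative sketch (not the authors' method or code): a single-qubit variational circuit in plain Python/NumPy where a systematic over-rotation error is applied only to the trainable gate. Training simply learns a slightly different parameter that cancels the error. The noise model, angles and names below are all assumptions made for the example.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])          # measurement observable
ket0 = np.array([1.0, 0.0])       # |0> state

ENCODING_ANGLE = 0.7   # fixed "data encoding" gate, assumed error-free here
OVER_ROTATION = 0.15   # coherent error injected into the trainable gate
TARGET = -0.2          # desired <Z> output of the model

def expectation(theta, noise=0.0):
    """<Z> after the encoding gate followed by the (possibly noisy) trainable gate."""
    state = ry(theta + noise) @ ry(ENCODING_ANGLE) @ ket0
    return float(state @ Z @ state)

def train(noise, steps=200, lr=0.5):
    """Minimize (<Z> - TARGET)^2 by gradient descent with the parameter-shift rule."""
    theta = 0.0
    for _ in range(steps):
        shift = 0.5 * (expectation(theta + np.pi / 2, noise)
                       - expectation(theta - np.pi / 2, noise))
        grad = 2.0 * (expectation(theta, noise) - TARGET) * shift
        theta -= lr * grad
    return theta

theta_clean = train(noise=0.0)
theta_noisy = train(noise=OVER_ROTATION)

print(f"clean circuit : theta = {theta_clean:+.4f}, <Z> = {expectation(theta_clean):+.4f}")
print(f"noisy circuit : theta = {theta_noisy:+.4f}, "
      f"<Z> = {expectation(theta_noisy, OVER_ROTATION):+.4f}")
# The noisy parameter settles about OVER_ROTATION below the clean one,
# so the trained gate has absorbed ('self-corrected') the coherent error.
```

Running the script shows the trained parameter of the noisy circuit shifted by roughly the size of the injected error while the output still matches the noiseless target; the paper studies the large-scale analogue of this effect for deep QML circuits, where only the non-trainable gates still need error correction.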
Lead author and Ph.D. student at The University of Melbourne, Haiyue Kang, describes this work as an important step forward.
"Until now, quantum machine learning has mostly been tested in perfect, error-free simulations. But real quantum computers aren’t perfect—they’re noisy, and that noise makes today’s hardware incompatible with these models. In other words, there’s a big gap between the theory and actually running QML on quantum processors without losing accuracy."
Professor Muhammad Usman, head of the Quantum Systems team at CSIRO, is senior author of the study.
"This is a paradigm shift," Professor Usman said.
"We’ve shown that partial error correction is enough to make QML practical on the quantum processors expected to be available in the near future."
Why does this matter? Because it could move quantum machine learning from theory to reality much sooner than expected. Faster training, smarter AI and real-world quantum advantage could now be within reach.
The study marks a major milestone for quantum computing and AI. It’s not just a technical tweak—it’s a rethink of how we build quantum algorithms for noisy hardware.
Bottom line: Quantum machine learning might not be decades away. Thanks to this clever approach, it could be powering real-world applications in the near future.
More information: Haiyue Kang et al, Almost fault-tolerant quantum machine learning with drastic overhead reduction, Quantum Science and Technology (2025). DOI: 10.1088/2058-9565/ae2157