Encryption-busting quantum computing is on its way, warns Shobhit Gautam at HackerOne. Or is it?
Attracting serious attention from the science and tech communities, quantum computing news is now regularly cropping up in everyday media, with recent announcements like Oxford University’s quantum teleportation experiment demonstrating how quickly the field is advancing.
Each new development adds to the anticipation of a future where super-powerful quantum processors could revolutionise computing. But also, render today’s encryption methods useless. It’s a dramatic and compelling narrative. Yet, the actual threat to current cyber-security practices is still a long way off from becoming real.
Nevertheless, the media is happy to scaremonger, painting a not-so-distant picture where quantum computers pose a diabolical risk to digital security. Although, to be clear, quantum computing is nowhere near the stage where it can break encryption. For starters, it needs a highly stable system with almost zero error margins. And that remains a distant dream for quantum engineers.
It is true that quantum processors could one day crack today’s encryption. However, it’s worth remembering that the path from proof-of-concept to weaponisation is long, uncertain, and full of engineering obstacles. So, although quantum teleportation can help solve a scaling problem in the lab, it is by no means the same as a real-world, fully deployable solution. There’s a huge gap between an experiment and a tool that a threat actor, or anyone else, can use.
Cryptographic systems, for their part, aren’t standing idle either. They’re adapting to anticipate the eventual quantum threat. Working diligently in the background, the National Institute of Standards and Technology (NIST) has been devising specific cryptographic standards for developing algorithms that can withstand quantum attacks.
Even so, it’s understandable why alarming headlines spread disquiet. Modern encryption is the backbone of online security for everything from financial transactions to private messages. The thought of it being cracked is unsettling.
Therefore, it’s important to keep what the theory is in perspective, and not get distracted from addressing more imminent threats. While quantum breakthroughs grab attention, cyber-attacks powered by AI are already happening and growing in complexity and scale. The threat posed by AI definitely isn’t conceptual. There are issues within production systems right now, and the attack surface is expanding.
According to the latest Hacker-Powered Security Report, nearly half of cyber-security leaders (48%) already view GenAI as one of the most significant threats. And for good reason. From deepfake voice fraud to AI-powered phishing and malware attacks, malicious actors are weaponising AI in clever and surreptitious ways. They are not waiting for theoretical breakthroughs. Take, for example, when Google Bard’s AI extension launched, hackers exploited it within hours to exfiltrate user data. A telling reminder of how exposed these systems can be.
What’s more, the AI attack surface isn’t only the models themselves. It spans the data pipelines feeding them, the APIs, third-party plugins, backend integrations, and the real-world decisions these systems take autonomously. All of these layers can be manipulated by malicious actors with techniques such as prompt injection or by exploiting models and insecure AI plugins. Then, attackers will move very swiftly once they uncover any weaknesses to go after similar ones in other entities.
Yet, many organisations are racing to adopt AI capabilities without taking the time to properly secure them. In the rush to unlock new efficiencies and gain a competitive advantage, security aimed specifically at protecting AI is often bolted on after the fact, if at all. This leaves gaping vulnerabilities in production environments, and for adversaries, these offer plentiful opportunities for extortion and mayhem.
Stress-test to the limit
Organisations must ramp up their countermeasures by building systems resilient enough to stand up to the threats that exist now and are evolving quickly. This should include proactive stress-testing of critical systems with particular emphasis on red teaming.
In the context of AI, it means hiring security researchers, ethical hackers, and adversarial AI experts to deliberately try to manipulate models, exploit plugins, or trick automated systems into making the wrong decisions.
If organisations are not red teaming systems, then weaknesses are more likely to be overlooked. Attackers are agile, creative, and constantly probing for ways in. Cyber-security teams need to be just as innovative and determined in keeping them out, testing defences to the limit.
Encouragingly, there’s been a 171% year-on-year increase in AI assets being incorporated into security programs, reflecting a sharp rise in demand for rigorous AI red teaming and testing. As the technology becomes more integrated into critical workflows, more organisations are realising that trusting a model implicitly is dangerous. It needs to be tested continuously.
Ultimately, quantum computing is an exciting field that deserves continuing attention. It holds enormous potential, and yes, it will eventually change the cryptographic landscape. But the path from research experiments to a practical, scalable threat is still long and uncertain. In contrast, AI threats are here now — and growing.
Therefore, while it’s wise to monitor quantum developments and consider encryption strategies for the future, organisations must not lose focus on today’s challenges. The immediate concern is securing systems that are already in production, particularly those powered by AI.
Quantum might be the future, but AI is the present, and it’s redefining the cyber-security landscape. CISOs and security professionals need to stay grounded in reality and prioritise accordingly.
Shobhit Gautam is a Staff Solutions Architect, EMEA at HackerOne
Main image courtesy of iStockPhoto.com and bpawesome
© 2025, Lyonsdown Limited. teiss® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543