
Quantifying cyber risk in the age of AI and quantum

AI and quantum threats are moving faster than risk models. Security leaders now need to quantify what they cannot yet fully see.


While enterprises rush to deploy agentic artificial intelligence to automate complex workflows, a quieter countdown is underway. The so-called “quantum decryption” horizon – the point at which quantum computers can break widely used cryptography – is no longer theoretical.

 

These are not incremental risks. Together, agentic AI and quantum computing are reshaping the cyber-threat landscape in ways that render traditional risk models increasingly inadequate.

 

For security leaders, the challenge is shifting. It is no longer enough to identify emerging threats. The question now is how to quantify them.

 

Cyber risk quantification has long relied on historical data, known vulnerabilities and probabilistic modelling. But emerging risks do not behave in predictable ways.

 

Take “shadow AI”: unsanctioned AI tools or autonomous agents deployed outside formal governance structures. According to Gartner, by 2027 more than 40 per cent of AI-related data breaches will be caused by the improper use of generative AI across borders. That statistic signals scale, but not impact.

 

The difficulty lies in attribution. When an AI agent makes a decision, triggers a process or interacts with external systems, where does accountability sit? And how do you model loss when the behaviour itself is emergent?

 

A similar challenge exists with quantum risk. The “harvest now, decrypt later” threat model, in which attackers capture encrypted data today with the intent to decrypt it once quantum capabilities mature, complicates traditional timelines.

 

The National Institute of Standards and Technology (NIST) has already begun standardising post-quantum cryptography, warning that migration timelines could take years. Yet many organisations struggle to justify immediate investment for a threat that may not fully materialise until the next decade.

 

From probability to exposure

 

Frameworks such as Factor Analysis of Information Risk (FAIR), maintained by the FAIR Institute, can help translate these questions into financial terms, but they must be adapted for emerging technologies where historical baselines are limited.

 

In practice, this means modelling scenarios rather than relying solely on past incidents: for example, estimating the cost of a compromised AI agent orchestrating supply chain transactions, or the long-term liability of sensitive data exposed once quantum decryption becomes practical.
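As a rough illustration of the scenario-modelling approach, a FAIR-style estimate can be sketched as a simple Monte Carlo simulation. All inputs below are assumptions chosen for illustration, not calibrated figures: the annual probability of an AI-agent compromise and the lognormal loss parameters would come from an organisation's own threat and impact estimates.

```python
import random

random.seed(7)

def annual_loss_simulation(n_years=50_000,
                           p_event=0.3,     # assumed chance of at least one compromise per year
                           log_mu=13.0,     # assumed lognormal loss parameters (GBP)
                           log_sigma=1.0):
    """FAIR-style sketch: simulate many years, each combining event
    occurrence with a lognormal per-event loss magnitude."""
    losses = []
    for _ in range(n_years):
        if random.random() < p_event:
            losses.append(random.lognormvariate(log_mu, log_sigma))
        else:
            losses.append(0.0)
    losses.sort()
    eal = sum(losses) / n_years           # expected annual loss
    p95 = losses[int(0.95 * n_years)]     # 95th-percentile annual loss
    return eal, p95

eal, p95 = annual_loss_simulation()
print(f"Expected annual loss: £{eal:,.0f}")
print(f"95th-percentile loss: £{p95:,.0f}")
```

The point is not the numbers but the shape of the output: a loss-exceedance view that lets a board compare a speculative AI scenario against familiar risks in financial terms.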

 

Governance for agentic and shadow AI

 

As AI systems become more autonomous, governance must evolve from policy enforcement to design-led control.

 

The European Union Agency for Cybersecurity (ENISA) has highlighted the need for “secure-by-design” AI systems, particularly as organisations integrate multiple models and third-party services into their environments.

 

Without these guardrails, shadow AI can quickly become an invisible attack surface.

 

Budget constraints mean not every emerging risk can be addressed immediately. The focus, therefore, should be on controls that reduce systemic exposure.

 

The IBM Security Cost of a Data Breach Report consistently shows that organisations with mature zero-trust and segmentation strategies reduce breach costs significantly. The same principle applies to emerging risks: containment matters.

 

Managing cascading risk across ecosystems

 

Modern enterprises operate within dense ecosystems of vendors, contractors and cloud platforms. Agentic AI amplifies this complexity by introducing autonomous interactions across these networks.

 

A compromised AI system in one organisation can propagate risk across partners in ways that are difficult to trace and even harder to quantify.

Supply chain attacks already account for a growing share of cyber incidents, as highlighted by World Economic Forum reports on systemic cyber risk. AI-driven automation increases both the speed and scale of these cascades.

 

Quantification, therefore, must extend beyond the enterprise perimeter. It requires mapping dependencies, understanding shared vulnerabilities and modelling cross-organisational impact.
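Dependency mapping of this kind can be prototyped with a simple graph traversal. The sketch below assumes a hypothetical vendor graph and an illustrative per-hop spread probability; both the organisation names and the probability are invented for the example, and a real model would use assessed figures per relationship.

```python
from collections import deque

# Hypothetical dependency graph: edges run from a supplier to the
# organisations that depend on it (names and probabilities are assumed).
dependents = {
    "model_vendor": ["enterprise_a", "enterprise_b"],
    "enterprise_a": ["partner_c"],
    "enterprise_b": ["partner_c", "partner_d"],
    "partner_c": [],
    "partner_d": [],
}

def propagate_compromise(source, p_spread=0.4):
    """Breadth-first sketch: estimate the probability that a compromise
    at `source` cascades to each downstream organisation, assuming an
    independent per-hop spread probability."""
    reach = {source: 1.0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            p = reach[node] * p_spread
            # Keep the highest-probability path to each organisation.
            if p > reach.get(dep, 0.0):
                reach[dep] = p
                queue.append(dep)
    return reach

print(propagate_compromise("model_vendor"))
```

Even a toy model like this makes the cascade visible: a second-tier partner two hops from a compromised AI vendor still carries a non-trivial exposure that perimeter-focused assessments would miss.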

 

A new discipline for a new risk landscape

 

Emerging cyber risks from AI and quantum computing are forcing a rethink of how risk itself is defined.

 

The task for security leaders is not to predict the future with precision, but to build models that capture uncertainty, exposure and systemic impact. That means combining technical insight with financial reasoning, and governance with design.

 

As these technologies evolve, so too must the frameworks used to assess them. Because in a landscape shaped by autonomy and cryptographic disruption, what cannot be quantified cannot be managed.

 

© 2025, Lyonsdown Limited. teiss® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543