
How AI is supercharging the efficiency and intensity of cyber-threats 

Generative AI did not invent cyber-crime, but it has fundamentally changed the economics of it. 


The most significant shift is speed. Attackers can now generate convincing phishing messages in seconds, localise them across languages and rapidly adjust tone and context at scale. When combined with stolen data, this makes social engineering cheaper, faster and more adaptable than ever before. 

 

AI is also changing what attackers can produce. It is increasingly used to generate and refine malicious content on demand, lowering the skill barrier to entry while allowing experienced operators to automate parts of their workflow. Microsoft’s latest Digital Defense Report describes this clearly, noting that AI is accelerating cyber-crime operations while also introducing new vulnerabilities within AI-enabled workloads, including prompt-based attacks and supply-chain abuse of AI components. 

 

This matters because it redefines what volume looks like. Organisations can no longer rely on the assumption that malicious messages will be poorly written, generic or easy to filter. Efficiency gains now extend beyond phishing emails into reconnaissance, targeting, persuasion and iteration. Attackers can test and refine campaigns quickly, learning what works in near real time. 

 

Threat actors do not need fully autonomous AI for this to be effective. Even partial automation intensifies campaigns. Verizon’s 2025 Data Breach Investigations Report reinforces that breaches are still driven by familiar techniques, but in an environment where the speed of exploitation and operationalisation continues to improve. In that context, AI-assisted social engineering and rapid payload iteration can have a disproportionate impact. 

 

The result is an attacker ecosystem that can generate credible pretexts, scale outreach and adapt as defenders change controls, all at lower cost and higher quality. This efficiency multiplier is now a core challenge for CISOs. 

 

The growing gap between adaptive and vulnerable systems 

 

Many organisations are operating in a split reality. One side of the business is evolving rapidly, with cloud-first services, API-heavy architectures, modern identity controls and, increasingly, AI copilots and agentic workflows. The other side remains brittle, relying on legacy applications, older authentication models, uneven patching and third-party dependencies that were never designed for continuous exposure. 

 

That gap is widening. ENISA’s Threat Landscape 2025 describes a maturing threat environment characterised by rapid vulnerability exploitation and increasing complexity in tracking adversaries, while reaffirming that ransomware and intrusion activity remain central to the European threat picture. 

 

This is not simply a legacy infrastructure problem. Organisations often modernise customer-facing or productivity systems while back-end risk governance lags behind. Security teams may deploy advanced monitoring in one environment while another still relies on outdated logging, unmanaged endpoints or inconsistent identity enforcement. Attackers do not need to defeat the strongest controls if they can exploit the seams between them. 

 

AI adoption can widen this gap further. Many organisations are deploying generative AI tools before establishing consistent policies for data handling, identity, auditability and third-party risk. NIST’s Generative AI Profile, developed alongside its AI Risk Management Framework, stresses that generative AI introduces distinct risks and requires explicit governance and lifecycle management rather than bolt-on controls. 

 

The UK government’s Code of Practice for the Cyber-Security of AI makes a similar point, highlighting risks such as indirect prompt injection and data poisoning, and framing AI security as something that must be embedded into development and operations, not added after deployment. 

 

 

The result is a two-layered exposure: the traditional divide between modern and legacy IT, and a newer divide between fast-moving AI deployments and slower-moving security, risk and assurance processes. Organisations that fail to close this gap end up with advanced capabilities connected to corridors of unmanaged exposure. 

 

Expanding attack surfaces in the age of deployed AI models 

 

AI expands the attack surface in ways that are easy to underestimate. The weaknesses are often not classic software flaws, but design-level issues related to how models interpret instructions, connect to tools, access data and produce outputs that downstream systems trust. 

 

OWASP’s Top 10 for Large Language Model Applications provides a practical map of this emerging attack surface. It identifies risks such as prompt injection, insecure output handling, training data poisoning, model denial of service and supply-chain vulnerabilities in the components used to build and run LLM-based applications. 
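
As a concrete illustration of the insecure output handling risk, the sketch below treats model output as untrusted input: free text is escaped before it reaches a page, and structured responses are validated against an explicit contract before anything acts on them. It is a minimal Python sketch, and the names it uses (render_summary_as_html, parse_structured_output, the allowed-action list) are hypothetical rather than drawn from any particular product.

    # Minimal sketch: treat LLM output as untrusted input before it reaches
    # downstream systems (the "insecure output handling" risk).
    # Function names and the allowed-action list are hypothetical.

    import html
    import json


    def render_summary_as_html(llm_output: str) -> str:
        """Escape model text before embedding it in a page, exactly as you
        would escape user-supplied input, so injected markup is neutralised."""
        return f"<p>{html.escape(llm_output)}</p>"


    ALLOWED_ACTIONS = {"close", "escalate", "reply"}


    def parse_structured_output(llm_output: str) -> dict:
        """Validate structured model output against an explicit contract
        instead of trusting whatever fields or values come back."""
        data = json.loads(llm_output)  # raises ValueError on malformed output
        action = data.get("action")
        if action not in ALLOWED_ACTIONS:
            raise ValueError(f"Unexpected action from model: {action!r}")
        if not isinstance(data.get("ticket_id"), int):
            raise ValueError("ticket_id must be an integer")
        return {"action": action, "ticket_id": data["ticket_id"]}


    if __name__ == "__main__":
        # A response that tries to smuggle markup and an unapproved action.
        print(render_summary_as_html('<script>alert("x")</script> All clear.'))
        try:
            parse_structured_output('{"action": "delete_all", "ticket_id": 7}')
        except ValueError as err:
            print(f"Rejected: {err}")

The specific checks matter less than the posture: model output crosses a trust boundary and deserves the same scepticism as any other external input.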

 

These risks matter because deployed AI systems are not just another application layer. They routinely accept untrusted inputs, generate persuasive outputs and are increasingly integrated into business workflows. Where an LLM can trigger actions, call APIs or retrieve sensitive data, compromise can lead to unauthorised activity that appears legitimate. 
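
One way to keep such activity within bounds is to execute every model-initiated call under the requesting user’s own entitlements rather than a broadly privileged service identity, so a hijacked model can do no more than the person it is acting for. The Python sketch below assumes a hypothetical fetch_record tool and a toy entitlement set; a real deployment would delegate these decisions to the organisation’s existing identity and authorisation infrastructure.

    # Minimal sketch: run model-initiated tool calls under the requesting
    # user's entitlements, not a shared service identity, so a hijacked
    # model cannot exceed what that user could already do.
    # The entitlement labels and the fetch_record() tool are hypothetical.

    from dataclasses import dataclass, field


    @dataclass
    class User:
        username: str
        entitlements: set[str] = field(default_factory=set)


    RECORDS = {
        "invoice-42": {"classification": "finance.read", "body": "Q3 invoice"},
        "payroll-7": {"classification": "hr.payroll.read", "body": "Salary data"},
    }


    def fetch_record(record_id: str, acting_user: User) -> str:
        """Tool exposed to the model: access is decided by the end user's
        permissions, so each call is authorised or refused per identity."""
        record = RECORDS.get(record_id)
        if record is None:
            return "No such record."
        if record["classification"] not in acting_user.entitlements:
            return f"Access denied for {acting_user.username}."
        return record["body"]


    if __name__ == "__main__":
        analyst = User("analyst", {"finance.read"})
        # Even if injected instructions make the model request payroll data,
        # the call fails because the analyst's own rights do not cover it.
        print(fetch_record("invoice-42", analyst))
        print(fetch_record("payroll-7", analyst))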

 

This is why security discussions are shifting from model accuracy to trust boundaries. As AI systems become decision-making or workflow layers, attackers target them as control points. The MITRE ATLAS knowledge base exists for this reason, documenting adversary tactics and techniques against AI-enabled systems based on observed and demonstrated attacks. 

 

MITRE’s SAFE-AI report complements this by focusing on protecting enterprise assets that rely on AI, linking adversarial techniques to practical defensive measures. 

There is also increasing recognition that some risks may never be fully eliminated. The UK National Cyber Security Centre has warned that prompt injection may be structurally difficult to solve with the methods used to address earlier injection flaws, because large language models do not clearly separate instructions from data.

As a result, guidance is shifting towards containment: limiting tool access, reducing privileged actions, isolating sensitive data and ensuring that compromised outputs cannot directly cause harm. 
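
That containment posture can be expressed quite simply. The sketch below gates model-proposed actions through a small policy layer: only registered tools can run, privileged ones are held for human approval, and sensitive fields are stripped before data ever enters the prompt. The tool names, privilege tiers and redaction list are illustrative assumptions, not a reference implementation.

    # Minimal containment sketch: an allow-list of tools with privilege
    # tiers, human approval for privileged actions, and redaction of
    # sensitive fields before data reaches the model.
    # Tool names, tiers and SENSITIVE_FIELDS are assumptions.

    # Only registered tools can be invoked, each with an explicit tier.
    TOOL_REGISTRY: dict[str, dict] = {
        "search_kb": {"tier": "standard", "run": lambda q: f"KB results for {q!r}"},
        "reset_password": {"tier": "privileged", "run": lambda u: f"Reset issued for {u}"},
    }

    SENSITIVE_FIELDS = {"national_insurance_number", "salary"}


    def redact(record: dict) -> dict:
        """Strip sensitive fields before a record is placed in the prompt."""
        return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}


    def invoke_tool(name: str, argument: str, approved_by: str | None = None) -> str:
        """Execute a model-proposed tool call only if policy allows it."""
        tool = TOOL_REGISTRY.get(name)
        if tool is None:
            return f"Blocked: {name!r} is not an approved tool."
        if tool["tier"] == "privileged" and approved_by is None:
            return f"Held for approval: {name!r} is a privileged action."
        return tool["run"](argument)


    if __name__ == "__main__":
        print(redact({"name": "A. Jones", "salary": 50000}))
        print(invoke_tool("delete_tenant", "prod"))        # unknown tool: blocked
        print(invoke_tool("reset_password", "a.jones"))    # privileged: held
        print(invoke_tool("reset_password", "a.jones", approved_by="servicedesk"))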

 

Governments are beginning to publish more targeted guidance as AI moves into sensitive environments. Guidance linked to CISA on securely integrating AI into operational technology makes clear that its scope includes machine learning, LLM-based systems and AI agents. 

 

 

What this means for security leaders 

 

For CISOs, the challenge is not only attackers using AI more effectively, but organisations deploying AI without redesigning controls around it. 

In 2026, mature programmes will treat AI systems as high-risk integrations that require explicit threat modelling, layered guardrails, strict identity and access controls, robust logging and continuous testing.
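
Robust logging is often the quickest of those controls to stand up. The sketch below, with illustrative field names and a placeholder log destination, writes one structured audit record per model-initiated action so that unexpected behaviour can later be attributed and investigated.

    # Minimal sketch: structured audit logging for AI-assisted actions,
    # so every model interaction and tool call leaves an attributable trail.
    # Field names and the logger configuration are illustrative assumptions.

    import json
    import logging
    from datetime import datetime, timezone

    audit_logger = logging.getLogger("ai_audit")
    audit_logger.setLevel(logging.INFO)
    audit_logger.addHandler(logging.StreamHandler())  # in practice, forward to the SIEM


    def audit_ai_action(user: str, model: str, tool: str, decision: str, detail: str) -> None:
        """Emit one JSON record per model-initiated action."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "model": model,
            "tool": tool,
            "decision": decision,   # e.g. allowed / blocked / held_for_approval
            "detail": detail,
        }
        audit_logger.info(json.dumps(record))


    if __name__ == "__main__":
        audit_ai_action(
            user="a.jones",
            model="assistant-v1",
            tool="reset_password",
            decision="held_for_approval",
            detail="Privileged action proposed by the model; awaiting human sign-off",
        )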

 

That direction aligns with NIST’s Generative AI Profile, OWASP’s LLM Top 10 and Microsoft’s guidance on monitoring AI applications, detecting unsanctioned shadow AI and protecting AI agents from prompt-based abuse. 
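
Detecting shadow AI can start with data most organisations already hold. The sketch below compares destination hosts in proxy or egress logs against a sanctioned allow-list; the hostnames, log format and host list are illustrative assumptions rather than a recommendation of particular services.

    # Minimal sketch: spot unsanctioned ("shadow") AI usage from egress logs
    # by comparing destination hosts against a sanctioned allow-list.
    # The hostnames and log format below are illustrative assumptions.

    from collections import Counter

    # Destinations the organisation has approved for AI use.
    SANCTIONED_AI_HOSTS = {"copilot.example-tenant.com"}

    # Hosts treated as AI services for the purposes of this check.
    KNOWN_AI_HOSTS = {
        "copilot.example-tenant.com",
        "api.unapproved-llm.example",
        "chat.unapproved-llm.example",
    }

    # One line per proxy log entry: "user destination_host"
    SAMPLE_PROXY_LOG = """\
    a.jones copilot.example-tenant.com
    b.smith api.unapproved-llm.example
    b.smith api.unapproved-llm.example
    c.patel chat.unapproved-llm.example
    """


    def find_shadow_ai(log_text: str) -> Counter:
        """Count requests to AI hosts that are not on the sanctioned list."""
        hits: Counter = Counter()
        for line in log_text.splitlines():
            user, _, host = line.strip().partition(" ")
            if host in KNOWN_AI_HOSTS and host not in SANCTIONED_AI_HOSTS:
                hits[(user, host)] += 1
        return hits


    if __name__ == "__main__":
        for (user, host), count in find_shadow_ai(SAMPLE_PROXY_LOG).items():
            print(f"{user} reached unsanctioned AI service {host} ({count} requests)")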

 

Organisations that struggle will be those that move quickly on AI adoption while governance, security architecture and third-party controls lag behind. Threat efficiency is rising, the gap between adaptive and vulnerable systems is widening, and the attack surface is expanding, often invisibly, inside the tools designed to make work easier. 

 

 

 
