Artificial intelligence is rapidly becoming embedded in everyday business operations. Organisations are using it to automate workflows, analyse vast amounts of data and improve decision-making. But the same technologies are also reshaping the cyber-threat landscape. Security experts increasingly warn that the next phase of cyber-risk will be defined not only by human attackers, but by machines capable of generating, planning and optimising attacks.
One of the first major shifts came with the rise of generative AI. Systems based on large language models can now produce convincing text, software code and images almost instantly. While these tools are helping businesses improve productivity, they are also lowering the barrier to cyber-crime.
Attackers can use generative AI to produce highly convincing phishing emails, fake documents or malicious scripts at scale. According to the European Union Agency for Cybersecurity’s Threat Landscape report, AI tools are already helping criminals craft more sophisticated social engineering attacks across multiple languages.
Similarly, the UK National Cyber Security Centre has warned that AI is likely to increase both the speed and the volume of cyber-attacks in the coming years.
The next development drawing attention from security leaders is agentic AI – systems capable of taking actions independently to achieve a goal. Unlike generative models that simply respond to prompts, agentic systems can plan, execute and adapt their behaviour over time. While these systems are being developed to automate complex tasks in business environments, they also introduce new security concerns.
In theory, an AI agent could be directed to scan networks for vulnerabilities, attempt exploitation and adjust its strategy based on the results. Researchers cited in the Microsoft Digital Defense Report note that increasing automation across the cyber-attack lifecycle could allow malicious actors to scale operations dramatically.
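To make that concern concrete: every agentic system, benign or malicious, is built around the same plan-act-observe control flow. The sketch below is a deliberately harmless illustration of that structure; the plan, act and run_agent names are hypothetical, nothing touches a real network, and the point is simply how cheaply a goal-directed loop iterates and adapts.

```python
# Minimal plan-act-observe loop: the control structure behind agentic systems.
# Purely illustrative; every name here is hypothetical and nothing touches a
# real network. The point is how cheaply the loop iterates and adapts.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    observations: list = field(default_factory=list)

def plan(state: AgentState) -> str:
    """Choose the next action from the goal and the observations so far."""
    return "initial-probe" if not state.observations else "adjusted-strategy"

def act(action: str) -> str:
    """'Execute' the action against a toy in-memory environment."""
    return f"observed outcome of {action}"

def run_agent(goal: str, max_steps: int = 3) -> AgentState:
    state = AgentState(goal=goal)
    for _ in range(max_steps):              # bounded here; real agents set their own stop criteria
        action = plan(state)                # plan
        outcome = act(action)               # execute
        state.observations.append(outcome)  # observe, then adapt on the next pass
    return state

for step in run_agent("demonstrate the loop").observations:
    print(step)
```

A real agent would swap the toy act function for tool calls and the trivial plan function for a language model, but the loop itself is this small, which is precisely why automation across the attack lifecycle scales so readily.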
Another emerging risk lies in predictive AI. These systems analyse large datasets to forecast future events or behaviours, a capability widely used in business analytics and cyber-defence. But attackers can also use predictive models to study patch cycles, network patterns or vulnerability disclosures to identify the most promising targets. Analysts cited in the World Economic Forum’s Global Cybersecurity Outlook suggest that AI-driven threat actors may increasingly rely on data analysis to refine the timing and precision of their attacks.
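Part of the concern is that the underlying arithmetic is not exotic. The toy sketch below estimates an exposure window from invented disclosure-to-patch delays; it is the same simple calculation defenders can run against their own telemetry to prioritise patching.

```python
# Toy exposure-window estimate: the kind of simple statistic predictive
# targeting builds on. The disclosure-to-patch delays are invented purely
# for illustration.
import statistics

patch_delays_days = [14, 21, 9, 30, 18, 25, 12]  # hypothetical historical lags

mean_delay = statistics.mean(patch_delays_days)
# quantiles(n=10) returns nine cut points; the last approximates the 90th percentile
p90_delay = statistics.quantiles(patch_delays_days, n=10)[-1]

print(f"typical exposure window: ~{mean_delay:.0f} days; "
      f"90th percentile: ~{p90_delay:.0f} days")
```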
Taken together, these developments point to a fundamental shift in cyber-security. Traditional attacks relied heavily on human skill and manual effort. AI, however, enables automation, scale and adaptability. Cyber-criminals can potentially test multiple attack strategies, refine them in real time and launch campaigns at a speed that would be impossible for humans alone.
For organisations, the challenge is not simply adopting artificial intelligence but understanding how adversaries may use it. Security teams are already responding by integrating AI into threat detection, behavioural analytics and incident response systems. Governments are also beginning to address the risks through regulatory frameworks such as the EU AI Act, which aims to establish safeguards around the development and deployment of artificial intelligence.
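On the detection side mentioned above, a minimal sketch of AI-assisted behavioural analytics might look like the following. It assumes scikit-learn and NumPy are available; the synthetic session features, the choice of an isolation forest and the contamination rate are all illustrative assumptions rather than anything prescribed by the sources cited in this article.

```python
# Behavioural anomaly detection sketch: flag an unusual login session.
# The synthetic data, feature choices and contamination rate are
# illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# One row per session: [login hour, failed attempts, MB transferred]
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around mid-morning
    rng.poisson(0.2, 500),    # failed attempts are rare
    rng.normal(50, 15, 500),  # modest data transfer
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

# A suspicious session: 3 a.m. login, repeated failures, exfiltration-sized transfer
suspect = np.array([[3.0, 9.0, 900.0]])
print(detector.predict(suspect))  # [-1] means flagged as anomalous
```

In practice such models are trained on real telemetry and combined with rules and human triage; the sketch only shows the shape of the approach.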
The long-term implication is clear: cyber-security is entering an era where intelligent systems operate on both sides of the conflict. As AI becomes more capable, defending digital infrastructure will increasingly depend on how quickly organisations can adapt to threats that are no longer created solely by humans, but also by machines.