On 6 November 2025, teissTalk host Thom Langford was joined by Michela Resta, Solicitor, CYXCEL; Paolo Palumbo, Vice President, WithSecure Intelligence; and Tiago Rosado, Chief Information Security Officer, Asite.
The October 2025 GTIG AI Threat Tracker highlights that malicious groups are exploiting AI to dynamically generate, rewrite, and disguise malicious code mid-execution. "For the first time, GTIG has identified malware families, such as PROMPTFLUX and PROMPTSTEAL, which use Large Language Models (LLMs) during execution," the report said. These tools can "generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand."

What helps businesses against these new trends is good governance, rather than waiting for Google to sort out the problem. That said, Google should also do more, as it was Google that unleashed these technologies in the first place. Some new AI products, such as OpenAI's browser, have already reached the point where the drawbacks outweigh the benefits, and putting these capabilities within criminals' reach now will come home to roost later. For the moment, though, cyber criminals optimise their return on investment by largely leaving these sophisticated AI tools alone and relying on the same old vulnerabilities they have been exploiting for the past ten years.

The UK has taken a pro-innovation stance on regulation. The GDPR remains in force, however, and, being technology-agnostic, it also protects the data that goes into AI systems. In any case, a business with sound general cybersecurity controls in place will find they cover it against the bulk of AI-driven attacks as well.
Businesses can't rely exclusively on regulation for protection against AI-powered cyber-attacks; they must put the right general security controls in place, which will also shrink their attack surface against exploits that leverage AI. To bridge the gap between legislators and security experts, there is a great initiative called Hackers in the House, which gives UK security professionals an opportunity to meet policymakers and help inform and influence the development of UK cyber policy. Good legislation doesn't ban activities outright but specifies the controls under which cutting-edge digital technologies may be deployed – mandating, for example, that a given ML system may only be used behind role-based access controls.
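A rule of that kind can be made concrete with very little code. The sketch below is purely illustrative – the role names, the AccessDenied exception and the predict() stub are hypothetical, not drawn from any regulation or product:

# Hypothetical sketch: role-based access control in front of an ML inference endpoint.
ALLOWED_ROLES = {"ml-analyst", "ml-admin"}   # roles approved to query the model


class AccessDenied(Exception):
    pass


def predict(features: list[float]) -> float:
    # Placeholder for the real model call; returns a dummy score.
    return 0.0


def predict_with_rbac(user_roles: set[str], features: list[float]) -> float:
    """Run inference only if the caller holds an approved role."""
    if not (user_roles & ALLOWED_ROLES):
        raise AccessDenied("caller lacks a role approved for this ML system")
    return predict(features)


print(predict_with_rbac({"ml-analyst"}, [0.2, 0.7]))   # a permitted caller

The point is not the mechanism itself but that the legal requirement ("deploy only with these controls") maps directly onto an enforceable technical check.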
There is huge pressure on businesses to implement AI but, to avoid wasting resources, they should always start by identifying the problems that actually plague the organisation. According to analysts, a business needs on average 30-45 security solutions to cover the requirements of a large tech stack – a number that can be reduced significantly if the cyber fundamentals are in place. New AI deployments increase system complexity and can create new attack surfaces – factors that must be weighed when assessing the costs and benefits of a new AI investment, alongside AI's resource intensity (including the staff hours required to review its output) and whether the company has the time and capital to meet those needs. Integrations also create a great deal of extra exposure. To make AI systems more transparent, vendors should adopt a bill of materials (BOM) type of protocol that lists all the components – data, models, software and hardware – used in an AI system, supporting both security and compliance.
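To illustrate what such a manifest might capture, here is a minimal, hypothetical AI-BOM record; the field names and values are assumptions for illustration and do not follow any particular AI-BOM standard:

# Hypothetical sketch of an AI bill of materials (AI-BOM) record.
import json
from dataclasses import dataclass, field, asdict


@dataclass
class AIBOM:
    system_name: str
    model: str                                      # base or fine-tuned model in use
    model_version: str
    training_data_sources: list[str] = field(default_factory=list)
    software_dependencies: list[str] = field(default_factory=list)
    hardware: str = ""
    third_party_services: list[str] = field(default_factory=list)


bom = AIBOM(
    system_name="invoice-triage-assistant",
    model="example-llm",
    model_version="2025-10",
    training_data_sources=["internal-invoices-2019-2024"],
    software_dependencies=["python 3.12", "example-inference-runtime 1.4"],
    hardware="on-prem GPU cluster",
    third_party_services=["hosted-embedding-api"],
)

print(json.dumps(asdict(bom), indent=2))   # the record a vendor could publish alongside the system

A buyer or auditor reading such a record can see at a glance which models, data sources and third-party services an incident or compliance question would touch.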
AI can't be used without supervision in cyber defence either, but it can do a great job of triaging alerts and detecting less sophisticated cyber-attacks (a minimal sketch of such a set-up follows below). Increasingly, a control layer will be built into defence systems to keep humans in the loop – and humans will probably always stay there, for accountability and control. This must happen without slowing down defence and handing further advantage to attackers, though: there may come a point where human involvement slows the AI's decision-making so much that it becomes unviable.

We are still at a stage where a breach is always the victim company's responsibility and the vendors who sold software with vulnerabilities are never held to account. This should change soon – and there are already signs of a shift, with insurers going after software developers. For incident response, you don't need shiny digital tools so much as a piece of paper with phone numbers and a clear idea of who to contact and what to do if the worst happens.
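On the triage point above, the control layer can be as simple as a confidence threshold that decides whether an alert is closed automatically or escalated to an analyst. The sketch below is hypothetical – the threshold, field names and classifier stub are illustrative assumptions, not any vendor's implementation:

# Hypothetical human-in-the-loop control layer for AI-assisted alert triage.
AUTO_CLOSE_THRESHOLD = 0.9   # above this confidence the AI may act alone


def classify_alert(alert: dict) -> tuple[str, float]:
    # Placeholder for an AI model returning (verdict, confidence).
    return "benign", 0.55


def triage(alert: dict) -> str:
    verdict, confidence = classify_alert(alert)
    if confidence >= AUTO_CLOSE_THRESHOLD:
        return f"auto-{verdict}"           # the AI handles the routine case
    return "escalate-to-analyst"           # a human stays in the loop


print(triage({"source": "edr", "rule": "suspicious-powershell"}))

Tuning that threshold is exactly the accountability-versus-speed trade-off the panel described: set it too low and analysts drown in escalations; set it too high and the human is no longer meaningfully in the loop.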
MCP (Model Context Protocol) servers – programs that act as intermediaries between AI agents and internal systems – introduce new vulnerabilities and entry points that traditional security measures often miss.
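To see why, consider what an MCP server does: it exposes internal functions as "tools" an AI agent can call, so every tool becomes a new entry point that needs the same input validation and authorisation as any public API. The sketch below follows the quick-start pattern of the reference Python SDK for MCP; the server name, tool and directory allow-list are hypothetical:

# Hypothetical MCP server exposing one file-reading tool to an AI agent.
# Without the allow-list check, a prompt-injected agent could be steered
# into reading arbitrary internal files through this entry point.
from pathlib import Path
from mcp.server.fastmcp import FastMCP

ALLOWED_DIR = Path("/srv/reports").resolve()   # the only directory the agent may read

mcp = FastMCP("internal-reports")


@mcp.tool()
def read_report(filename: str) -> str:
    """Return the contents of an approved report file."""
    target = (ALLOWED_DIR / filename).resolve()
    if ALLOWED_DIR not in target.parents:
        raise ValueError("path outside the approved reports directory")
    return target.read_text()


if __name__ == "__main__":
    mcp.run()

Traditional controls that watch user sessions or network perimeters rarely see this traffic, which is why each tool needs its own guardrails.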
Expect more transparency around AI models in 2026.
When contracting a SOC-as-a-service provider, include a break clause. If, for example, they fail to detect more than 75% of the artifacts in a red team exercise, they are "out".
With good training, and by achieving a certain standard of technical knowledge across the business, you can reduce the attack surface by 95%.
As a first step to improving your defences, reduce your attack surface and map where your data is.
Always make sure you know what AI you’re using – what it touches, what it’s processing. When an incident happens, regulators will want to know what you do and how you do it.
Ask your cyber team what their biggest problem is and see what AI tools you could use to address it.