On 16 April 2026, teissTalk host Jonathan Craven was joined by Milena Maneva, Head of BC and Resilience EMEA, Cantor Fitzgerald; Lessie Sciba, Deputy Managing Director, Cyber Readiness Institute; and James Tucker, Head of CISO, Zscaler.
The AI Security Institute (AISI) has urged organisations to double down on “cybersecurity basics” and to consider harnessing AI to protect systems after testing Anthropic’s latest model. Although Anthropic has promised not to release the new model to the public, there are concerns that it will eventually find its way into the hands of threat actors. Now the UK’s AISI has weighed in, revealing in its evaluations that the model represents “a step up over previous frontier models in a landscape where cyber performance was already rapidly improving.” However, based on the results of its corporate network attack simulation, AISI “cannot say for sure” whether Mythos Preview would be able to successfully attack “well-defended systems.”
What’s alarming is that Mythos is only a general-purpose model – one trained specifically for security purposes will be even more powerful. On the upside, however, new vulnerabilities and the growing speed of exploitation will accelerate the replacement of legacy systems with new, secure ones. It may also mean that cyber security budgets become more generous. The root cause, however, is that organisations don’t understand their own ecosystem – the critical services they depend on, their minimum viable product, let alone their third or fourth parties. Investments in security can also be framed by highlighting what they enable: new investments, access to new markets or speedy acquisitions.
Understanding a business’s critical infrastructure is often hindered by the fact that those involved in compliance are mostly non-technical people. And while many companies have a zero-trust strategy, it’s often not applied consistently across the entire organisation. The other common strategy, an outright ban on gen AI tools at work, will instead lead to employees working around it, creating new vulnerabilities. As a rule of thumb, AI agents should be regarded as identities whose access to data must be controlled. There must also be guardrails controlling what data goes into the model, and visibility into what the agents are doing. Even co-pilots can become an attack surface that cyber criminals will try to exploit.
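The rule of thumb above – agent as identity, guardrails on model inputs, visibility into agent actions – can be sketched in a few lines. This is a minimal illustration only; all names (the agent ID, scope sets and the toy denylist) are hypothetical, not a reference to any real product or API.

```python
# Illustrative sketch (hypothetical names): an AI agent treated as an
# identity with explicitly scoped data access, a guardrail on what goes
# into the model, and an audit trail for visibility into agent activity.

AGENT_SCOPES = {
    # Identity -> datasets it may read; anything else is denied by default.
    "support-copilot": {"kb_articles", "ticket_metadata"},
}

SENSITIVE_MARKERS = ("password", "api_key", "ssn")  # toy denylist for prompts

audit_log = []  # every access decision is recorded for review

def guardrail(prompt: str) -> bool:
    """Block prompts that appear to push sensitive data into the model."""
    return not any(marker in prompt.lower() for marker in SENSITIVE_MARKERS)

def agent_read(agent_id: str, dataset: str, prompt: str) -> bool:
    """Permit a read only if the agent's identity is scoped to the dataset
    and the prompt passes the guardrail; log the decision either way."""
    allowed = dataset in AGENT_SCOPES.get(agent_id, set()) and guardrail(prompt)
    audit_log.append({"agent": agent_id, "dataset": dataset, "allowed": allowed})
    return allowed

# In scope and clean prompt -> allowed
agent_read("support-copilot", "kb_articles", "summarise the outage article")   # True
# Out-of-scope dataset -> denied, but still logged
agent_read("support-copilot", "hr_records", "list all employees")              # False
# In scope, but the guardrail catches leaked credentials -> denied
agent_read("support-copilot", "kb_articles", "store this password: hunter2")   # False
```

The deny-by-default scoping mirrors the zero-trust posture discussed above: access is granted per identity and per dataset, never assumed, and the audit log gives the visibility the panel called for.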
© 2025, Lyonsdown Limited. teiss® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543