ao link
Affino
Search Teiss
My Account
Remember Login
My Account
Remember Login

teissTalk: Securing your organisation in the age of GenAI

On 9 April 2026, teissTalk host Thom Langford was joined by David Cartwright, CISO, Santander International; Lisa VentuRA, CEO and Founder, AI and Cyber Security Association; Nina Pettersen, Senior Manager, Gritera Security; and Lionel Litty, CISO, Menlo.

Linked InXFacebook
close

Views on news


Clawdbot’s popularity has been meteoric, racking up more than 140,000 stars and 20,000 forks on its GitHub repository. One issue behind AI agents like Clawbot being insecure by design is because LLMs are unable to distinguish between different contexts. Now, thanks to its persistent memory, attackers can attempt to compromise a OpenClaw-based agentic architecture through time-delayed attacks, as attack strings can persist in memory, resulting in memory poisoning.


Perhaps the biggest factor in why AI security is struggling to catch up is how agentic AI tools such as OpenClaw introduce new levels of abstraction not seen in traditional digital infrastructure. Gen AI means that we must deal with the problems we’ve already had for some time but at much larger speed and scale. Many people already use AI at work, and they’ll continue to do so whether the company has a policy for it or not, while the company is also using considerable competitive advantage if it bans gen AI use altogether.  


Reaping the benefits while minimising threats


Agentic AI deployments must start with some threat modelling, using tools such as MAESTRO to understand what the agents are doing, whether they are accessing sensitive data or take input from unreliable sources. Businesses have been struggling with least privilege for a long time, but agentic AI will further amplify the threats that its lack poses. As I&A management rules are often not adhered to in a typical business, individuals usually have broader access than they should and, by extension, so will AI agents if they run under humans’ login.  Monitoring and auditing are inherently difficult with gen AI models – you can only tell whether the output is right or wrong. One approach is to deploy supervisor AI agents that monitor how the operating ones execute tasks, or, alternatively, different models can be used to execute sub-processes, which makes the detection of anomalies easier. 


The more AI agents are involved in decision making, the trickier establishing ownership and responsibility will be.  However, filtering data before it goes into an AI agent may be an effective way of enhancing its security controls. Also, tying enterprise AI use to training can enable more responsible use of these tools and a better understanding of what constitutes sensitive data. Good AI and data governance will empower employees as well. There are also more secure use cases than autonomous agentic AI, such as using them as assistants of threat analysts to help triage alerts. But AI can be very useful for smaller businesses with limited resources too, who can use it for threat monitoring outside office hours. 


The panel’s advice

  • If you push the use of gen AI underground, your people will find workarounds.
  • DLP technology, a capability often switched off earlier, will probably come into its own as a security tool to protect a growing risk surface.
  • Security leaders must start by mapping out where AI is being used in the organisation and what data AI tools have access to. 
Linked InXFacebook
Affino

Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2025, Lyonsdown Limited. teiss® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543