
AI agents are reshaping the threat landscape as quickly as organisations adopt them. That was the challenge highlighted at a TEISS breakfast briefing at The Goring Hotel in London, hosted by Menlo Security. Attendees, all CISOs and senior security leaders from a range of sectors, discussed how to govern AI adoption, secure non-human identities and defend against a new generation of agentic threats.
Menlo’s focus had been on browser security, said Dan Foster, EMEA Sales Director at Menlo Security, but this was now changing as AI becomes embedded into every application, including browsers. Jonathan Lee, Cybersecurity Strategist at Menlo, framed the challenge: browser activity that mimics a human but is actually an AI agent raises questions existing security models were not built to answer.
Google’s Gemini is now integrated with Chrome in the US, with expansion to other territories coming soon, which makes agentic browsers difficult to block, with the data flowing through AI-powered browser sidebars remains largely unexamined. Agent-to-agent communication and MCP introduce further risk.
Governing AI at pace
Staff have a sense that AI assistance through tools like Microsoft Copilot is not enough, so they request additional tools. However, they often don’t know why they want them, attendees said. One participant said their IT department had gone from reviewing one or two AI use cases a month to 75. Many overlap, but the volume is still unmanageable.
Organisational strategy must come before technology, the group agreed. One attendee described setting up AI centres of excellence, defining the need and process before choosing a tool. Others emphasised measuring productivity gains: show us what people are doing with the time AI gives them, one said.
Use cases ranged from ‘vibe coding’ to process automation. MCP – an open-source method for connecting AI tools to external apps and data sets – was cited as the next evolution of APIs. Every vendor now offers an MCP server, but each brings risk, one attendee cautioned.
Non-human identities and the agent challenge
Securing non-human identities emerged as a pressing challenge. One participant reported a human-to-computer identity ratio of one to eighty, many insufficiently secured. Another described managing 150 agents, with identity management the central concern, as only four of the agents were officially sanctioned.
Traditional accountability assumes a person behind every action, and that breaks down with agents. Who owns one — a user, a department, a team? Agents should be treated as team members with the line manager accountable, the group agreed, but this is untested at scale.
Shadow AI compounds the problem: vendors keep adding AI features to tools, each requiring security assessment.
Trust, supply chains and agentic threats
Several argued for a dual-track approach: tools must deliver genuine benefit, and organisations need safeguards against mistakes and malicious actions. OpenClaw, the open-source AI agent platform, illustrated the point. It is poorly understood, with hype creating a sense of urgency that has people rushing products into production that would not normally meet approval standards. At one point, 17 per cent of published OpenClaw skills were found to be malicious.
Mr Lee returned to the browser. Agentic browsers can operate in hidden tabs where users never see what is happening. SEO poisoning was cited as a growing concern: it can push a rogue website into top search results and trick an agent into visiting it. That could provide a way for malicious actors to gain access to company data.
Prompt injection and LLM poisoning were considered less likely — an attacker that deep in a network would gain more by opting for ransomware — but SEO poisoning requires no such access. Hidden instructions in Reddit posts or YouTube comments can trigger prompt injections, and agents could act on concealed instructions in downloaded documents without the user’s knowledge.
Human oversight in an AI-driven world
Despite the risks, attendees acknowledged that AI adoption is inevitable. We think linearly, one observed, but AI grows exponentially. There is more data daily than humans can manage, and offloading some to AI is a matter of time. But the group was firm that AI should remain assistive for the foreseeable future.
AI cannot tell you what it does not know, and confidence scores cannot always be trusted. Lower temperature settings reduce hallucinations and permanent rules improve accuracy, but large language models are inherently backward-looking. The real value in threat intelligence is original insight, and LLMs cannot provide that. Very sophisticated attacks, one attendee concluded, are now becoming very simple to execute.
The copilot analogy resonated: you must see yourself as the senior pilot. A skilled developer using AI will produce good work, but a junior relying on it without mentorship may not spot its mistakes. That said, skills once seen as foundational are becoming irrelevant for younger workers, who will enter the profession without needing them.
Closing the discussion, Mr Foster said concerns about visibility into AI tools and balancing risk with opportunity were clearly shared by the group. Mr Lee reflected that governance was the consistent theme, and posed a question that lingered: how do you implement it when the technology changes this fast?
To learn more, please visit: www.menlosecurity.com
© 2025, Lyonsdown Limited. teiss® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543