As organisations adopt more AI-driven automation, such as customer service bots and workflow engines, security teams are confronting a rapidly expanding attack surface. In 2025, breaches increasingly stem not only from traditional insiders or opportunistic cyber-criminals, but also from unmonitored AI agents with excessive permissions acting autonomously across core business systems. These instances of “shadow automation” are emerging as a new form of insider threat: powerful, efficient and often invisible to standard cyber-security controls.
Analysts have warned that enterprise AI platforms can be breached “in minutes” when misconfigured or insufficiently monitored, exposing sensitive workflows and authentication tokens. That was the message from Tenable’s chief security officer, who stressed that many organisations underestimate how easily AI systems can be manipulated or hijacked without triggering alerts.
This risk is amplified as attackers accelerate their own use of automation. AI-enabled cyber-crime is evolving faster than many organisations can manage their internal AI deployments. Axios recently reported a rise in AI-driven attacks, particularly those leveraging deepfake social engineering and automated credential-stuffing across multiple industries. Cyber-criminals are exploiting the same automation advantages businesses increasingly rely on.
The FBI has also highlighted widespread automated account-takeover campaigns responsible for more than $262 million in losses in 2025. These attacks succeed in part because many organisations lack visibility into how automated systems interact with their identity infrastructure. When an AI agent is compromised or behaves unexpectedly, its actions can resemble legitimate internal activity.
This exposes a blind spot in identity and access management (IAM). Most companies maintain clear IAM policies for employees and contractors. Yet far fewer apply the same rigour to non-human identities such as AI agents, bots, API services and automation pipelines. These systems frequently hold elevated privileges, make autonomous decisions and process sensitive data, but are rarely logged, audited or governed with the same discipline as human accounts.
Addressing this gap requires extending IAM controls to AI-driven systems. Organisations should begin by auditing all automated processes to understand where AI is operating, what data it accesses and which permissions it uses. From there, they can enforce least-privilege access, establish clear governance standards for AI entities and introduce continuous monitoring to flag anomalies. Regular training for IT and security teams on emerging AI-related threats will further support vigilance and preparedness.
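In practice, the audit step can begin with the cloud provider's own IAM APIs. The sketch below is a minimal example, assuming an AWS environment with the boto3 SDK configured: it lists IAM roles, flags any with the broad AdministratorAccess managed policy attached, and notes when each role was last used. The 90-day staleness threshold and the focus on a single policy are illustrative choices, not a complete audit methodology.

```python
"""Minimal sketch: flag over-privileged or dormant non-human IAM roles.

Assumes an AWS environment with boto3 credentials configured; the 90-day
threshold and the AdministratorAccess check are illustrative, not a
complete audit.
"""
from datetime import datetime, timedelta, timezone

import boto3  # pip install boto3

ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"
STALE_AFTER = timedelta(days=90)  # example threshold for "dormant"

iam = boto3.client("iam")

for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        name = role["RoleName"]

        # Use the broadest managed policy as a proxy for excess privilege.
        attached = iam.list_attached_role_policies(RoleName=name)
        over_privileged = any(
            p["PolicyArn"] == ADMIN_POLICY_ARN
            for p in attached["AttachedPolicies"]
        )

        # RoleLastUsed is only populated once a role has been used;
        # a missing entry may itself signal a dormant automation identity.
        last_used = iam.get_role(RoleName=name)["Role"].get(
            "RoleLastUsed", {}
        ).get("LastUsedDate")
        dormant = (
            last_used is None
            or datetime.now(timezone.utc) - last_used > STALE_AFTER
        )

        if over_privileged or dormant:
            print(f"review: {name} admin={over_privileged} last_used={last_used}")
```

The same inventory-then-flag pattern applies to other identity stores, such as service principals in Azure AD or service accounts in GCP; the point is that non-human identities can be enumerated and reviewed with the tooling organisations already have.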
Shadow automation also arises when business units adopt AI tools without informing IT or security. A marketing team might deploy an AI content engine; finance may automate reconciliation through a third-party AI service; customer support might introduce an AI triage layer. Each new tool introduces its own credentials, data flows and behavioural patterns, creating a fragmented and often unmanaged network of automated identities. What begins as a productivity shortcut can become a material security risk.
Treating AI agents as first-class identities is therefore essential. They require a full lifecycle: onboarding, monitoring, access reviews, behavioural oversight and decommissioning. Security teams should maintain an up-to-date inventory of all automated processes, ensure both human and non-human accounts follow least-privilege principles, and apply zero-trust rules consistently across AI workflows. This includes reviewing AI agent logs, conducting periodic audits of automated interactions with sensitive data and establishing alerts for unusual machine behaviour, not just user actions.
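One way to operationalise that behavioural oversight is to baseline each non-human identity's expected actions and alert on anything outside the baseline. The sketch below is a self-contained, hypothetical example: the AgentBaseline structure, the triage-bot baseline and the events are all invented for illustration. In a real deployment the baseline would be derived from audit logs and alerts routed to a SIEM.

```python
"""Minimal sketch: alert on machine behaviour outside a per-agent baseline.

The baselines and events below are invented for illustration; in practice
the baseline would be learned from audit logs and alerts sent to a SIEM.
"""
from dataclasses import dataclass, field


@dataclass
class AgentBaseline:
    """Expected behaviour for one non-human identity."""
    allowed_actions: set[str]
    allowed_resources: set[str]
    observed_anomalies: list[str] = field(default_factory=list)

    def check(self, action: str, resource: str) -> bool:
        """Return True if the event fits the baseline; record it otherwise."""
        if action in self.allowed_actions and resource in self.allowed_resources:
            return True
        self.observed_anomalies.append(f"{action} on {resource}")
        return False


# Hypothetical baseline for a support-triage bot: ticket access only.
triage_bot = AgentBaseline(
    allowed_actions={"read_ticket", "update_ticket_status"},
    allowed_resources={"ticketing_system"},
)

# A compromised or misbehaving agent reaching into the HR database looks
# like any other API call unless it is checked against a baseline.
events = [
    ("read_ticket", "ticketing_system"),
    ("export_records", "hr_database"),  # should trigger an alert
]

for action, resource in events:
    if not triage_bot.check(action, resource):
        print(f"ALERT [triage_bot]: unexpected {action} on {resource}")
```

The design point is that the alert is keyed to the machine identity itself, so an AI agent's deviation from its own normal behaviour is visible even when each individual action would pass as legitimate internal activity.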
The rise of shadow automation reflects a broader reality in 2025: as organisations delegate more decisions to AI, they must also assume responsibility for the risks those systems introduce. Without effective governance, AI becomes not only a tool for efficiency but a potential threat hidden in plain sight. The question for IT leaders is clear: how will your organisation strengthen oversight and governance to reduce the risks posed by unmonitored AI automation?