ao link
Affino
Search Teiss
My Account
Remember Login
My Account
Remember Login

Protecting the agentic workspace

Matt Cooke at Proofpoint explains why the Moltbook story underscores a critical new era in cyber-security

Linked InXFacebook

The Moltbook story underscores a critical new era in cyber-security: one where AI is transforming the digital workspace into an agentic workspace. Work is no longer done only by humans, but increasingly by AI agents acting on behalf of people. This shift means attacks are now directed at systems that can decide and act, not merely systems that store or transmit data. As organisations roll out AI assistants and agents to summarise information, draft communications and run workflows across business tools, the attack surface expands. It grows not just from software flaws, but through manipulation via language itself.

 

In this emerging paradigm, every collaboration, whether human-to-human or human-to-AI, carries data risk. The most valuable target is often the agent’s connected permissions: its access to internal knowledge bases, inboxes, ticketing systems and cloud services. Undermining the agent’s judgement can be enough to cause real-world harm without deploying traditional malware. Moltbook illustrates how trust boundaries are becoming harder to define. AI agents are built to ingest untrusted content (emails, documents and web pages) and combine it with trusted internal context in order to be useful. This creates a new category of risk, one where adversaries do not need technical exploits. Instructions can simply be embedded in the material the agent is expected to read. When an agent is set to follow instructions and operate quickly, a single successful manipulation can spread across systems at speed, magnifying the impact far beyond what a human user would typically do manually.

 

Real-world dangers of prompt injection attacks

The real-world risk from prompt injection becomes particularly serious when AI agents have access to sensitive information and the ability to use tools. Prompt injection can override or divert an agent’s priorities, effectively turning untrusted text into a control channel that persuades the agent to retrieve confidential material (customer data, internal strategy documents, source code, credentials or incident details) and disclose it in a response, an email or a ticket. In practice, this may look like the agent ’helpfully’ pulling in extra context from a connected drive or wiki to answer a question, without recognising that the request has been crafted to trigger a leak.

 

Beyond data exposure, prompt injection can also drive harmful actions. If an agent can send messages, update records, create accounts or approve workflows, an attacker can steer it towards fraudulent or damaging outcomes under the guise of legitimate automation. This is akin to business email compromise, but faster and with fewer human checks. Because agents often consult the same documents and templates repeatedly, poisoned content can create persistence: a malicious instruction planted in a commonly used knowledge source can influence behaviour repeatedly until it is detected and removed.

 

This is why traditional tools and policies built for the digital workspace (email security, cloud app controls, collaboration platform protections) remain critical, but are no longer sufficient on their own. They must evolve to protect both humans and AI agents. Prompt injection should be treated not only as a model-safety issue, but as a core security and governance concern linked to permissions, tool access and controls over how sensitive data is retrieved and shared. The goal must be to enable AI and humans to collaborate securely and with confidence, so businesses can embrace the technology without compromising security.

 


 

Matt Cooke is EMEA Cybersecurity Strategist at Proofpoint

 

Main image courtesy of iStockPhoto.com and Khanchit Khirisutchalual

Linked InXFacebook
Affino

Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2025, Lyonsdown Limited. teiss® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543