ao link
Affino
Search Teiss
My Account
Remember Login
My Account
Remember Login

Beyond OpenClaw: the structural risks of agentic AI

Salvatore Gariuolo at TrendAI, a business unit of Trend Micro, reflects on the learnings from the unfolding Clawdbot controversy

Linked InXFacebook

In recent weeks, an open source AI personal assistant called Clawdbot, later renamed OpenClaw, has attracted attention from developers, researchers, and security teams alike. Designed to act autonomously on a user’s behalf, managing files, accounts, and connected services, the tool spread quickly through online communities experimenting with agentic AI.

 

This popularity has also drawn scrutiny. Security researchers have shown how easily instances can be misconfigured and exposed, while others have pointed out that the assistant stores sensitive information in plain text on local systems, making it an attractive target for infostealer malware. These are all legitimate concerns, and they warrant careful examination and remediation.

 

I believe that focusing solely on OpenClaw’s individual weaknesses risks missing a more uncomfortable truth. Even if every identified flaw were resolved, the broader risks would not disappear. What OpenClaw has brought into focus is not a single insecure tool, but a deeper structural issue with agentic AI systems themselves. The problem is not just how these assistants are built, but what they are designed to do.

 

Agentic AI assistants are fundamentally different from traditional software. They are not passive tools waiting for instructions. They are designed to act. In order to be useful, they require autonomy and wide ranging access to digital environments. They read emails, manage files, interact with cloud services, and trigger workflows across multiple systems. In other words, they are granted authority. This authority is exactly where the risk lies.

 

OpenClaw shows how quickly such systems can move from experimental projects to tools that go viral. The appeal is obvious. The development cycles are short, distribution is frictionless and a growing community of enthusiasts is eager to download and then run open source agents locally (and often without the safeguards built into enterprise platforms). This enthusiasm is understandable. Open source assistants offer freedom, flexibility, and the ability to customise behaviour in ways that commercial systems intentionally limit. But this same freedom moves responsibility for security almost entirely onto the user.

 

In practice, this means that the safety of an agent like OpenClaw is determined less by its codebase than by how it is configured. Users may grant full system access for convenience, they might reuse credentials across services, connect sensitive accounts, or install unvetted skills and plugins. None of these actions introduce new categories of risk. Instead, they magnify existing ones. They turn an already powerful assistant into a single point of failure across an entire digital ecosystem.

 

This is why OpenClaw is best understood as an amplifier. It doesn’t create new threats, but instead it amplifies the consequences of error, inexperience, or misplaced trust. A compromised or manipulated agent can access everything it has been allowed to touch. That might include personal communications, corporate data, cloud storage, or financial applications. Once that access is in place, exploitation becomes less about breaking sophisticated defences and more about nudging an autonomous system in the wrong direction.

 

The security discussion has so far focused on misconfigurations and exposed instances, but these are symptoms rather than root causes. The deeper issue is delegation. Agentic systems operate by making decisions on behalf of users. They interpret goals, plan actions, and execute tasks with minimum oversight. This delegation is what makes them attractive. It is also what makes them dangerous.

 

Unlike traditional automation, agentic assistants rely on large language models that are non deterministic by nature. Their behaviour can vary based on context, phrasing, or subtle changes in input. When such systems are given the ability to act directly, unpredictability becomes a risk factor in its own right. Even without malicious interference, an agent may take actions the user did not anticipate or intend. When interference is introduced, through a crafted prompt embedded in a document or a malicious skill, the consequences can escalate quickly.

 

From this perspective is an early and visible example of a broader trend. As more agentic tools emerge, the security community cannot realistically analyse every new release in depth before it gains traction. Nor should that be the primary goal. Pointing out flaws after the fact may reduce harm in specific cases, but it does little to address the underlying challenge of how authority is assigned and controlled.

 

The more pressing question is whether users and organisations truly understand what they are handing over when they deploy an agentic assistant. Authority, once delegated, is difficult to contain. High capability systems demand high trust, yet trust is often granted implicitly rather than earned. Convenience encourages over-permissioning. Speed encourages shortcuts. In this environment, mistakes are not just possible, they are likely.

 

Managing this tension requires a shift in mindset. Capability and risk are inseparable. The more an assistant can do, the more damage it can cause if something goes wrong. That does not mean agentic systems should be avoided altogether, but it does mean their scope should be deliberately constrained. Agents should only be able to perform tasks they genuinely need to execute. Access to external systems and sensitive data should be limited and reviewed. High impact actions should require oversight or confirmation, even if that reduces efficiency.

 

There are also decisions that may warrant a hard stop. Some tasks carry consequences that are simply too severe to automate safely. Financial actions are a clear example. While it is technically possible for an AI agent to initiate transactions, the risks of error or manipulation are obvious. The absence of such capabilities in some mainstream assistants is not a technical limitation, but a deliberate choice grounded in risk awareness.

 

OpenClaw highlights two realities. First, open source agentic tools are not designed for casual use. They demand a level of technical understanding and security discipline that many users do not have. Second, the agentic paradigm itself comes with unavoidable trade offs. Careful configuration and scoped permissions can reduce exposure, but they cannot eliminate it entirely, because delegating authority always involves risk.

 

The challenge ahead is not to slow innovation or suppress open source development. It’s to develop a clearer, more honest understanding of what agentic systems represent. They are not just smarter tools. They are actors within our digital environments. Treating them as such requires restraint, informed decision making, and a willingness to accept that some efficiencies are not worth the potential cost.

 


 

Salvatore Gariuolo is a Senior Threat Researcher at TrendAI, a business unit of Trend Micro

 

Main image courtesy of iStockPhoto.com and roberthyrons

Linked InXFacebook
Affino

Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2025, Lyonsdown Limited. teiss® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543