
The Expert View: From urgency to uncertainty - governing AI in the age of uncontrolled adoption

Sponsored by Check Point

A group of security leaders, technologists, and risk specialists gathered at a recent dinner sponsored by Check Point to discuss a shared challenge: how to govern artificial intelligence when its adoption is no longer optional, orderly, or, in some cases, even fully understood.

If there was one point of consensus, it was that AI has broken the traditional model of enterprise technology deployment. Unlike cloud, mobile, or even the internet itself, this is not a capability being cautiously piloted and scaled. Instead, it is being mandated, top‑down, at pace, and often without the guardrails that organisations would typically demand.

 

A technology pushed from the top

 

Historically, innovation has bubbled up from technical teams seeking efficiency gains or competitive advantage. AI has inverted that dynamic. Boardrooms, under pressure to demonstrate progress and unlock productivity, are driving adoption before risks are fully articulated.

As one participant observed, “We’ve never seen a technology pushed down from the board at this level.” The implication is profound: security, governance, and risk teams are being asked to secure systems that already exist, rather than shaping them from inception.

This inversion creates a structural tension. Decisions about AI deployment are often made in pursuit of growth or cost optimisation, while the responsibility for managing risk sits elsewhere. The result is a widening gap between ambition and control.

 

From controlled systems to uncontrolled sprawl

 

Compounding this challenge is the nature of AI itself, particularly agentic AI. The ability for non‑technical users to build workflows, automate decisions, and integrate systems has democratised development. But it has also created an environment where oversight is fragmented or absent.

 

Several participants likened the phenomenon to the early days of spreadsheets: tools created and relied upon across organisations, but poorly documented, rarely governed, and often opaque to anyone beyond their creator. The difference now, however, is scale and impact. Where spreadsheets influenced decisions, AI agents can take them.

 

In many organisations, hundreds (if not thousands) of agents are already in operation, often without a clear understanding of what data they access, what permissions they hold, or how they will behave over time. The risk is not just proliferation, but autonomy without accountability.

 

The illusion of intelligence

 

A recurring theme throughout the discussion was the mismatch between perception and reality. AI systems, particularly large language models, are designed to be helpful and responsive. However, that helpfulness can mask fundamental limitations.

 

Examples of hallucination, where systems fabricate outputs when they lack data, are not hypothetical; they are happening in live environments. In one case, an AI model admitted it had “made up” results because it could not access the required file.

 

This raises critical questions about trust. If outputs appear plausible but are not verifiable, the burden shifts back to the human, often negating the efficiency gains that AI promises.

 

Moreover, the design of these systems to please rather than challenge introduces new risks. As one attendee noted, AI may prioritise delivering an answer over delivering the correct answer. In high‑stakes environments, that distinction is not trivial.

 

Data: the problem behind the problem

 

While AI is often framed as the disruptive force, many participants argued that it is simply exposing pre‑existing weaknesses, particularly in data management.

 

Poor classification, excessive access permissions, and fragmented ownership have long been recognised issues.

 

AI does not create these problems, but it amplifies them. When systems can surface sensitive information instantly, the consequences of weak data governance become immediate and visible.

 

In this sense, AI is less a new risk category and more a catalyst, bringing latent vulnerabilities to the surface at speed.

 

A new attack surface

 

From a security perspective, the implications are equally significant. AI does not just consume data; it interacts with systems, executes tasks, and increasingly operates with elevated privileges. This creates a fundamentally different threat landscape.

 

Traditional attacks require footholds, malware, and lateral movement. With AI‑enabled systems, attackers may only need a prompt. By exploiting vulnerabilities such as prompt injection, they can manipulate models into performing unintended actions, potentially with legitimate access rights.

 

The concept of the “insider threat” is also evolving. AI agents, if poorly governed, effectively become autonomous insiders, capable of accessing, processing, and distributing information without malicious intent, but with potentially harmful consequences.

 

Governance that can’t keep up

 

Against this backdrop, organisations are struggling to operationalise governance. Existing frameworks, built for deterministic systems, are ill‑suited to technologies that are probabilistic, adaptive, and opaque.

 

Risk assessment becomes more complex when outcomes cannot be fully predicted. Compliance frameworks lag behind innovation. Global inconsistency, spanning the EU, the US, and China, adds further complexity for multinational organisations.

 

Some are experimenting with risk‑based approaches, adapting governance to the sensitivity of use cases. Others are embedding AI into controlled workflows, limiting free‑form interaction. But there is no consensus model, only a shared recognition that traditional controls are insufficient.

 

The cost of control

 

Another emerging tension is financial. Securing AI systems is not cost‑neutral. Tools, monitoring, and governance layers introduce new overheads, often falling into contested budget territory.

 

Business units driving AI adoption may resist absorbing these costs, viewing them as security responsibilities. Security teams, in turn, cannot scale indefinitely to accommodate decentralised experimentation.

 

The result is a governance gap that is not just technical but organisational.

 

The human factor

 

Beyond technology and process, the human dimension looms large. Adoption is uneven, with some employees embracing AI enthusiastically while others resist, often out of concern for job security. The promise that AI will “free up time” is increasingly questioned. In practice, many expect it to accelerate workloads rather than reduce them. If productivity gains translate into higher expectations rather than better work‑life balance, resistance is likely to grow.

 

At the same time, the accessibility of AI tools is reshaping behaviour. Employees are experimenting, testing boundaries, and, in some cases, bypassing restrictions altogether. Governance models that rely on strict control may struggle in this environment.

 

From experimentation to intent

 

Despite the challenges, the discussion was not devoid of optimism. Across industries, organisations are identifying meaningful use cases, from summarising complex datasets and enhancing security operations to enabling innovation in product development.

 

The common thread among successful implementations is clarity of purpose. Where AI is applied to well‑defined problems, within structured workflows, and with appropriate oversight, it can deliver tangible value.

 

Where it is deployed broadly, without clear objectives or controls, it risks becoming an expensive experiment.

 

A question of balance

 

Ultimately, the conversation returned to a familiar tension: innovation versus control.

 

Too much restriction, and organisations risk falling behind. Too little, and they expose themselves to operational, reputational, and regulatory risk.

 

The path forward lies not in choosing one over the other, but in finding a balance, one that evolves alongside the technology itself.

 

As one participant put it, “We’re not going to solve this unless we share.” In an environment defined by uncertainty, collaboration across industries, disciplines, and organisations may be the most effective control of all.


To learn more, please visit: www.checkpoint.com



© 2025, Lyonsdown Limited. teiss® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543