
The Expert View: Securing the AI-Driven World

Sponsored by Check Point

The rapid adoption of artificial intelligence (AI), including large language models (LLMs), is creating new opportunities but also new uncertainties, said Charlotte Wilson, Head of Enterprise Business UKI at Check Point, introducing a TEISS dinner at the Conrad London St James Hotel. Cybersecurity is a challenge, she said, as are ethics, bias and organisational readiness. Check Point’s recent acquisition of Lakera reflects its ambition to lead in the era of AI security.


Samuel Watts, Senior Product Manager at Lakera, told the senior industry leaders present that LLMs expand the security surface “not exponentially, but infinitely”. In his view, as organisations move towards an emerging “internet of agents”, the complexity will increase.


Securing AI will require not only protecting individual models but managing networks of autonomous systems working together.


Many pilots, few take-offs


Across sectors, participants described a tension between organisations “truly believing AI is the future and needing to go large” and the reality that, so far, benefits rarely go beyond incremental gains.


Much of the value is in back-office functions: document summarisation or preparation, coding assistance, and basic analytics. Demand has surged as teams realise what might be possible, but expectations often outpace results. Regulatory pressure also limits experimentation, particularly in financial services, where firms must demonstrate to regulators how models behave.


However, attendees said many use cases are already being explored: hundreds, in the case of one executive. In operational technology (OT), for example, AI is being deployed for visual analytics and safety monitoring. As one attendee noted, organisations are learning to match the right model to the right task, choosing between foundational, optimised or fine-tuned LLMs depending on the level of precision required.


New technology, new risks


Meanwhile, security professionals face the problem of securing a technology whose capabilities and failure modes are not yet fully understood, even by its creators. One participant said that, as an attacker entering a network, “the first thing I would do is talk to your AI”. Models can create insider risk, just as employees can.


Data protection was a recurrent theme. Organisations want the benefits of AI, but many prefer internal models to maintain control over sensitive information. Questions such as where the data is stored and who can access it are especially pressing when outsourcing AI.


Another risk comes from LLMs in normal use: they can ‘hallucinate’, inventing false information that employees might not spot. Training and oversight were considered crucial, both to warn staff about the danger of hallucination and to stop them losing faith in the technology when these unavoidable errors happen. “Everything LLMs do is a hallucination,” one attendee said, though he considered certain use cases generally safe, including translation, summarisation and coding support; elsewhere, caution is needed.


There are wider risks too. As organisations automate entry-level tasks, how will junior employees gain the necessary experience to progress? And without that, does the workforce lose talent over time? Meanwhile, at an environmental level, the carbon footprint of training and running models risks becoming a reputational issue.


Mitigation strategies


Despite the complexity, attendees outlined a pragmatic set of mitigation strategies. First, human-in-the-loop governance was considered essential. Accountability for AI work remains with the human operator. Participants emphasised the need for quality checks and non-AI validation layers to balance automation with oversight. The goal is not to eliminate risk, but to make it visible and manageable.
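
To make the idea concrete, here is a minimal sketch in Python of what such a non-AI validation layer with human sign-off could look like. Every name and check below is an illustrative assumption, not a description of any attendee’s system: output must pass cheap deterministic rules, and accountability stays with a named human reviewer.

from dataclasses import dataclass

@dataclass
class ReviewRecord:
    output: str
    checks_passed: bool
    approved_by: str | None = None  # accountability stays with a named human

def run_checks(output: str) -> bool:
    """Non-AI validation layer: cheap, deterministic rules."""
    if not output.strip():
        return False
    if len(output) > 10_000:  # size sanity check
        return False
    banned = ["ssn:", "password:"]  # crude screen for leaked credentials
    return not any(token in output.lower() for token in banned)

def gate(output: str, reviewer: str) -> ReviewRecord:
    """Gate model output behind checks plus explicit human sign-off."""
    record = ReviewRecord(output=output, checks_passed=run_checks(output))
    if record.checks_passed:
        # In a real workflow this would queue for review, not auto-approve.
        record.approved_by = reviewer
    return record

record = gate("Quarterly summary: revenue up 4%.", reviewer="j.smith")
print(record.checks_passed, record.approved_by)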


Data governance is non-negotiable. Adding information to an LLM is like adding a drop of ink to a glass of water, said one attendee. It’s easy to get it in but hard to get it out again. The priority is therefore to point models at carefully governed datasets. RAG (retrieval-augmented generation) is emerging as a way of grounding models in trusted data and reducing hallucination risk.
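
As an illustration of the RAG pattern, the sketch below grounds a prompt in a small governed document set. The word-overlap scoring is a stand-in assumption; a production system would use an embedding model and a vector store, but the shape of the flow is the same.

GOVERNED_DOCS = [
    "Expense claims over 500 GBP require director approval.",
    "Remote workers must use the corporate VPN for internal systems.",
    "Data classified as confidential may not leave the EU region.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance: count words the query and document share."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k best-scoring documents from the governed set."""
    return sorted(GOVERNED_DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the model in retrieved context to reduce hallucination risk."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using ONLY the context below. If the answer is not in "
        f"the context, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("Who must approve large expense claims?"))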


Building models internally can simplify compliance by keeping data inside the organisation’s own perimeter. But this must be matched with third-party governance, since even internal systems rely on external libraries and tools. The question security teams must ask is not “what can go wrong in theory?” but “what is the worst thing that could happen to our data, and how do we prevent it?”


Finally, participants emphasised the importance of organisational infrastructure: prompt libraries searchable by function, task or department; a blend of technical and cultural security measures; and governance frameworks that consider ethics explicitly rather than bolting them on after deployment.
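
A prompt library of the kind described could start very simply. The sketch below, with invented field names and entries, tags each template by function, task and department and filters on whichever attributes are supplied.

from dataclasses import dataclass

@dataclass
class PromptTemplate:
    name: str
    function: str    # e.g. "summarisation", "coding"
    task: str        # e.g. "board-pack summary"
    department: str  # e.g. "finance"
    text: str

LIBRARY = [
    PromptTemplate("board-summary", "summarisation", "board-pack summary",
                   "finance", "Summarise the attached pack in five bullets..."),
    PromptTemplate("code-review", "coding", "pull-request review",
                   "engineering", "Review this diff for security issues..."),
]

def search(function=None, task=None, department=None):
    """Return templates matching every filter that was supplied."""
    return [
        t for t in LIBRARY
        if (function is None or t.function == function)
        and (task is None or t.task == task)
        and (department is None or t.department == department)
    ]

print([t.name for t in search(department="finance")])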


The agentic future


The discussion concluded with a nod to the agentic AI future. These tools, which can autonomously pursue goals with minimal human oversight, are like “digital toddlers”, one attendee said, so they must be managed carefully. To prepare, organisations should create workflows for agents, just as they do for people, so that errors can be traced and corrected; a sketch of what that might look like follows below. Most agents are still “just microservices”, but their autonomy will grow, demanding new forms of assurance.
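
By way of illustration, the sketch below (agent and function names are hypothetical) records every agent action with its inputs, outputs and any error, so a failure can be traced to a specific step, much as change control works for human operators.

import json
import time

AUDIT_LOG: list[dict] = []

def logged_step(agent: str, action: str, fn, *args):
    """Run one agent action, recording inputs and outcome either way."""
    entry = {"timestamp": time.time(), "agent": agent, "action": action,
             "inputs": [repr(a) for a in args]}
    try:
        result = fn(*args)
        entry["result"] = repr(result)
        return result
    except Exception as exc:  # record the failure, then re-raise it
        entry["error"] = repr(exc)
        raise
    finally:
        AUDIT_LOG.append(entry)  # the trail survives success and failure

total = logged_step("invoice-bot", "sum_amounts", sum, [120.0, 80.5])
print(total)
print(json.dumps(AUDIT_LOG, indent=2))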


Bringing the discussion to a close, Wilson said she welcomed the framing of the evening, particularly the recognition that ethics plays a crucial role in how AI should be deployed. Different sectors face different constraints, she noted, especially those with OT environments. Organisations will therefore need the right partners, and a willingness to learn, to navigate this rapidly evolving landscape.


AI’s potential is vast, attendees agreed, but realising it safely will require discipline, transparency and an approach to security and risk that evolves as fast as the technology itself.


To learn more, please visit: www.checkpoint.com
