
The Expert View: Why AI risk is a people problem as much as a technology one

Sponsored by CultureAI

Executives are under pressure to roll out AI, but without proper governance, secure systems, and cultural safeguards, the rush to deploy risks doing more harm than good.

 

IT teams can deploy tools to safeguard networks and lock down laptops, but human-centred attacks and behavioural risks are a trickier problem. With the rise of AI tools, said James Moore, CEO of behavioural security company CultureAI, the human layer of security is becoming harder than ever to manage.

 

At a TEISS Breakfast Briefing at the Goring Hotel in London, a group of senior executives from a range of sectors discussed the challenge of understanding, managing and influencing behaviour in ways that reduce exposure to evolving cyber threats, in a world where AI is becoming embedded in everyday work.

 

Playing catch-up


Attendees described different levels of maturity, with some warning that the speed of adoption is outpacing corporate safeguards. One firm, for example, discovered confidential HR documents had been uploaded to AI platforms. “We started without proper access management or data controls,” an executive admitted. “We had to move fast to block services and put governance in place.”

 

Even in firms with more advanced policies, there was a learning curve. “We had to teach employees what they could and couldn’t share with AI,” said another attendee. “They didn’t know.” Internal tools and data lakes with tighter protections have helped, but challenges continue to evolve.

 

As one participant noted, the shift to autonomous agents has raised fresh questions: “It’s not a human, but it’s not a static machine anymore either.”

 

The tension between executive enthusiasm and operational risk came through clearly. “Our exec committee is passionate about using AI to drive efficiency,” said one attendee. “That puts pressure on us to push projects through quickly. We’ve seen increased data risk, but not yet the productivity benefits.”

 

Model transparency

 
The pressure is especially acute in the public sector. One participant described a “drive to make things cheaper, faster and more efficient,” adding that AI is seen as the solution, even when data quality and security aren’t properly considered.

Many vendors are complicating the problem. “There’s an enormous rush to bolt AI onto SaaS products,” Mr Moore warned. “But many haven’t considered security.” Firms find themselves in a constant battle to block emerging services before data leaks out.

 

And the risks go far beyond leaks. In one example, a bank discovered its fraud model disproportionately rejected applications from people with South Asian names. The culprit was training data that included historical fraud linked to the Tamil Tigers. “The model learned a spurious correlation between vowels in a name and the likelihood of fraud,” the executive said. “Without transparency, these things are hard to catch.”

 

Training troubles


The group also reflected on the practical actions CISOs could take to mitigate human-centred risks. Training alone, it was agreed, is not enough to change behaviour. “It can’t be one and done,” said one CISO. “It has to be constant. And it has to be paired with other controls.”

 

AI conversations feel so convincing that users may forget their training altogether. “We know about hallucinations and false confidence,” said one executive, “but people forget in the moment. They see an answer and stop thinking critically.”

Mr Moore agreed. “There’s evidence that training can make things worse, by giving people a false sense of security,” he said. CultureAI takes a different approach: introducing “friction” into risky workflows, with a message telling users they can’t proceed or asking them to confirm their intentions.

 

Some have found creative ways to change behaviour. A law firm’s senior partners were the main source of risk, until it launched a cyber safety campaign framed as advice they could share with their parents. “It worked,” said the attendee. “They took it seriously and changed their own habits in the process.”

 

Changing mindsets


Several attendees highlighted the challenges of managing a generational shift in attitudes. “A new generation is growing up with AI,” one executive said. “And they’re not worried about data breaches because they assume they’ll happen. That mindset is coming into the workplace.”

 

Others pointed to the rise of insider threats. “LLMs can be persuaded to reveal confidential data, so someone will do it,” said one participant. “Maybe not maliciously, but it’s still a risk.”

 

With geopolitical tensions rising, the stakes are even higher. “We’re seeing increasing concerns about nation state infiltration,” another attendee warned. “Maybe it’s time to vet staff more thoroughly. Desperate people are more likely to be tempted to sell data.”

 

In regulated industries, companies are being asked to manage risks that even governments struggle to tackle. “We’re expected to know what third-party tools our vendors use,” said a financial services executive. “But sometimes we don’t even know who those vendors are, or what data they’re accessing.”

 

In the end, attendees agreed that while perfect security may be out of reach, it’s vital to build visibility into the human layer, understand the signals of risky behaviour, reinforce good habits and, above all, be prepared for failure. Whether the threat comes from a rushed AI deployment, an insider mistake, or an external attacker exploiting human error, people are the first and final line of defence.


To learn more, please visit: www.cultureai.com



© 2025, Lyonsdown Limited. teiss® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543