
For Kevin Schwarz, Field CTO at Zscaler, a crisis of trust is the central challenge in cybersecurity. “We’re questioning our trust in the solutions we’ve relied on,” he told guests at a recent TEISS dinner at the House of Lords, sponsored by Zscaler and BT. Segmentation alone is proving insufficient as attack volumes rise, and trust in cloud services is being undermined by geopolitical concerns.
“We’re even questioning what’s real,” he added, noting the rise of AI-generated content and the risk of convincing fake personas. “How do you know the person on the other side of that Zoom chat is real?”
Schwarz said Zscaler is not yet seeing widespread attacks that use AI, but ransomware has quadrupled, which he suspects is because criminals are using AI to produce more of it.
BT’s Lee Stephens, Principal in Security Advisory Services, continued the theme of trust but with a focus on people. “Kevin’s message really hits home,” he said. “Trust used to be one-to-one. Then it became centralised. Now it’s distributed, and we’re still learning how to manage that.”
In a world where breaches are a matter of when, not if, Stephens urged attendees to prioritise openness and shared learning. “How can we be more transparent when things go wrong, so others can learn from it?”
AI confusion is rife
The widespread availability of generative AI tools has created an illusion of capability, several attendees noted. “People assume AI can do more than it can,” said one participant, adding that the public tend to equate all AI with ChatGPT.
Inside organisations, the risks multiply. Businesses place increasing trust in the data that feeds AI models while asking users to act responsibly, without always offering the education that requires. “You have to train your people to verify output,” one guest said. That challenge is compounded by the rise of shadow AI, with employees turning to tools on personal devices when workplace restrictions are too tight.
There’s a difference, though, between general-purpose AI used in public and narrow enterprise models developed for internal use. Several participants described using domain-specific models for tasks such as CV matching or internal search. Others cautioned that smaller models offer less capability, so a balance must be struck. “We need the flexibility to use the right model for the task,” said one participant.
Embedding AI into risk
Even as AI is deployed across the enterprise, most agreed that human oversight remains essential. “A human in the loop is still a critical control,” one said. An academic attendee said AI use in education relies on a ‘PAIR’ framework: Problem, AI, Interact, Reflect.
Still, AI is altering risk landscapes. “It’s easy to extend tools beyond their original remit,” said one security leader. “A chatbot built for customer service can suddenly access sensitive data, because nobody rethought the design.” That kind of creep, they warned, can lead to exposure if security and data protection aren’t revisited in parallel.
Others flagged deeper architectural concerns. Memory safety vulnerabilities remain endemic, they said, because “the underlying architecture of computing hasn’t changed in decades.” That means a single phishing click can compromise an entire business. Fixing it, they argued, will require organisations to demand more security from component manufacturers.
Risk prioritisation
Several participants agreed that resilience planning starts not with technology, but with identifying what matters most. “The business has to tell us what would bring the company down,” one said. “Then we know what to protect.”
One retailer said they explore every scenario: what if every shop had to operate offline? How long could stores run without the ordering system? That kind of thinking, attendees agreed, should shape investment decisions. “What’s a reasonable amount to spend to stay functional in a crisis?” asked one attendee.
Those calculations are imperfect, however. Non-financial losses, such as reputational damage, are hard to estimate. And when companies can’t sell, they stop buying materials too, so costs change alongside revenues.
That complexity means post-breach mindsets often shift dramatically. “We’ve seen clients become far more risk-averse once they’ve been hit,” said Stephens.
Governance and sovereignty
As the conversation moved to vendor lock-in, some attendees argued that data sovereignty and resilience cannot be guaranteed if businesses depend on overseas providers. European firms are questioning their use of US providers for this reason, some said, citing the Swedish government’s move away from Microsoft as an example.
Building strong relationships with cloud vendors is critical. “They won’t just do as they’re told,” said one guest. “You have to work with them.”
Clear internal policies and frequent training are vital too. “You need a data classification framework that’s understood by employees,” one participant stressed. “Otherwise, people won’t know what can be stored or where.”
Inform, Incentivise, Instruct
As the evening ended, Stephens offered a framework for managing human behaviour in a high-risk digital world. “Inform, incentivise, instruct,” he said. “Make sure people understand what’s expected. Give them a reason to do the right thing. And when necessary, tell them.”
For Schwarz, the discussion returned full circle to trust and lock-in. “Sovereignty means being able to leave the table,” he said. “But how can I do that, when the big vendors need a return on investment? Lock-in is something we all need to keep in mind.”
In the age of AI and cyber warfare, resilience depends not only on better tools and stronger defences, but on clarity about what matters most and shared responsibility for keeping it safe.
To learn more, please visit: www.bt.com & www.zscaler.com
© 2025, Lyonsdown Limited. teiss® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543