Building a trusted security model for Generative and Agentic AI
Thom Langford, Host, teissTalk
Tim Roberts, Managing Director, AlixPartners
Satyam Rastogi, Director of Information Security & DevOps, BAMKO
Deryck Mitchelson, Head of Global CISO Team & C-Suite Advisor, Check Point
The rush to integrate Generative and Agentic AI models has led many enterprises to prioritise speed over trust, resulting in costly, real-world mistakes that demonstrate the catastrophic risk of neglecting security in AI deployments.
AI is poised to transform your organisation, but only if it’s built on a foundation of integrity. How can you ensure that security is an architectural design choice, not a retrofit?
In our next episode of teissTalk with Thom Langford, we’ll explore:
- Transferable lessons - how overlooking fundamental security and data trust leads to Generative and Agentic AI failures
- Steps for embedding security checkpoints and governance directly into your AI pipeline
- Strategies to scale AI safely - avoiding costly retrofits - and positioning security as a key competitive advantage
Join us to move beyond reacting to security incidents and learn how to proactively embed governance and trust into your AI strategy.
When you register on our site you are automatically registered for all teissTalk episodes