Danielle Kinsella at Gigamon outlines the steps to a successful and secure AI deployment
Britain stands as the world’s third-largest AI market, with the UK government forging ahead with its ‘AI Opportunities Action Plan’ – a roadmap aimed at reinforcing the country’s leadership in innovation and transforming the UK into an ‘AI maker’ rather than an ‘AI taker’. Since announcing the plan at the start of the year, the government has already attracted £44bn worth of investment into the UK’s AI sector.
Business leaders are clearly on board with this strategic direction, seeking every opportunity to leverage AI to automate mundane tasks, boost operational efficiency and enhance decision-making. Recent research from PwC reveals that 93 percent of UK CEOs have already integrated AI into their operations – ahead of the 83 percent recorded among their global counterparts. This marks a striking shift in attitudes toward AI technology.
However, with opportunity comes risk. Recent data from Gigamon’s 2025 Hybrid Cloud Security Survey reveals that breach rates have surged to 55 percent, and 47 percent of security and IT leaders report a rise in attacks specifically targeting their large language models (LLMs). This exposes a growing readiness gap between AI adoption and the infrastructure needed to manage and secure it.
The urgency to address escalating security risks tied to AI adoption has been reinforced by the UK Secretary of State for Science, Innovation and Technology, Peter Kyle. Speaking at the Munich Security Conference earlier in 2025, Kyle announced the transformation of the ‘AI Safety Institute’ into the ‘AI Security Institute’, shifting its focus towards crime prevention and national security.
This strategic pivot underscores the need for organisations to broaden their security measures, not just safeguarding data, but fortifying the entire AI ecosystem, including infrastructure and operational processes.
Kyle also highlighted the increasing threat posed by AI-driven cyberattacks, deepfake fraud, and state-sponsored hacking, reinforcing the UK’s commitment to mitigating these risks through enhanced AI governance.
So how do business leaders navigate this seemingly impossible balancing act – a perfect storm in which they are urged to adopt AI, yet in doing so could expose themselves, their customers and their shareholders to substantial risk?
Rather than slowing down or banning AI, organisations need to build a stronger foundation of visibility; clean, high-quality data; and control – all enabled through deep observability. By combining this foundation with a clear, concise and accessible AI security policy rolled out at every level, organisations can position themselves to capitalise fully on the benefits of AI while minimising risk.
The following are three critical steps organisations can take to supercharge their AI adoption while remaining secure:
First, assess and protect the AI infrastructure. AI adoption brings unique security challenges because much of the associated risk is still uncharted. While it is impossible to eliminate all risk, organisations can and should take proactive steps to protect their AI infrastructure.
Too often, the focus is on securing AI models, while the supporting infrastructure is overlooked, creating vulnerabilities that inadvertently open the door to exploitation. In fact, 46 percent of security leaders report a lack of clean, high-quality data to support secure AI workload deployment, undermining the very foundation these models rely on.
To navigate these risks effectively, organisations must first assess their AI risk appetite before deploying solutions. The level of risk an organisation is willing to accept will shape its AI adoption strategy, influencing security standards, infrastructure decisions and model selection. A clear AI risk framework is essential: it should define the security measures required for the industry in question, decide whether to build, buy or locally host AI infrastructure, and evaluate potential threats from both insiders and external actors.
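As a minimal sketch of how such a framework can be made concrete, the Python below encodes a hypothetical risk profile and derives a build/buy/host recommendation from it. The field names, scales and decision rules are illustrative assumptions, not a prescribed standard.

```python
# A minimal, illustrative AI risk framework expressed as code.
# All scales, thresholds and decision rules here are assumptions
# for the sake of example, not a prescribed standard.
from dataclasses import dataclass
from enum import Enum


class Hosting(Enum):
    BUILD = "build in-house"
    BUY = "buy a managed service"
    LOCAL = "host locally"


@dataclass
class AIRiskProfile:
    risk_appetite: int          # 1 (risk-averse) to 5 (risk-tolerant)
    data_sensitivity: int       # 1 (public) to 5 (regulated/PII)
    insider_threats_assessed: bool
    external_threats_assessed: bool

    def recommended_hosting(self) -> Hosting:
        # Crude rule of thumb: the more sensitive the data and the lower
        # the risk appetite, the more control should stay in-house.
        if self.data_sensitivity >= 4 or self.risk_appetite <= 2:
            return Hosting.LOCAL
        if self.risk_appetite >= 4:
            return Hosting.BUY
        return Hosting.BUILD

    def ready_to_deploy(self) -> bool:
        # Gate deployment on both threat assessments being complete.
        return self.insider_threats_assessed and self.external_threats_assessed


profile = AIRiskProfile(risk_appetite=2, data_sensitivity=5,
                        insider_threats_assessed=True,
                        external_threats_assessed=False)
print(profile.recommended_hosting())  # Hosting.LOCAL
print(profile.ready_to_deploy())      # False: finish assessments first
```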
Second, establish deep observability across the hybrid cloud. The ability to fully monitor, manage and secure all data in motion across increasingly complex IT infrastructures remains a challenge for UK organisations adopting AI – and organisations cannot manage what they cannot see.
In fact, 47 percent of organisations admit they lack comprehensive visibility across their hybrid cloud environments, particularly into lateral (east-west) traffic. Tracking AI models and LLM usage is therefore essential for identifying unusual behaviours, such as unauthorised external communications.
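As an illustration of what that tracking can look like, the sketch below compares observed network flows against an allowlist of approved AI endpoints and flags anything else leaving the AI workload subnet. The flow-record fields, subnet prefix and endpoint names are hypothetical.

```python
# Illustrative check: flag outbound connections from AI workloads to
# endpoints that are not on an approved allowlist. The record fields,
# subnet prefix and endpoint names are assumptions, not real infrastructure.
from dataclasses import dataclass

APPROVED_AI_ENDPOINTS = {"api.internal-llm.example", "api.approved-vendor.example"}
AI_WORKLOAD_PREFIX = "10.20."  # hypothetical subnet hosting AI workloads


@dataclass
class FlowRecord:
    src_ip: str
    dst_host: str
    direction: str  # "north-south" or "east-west"


def suspicious_flows(flows):
    """Yield flows where an AI workload talks to an unapproved endpoint."""
    for flow in flows:
        if (flow.src_ip.startswith(AI_WORKLOAD_PREFIX)
                and flow.dst_host not in APPROVED_AI_ENDPOINTS):
            yield flow


flows = [
    FlowRecord("10.20.1.5", "api.internal-llm.example", "east-west"),
    FlowRecord("10.20.1.5", "paste.example.net", "north-south"),  # flagged
]
for flow in suspicious_flows(flows):
    print(f"ALERT: {flow.src_ip} -> {flow.dst_host} ({flow.direction})")
```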
To ensure the security of AI systems, businesses should strive to establish comprehensive visibility across all network layers exposed to AI models. Because LLM stacks are built on vast amounts of open-source software, they also carry supply chain risk with every update: as new versions are rolled out with alarming regularity, the opportunity for malicious actors to exploit those updates grows in turn.
Without extensive monitoring capabilities and a proactive security stance that assumes compromise, organisations risk being overwhelmed as these threats proliferate.
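One basic control against tampered updates, sketched below, is to pin a checksum for each model artifact and reject any update that does not match. The file path is a placeholder, and the pinned digest shown is simply the SHA-256 of empty input.

```python
# Illustrative supply chain gate: verify a downloaded model artifact
# against a pinned checksum before rolling out an update.
# The pinned digest below is a placeholder (the SHA-256 of empty input).
import hashlib

PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"


def artifact_is_trusted(path: str) -> bool:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == PINNED_SHA256


# Example gate in a deployment pipeline (path is a placeholder):
# if not artifact_is_trusted("models/llm-weights.bin"):
#     raise SystemExit("Model update rejected: checksum mismatch")
```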
The dynamic nature of data movement further complicates security efforts. If AI systems lack real-time insights, vulnerabilities can surface, enabling attackers to infiltrate and manipulate data undetected. A compromised data channel could lead to the alteration or exposure of sensitive information, damaging reputation and potentially prompting legal and financial issues.
Remote AI services should also be supervised to track access and usage, with the ‘zero trust’ principle applied: assume any AI interaction could pose a risk, and verify every data exchange. Although there are various reasons humans may fail to report AI use or misuse, network traffic data remains the fundamental source of truth.
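A minimal sketch of that per-exchange verification follows: every call to a remote AI service passes an identity check and a payload check, and is logged for audit. The token set and blocked-content markers stand in for a real identity provider and data loss prevention tooling.

```python
# Illustrative "zero trust" gate in front of a remote AI service: no
# exchange is trusted by default; each one is verified and logged.
# The token set and content markers are simplified stand-ins.
import logging

logging.basicConfig(level=logging.INFO)
VALID_TOKENS = {"token-alice", "token-bob"}       # stand-in for a real IdP
BLOCKED_MARKERS = ("CONFIDENTIAL", "PASSWORD")    # crude content check


def verified_ai_call(token: str, prompt: str, send):
    """Verify identity and payload on every exchange before forwarding."""
    if token not in VALID_TOKENS:
        logging.warning("Rejected AI call: identity not verified")
        raise PermissionError("identity not verified")
    if any(marker in prompt.upper() for marker in BLOCKED_MARKERS):
        logging.warning("Rejected AI call: sensitive content in prompt")
        raise ValueError("prompt failed data-sharing policy")
    logging.info("AI call permitted and logged for audit")
    return send(prompt)


# Usage with a dummy transport standing in for the remote service:
reply = verified_ai_call("token-alice", "Summarise Q3 trends",
                         lambda prompt: "ok")
```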
This aligns with the 64 percent of security and IT leaders globally who report that real-time visibility into all data in motion is now their number one security priority. Deep observability – network-derived intelligence and insights efficiently delivered to cloud, security and observability tools – extends the view offered by application and infrastructure telemetry and log data across environments.
By supplementing existing data in this way, organisations can achieve complete visibility into all network traffic – encrypted or unencrypted, flowing through private cloud, public cloud or on-premises environments, and in any direction (north-south or east-west). This capability allows organisations to fully monitor their chosen LLM and the data fed to it, even as updates arrive at will, and it is crucial for any modern organisation aiming to identify and mitigate the risks tied to AI adoption.
Third, set clear AI policies rather than bans. Banning AI in the enterprise is simply not practical; comprehensive policies around AI use will ensure a smoother transition when implementing new technologies. Whether organisations recognise it or not, AI tools and models are likely already in use across operations. Rather than enforcing prohibition, businesses should focus on crafting well-rounded policies that regulate AI use while fostering secure innovation.
Acting as the first line of defence, these policies should go beyond simple ‘allow or ban’ frameworks and provide specific guidelines for safe and responsible AI use. They should outline which AI models employees can use, specify acceptable data-sharing practices, introduce security measures to prevent misuse, and include continuous training programmes to promote ethical AI usage.
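One way to make such a policy enforceable, rather than leaving it in a document alone, is to express it as data that tooling can check automatically. The sketch below does this with hypothetical model names and data categories.

```python
# Illustrative AI usage policy expressed as data so it can be checked
# automatically. Model names and data categories are hypothetical.
AI_POLICY = {
    "approved_models": {"internal-llm-v2", "approved-vendor-model"},
    "data_sharing": {
        "public": "allowed",
        "internal": "allowed with line-manager approval",
        "customer_pii": "prohibited",
    },
}


def check_request(model: str, data_category: str) -> str:
    """Evaluate a proposed AI interaction against the policy."""
    if model not in AI_POLICY["approved_models"]:
        return "blocked: model not on the approved list"
    # Unknown data categories default to the most restrictive rule.
    rule = AI_POLICY["data_sharing"].get(data_category, "prohibited")
    return f"{rule} for {data_category} data"


print(check_request("internal-llm-v2", "customer_pii"))  # prohibited ...
print(check_request("shadow-llm", "public"))             # blocked ...
```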
While effective network controls are necessary to protect AI systems, employee education is essential for spotting anomalies and preventing the risks that arise from human error. This begins with fostering a culture that keeps a “security-first” mindset at the forefront of innovation and change.
Enterprises that encourage continuous learning and adaptability grounded in security are more likely to thrive in the AI space and maintain an enduring security posture.
As AI continues to reshape business and IT environments, the majority of CISOs (86 percent) now view cyber risk as directly tied to the organisation’s health, placing security strategy on par with financial governance.
By embedding security considerations into AI adoption from the outset, businesses can turn the pressure to innovate into a strong, resilient advantage built on a secure foundation.
Danielle Kinsella is Technical Advisor EMEA at Gigamon