
Effective cyber-skills development

Dan Potter at Immersive explains why skills development is often based on the wrong material


From spotting increasingly sophisticated AI-enabled phishing and deep fakes to keeping a cool head in the midst of a cyber-attack, experience and skill often contribute more to security than the tech stack.

 

More organisations are recognising the importance of experience and training in dealing with cyber-threats, and we’ve seen an uptick in both skills development exercises and engagement with the board.

 

As a result, confidence is high. In fact, we found that almost all (94%) organisations believe they would be effective in a major cyber-crisis.

 

However, when you start delving into the performance data of these exercises, you see a different story. Many security development programmes that appear mature on paper aren’t actually delivering improved resilience.

 

Decision-making performance in the exercises we analysed was far lower under realistic pressure. In a crisis sim, the average decision-making accuracy was just 22%, and incidents took an average of 29 hours to contain.

 

When it comes to building real resilience, we can’t afford to mistake activity for advancement.

 

Skills development starts strong but plateaus early

At first glance, the investment in cyber-training looks substantial, and organisations have plenty to be pleased with. They have busy development plans in place with plenty of exercises being scheduled, and dashboards showing positive metrics like healthy participation rates.

 

But while participation and activity are always good, they aren't necessarily moving the needle on readiness for a cyber-incident. Looking more closely at what teams are practising, it becomes apparent that many are failing to progress past the earlier stages of development.

 

Our benchmark data found that 36% of completed exercises focused on fundamental skills, with training labs set at the beginner or entry level.

 

Mastering the basics is certainly important, and these foundational skills will help with many of the lower-level attacks that flood organisations every day. But when organisations remain concentrated at that level, their journey towards cyber-maturity stalls.

 

Similarly, we found that most training activity still centres on vulnerabilities that are more than two years old. Again, these skills are still useful, but with attacks continuing to evolve, they only go so far.

 

Taken together, these tendencies can inadvertently create a false sense of confidence. This is especially dangerous when we remember that threat actors are always innovating and pushing forward with new tools and tactics.

 

Organisations stuck focusing on foundational knowledge and old threat patterns will find a widening gap between their capabilities and the threats they face.

 

The cyber-Dunning-Kruger effect

A common challenge in skills development is overcoming the Dunning-Kruger effect. This is a cognitive bias you see in many fields of work, in which people with limited exposure to a complex domain overestimate their competence.

 

False confidence in misaligned cyber-skills development can result in Dunning-Kruger on a company-wide scale. When metrics emphasise completion rates and policy adherence, confidence naturally rises, even if advanced capability is not being stretched.

 

For example, we’re seeing heavy focus on the early stages of an attack, such as initial access or defence evasion, but less attention on lateral movement, data collection and exfiltration, where real damage occurs. This means security teams can perform well on their exercises and feel confident, yet still be unprepared when a modern attack actually unfolds.

 

This is compounded by trends we’re seeing higher up the chain. Senior professionals often carry deep institutional knowledge, but that familiarity can reduce experimentation.

 

Participation by senior staff in AI-focused scenario labs dropped 14% year on year, despite 77% of organisations saying they are highly concerned about AI-enabled threats and 80% expecting AI use to increase. Concern is rising, but advanced practice is not keeping pace.

 

Moving beyond feel-good metrics

Getting past the development plateau needs a structural shift. Training programmes and cyber-simulation exercises need to be framed as progressive upskilling programmes designed to increase difficulty over time. Treating them as annual validations will produce some nice reports with a lot of high numbers and ticks but won’t help move the needle on cyber-maturity or resilience.

 

So, while foundational knowledge remains important, it must be reliably followed by intermediate and adversary-led scenarios, particularly those angled around an assumed breach rather than prevention.

 

Further, introducing regular cadence, such as monthly micro-drills and quarterly full-scale simulations, helps build more behavioural fluency and muscle memory, rather than one-off familiarity. Difficulty should escalate deliberately, moving from single-vector incidents to complex, cross-functional crises involving AI-enabled threats or supply-chain compromises.

 

Performance should also be tracked by experience band. By monitoring how decision accuracy evolves for junior, mid-level and senior staff, organisations can determine whether progression is happening evenly or stalling at the top. It’s also crucial that exercises are fully completed, not merely attempted.

 

The participation stats from partial engagement will be misleadingly encouraging and will not build the cognitive resilience required when it’s crunch time in a real crisis.

 

Time to start putting the pressure on

By prioritising activity-based indicators, such as security awareness completion or exercise attendance, many organisations have accidentally put their development programmes on easy mode.

 

This isn’t a problem in most professions, but cybersecurity is all about dealing with threats. Security teams might need to leap into action against a serious attack at 2 am. Senior decision makers may have to make snap decisions that have millions of pounds riding on them. Playing it safe won’t prepare anyone for these crisis scenarios.

 

It’s essential, therefore, to focus on performance metrics that reflect real capability. Decision accuracy under pressure, decision speed, mean time to detect and contain, and the quality of cross-functional coordination are all factors that will make or break a cyber-crisis response.

 

Remember that high-pressure simulations are not designed to reassure, but to calibrate. Coming out of a crisis scenario with a 22% decision accuracy score won’t feel good, but it provides a hard reality check that completion rates never will.

 

Finally, training should also align more closely with real adversary behaviour. Basing exercises and scenarios around detection, lateral movement and exfiltration scenarios based on real threat intelligence ensures teams are practising against how attacks actually unfold.

 

Becoming genuinely cyber-ready means escalation: harder scenarios, broader company participation and metrics that reflect performance under pressure. Cyber-resilience progress isn't measured by participation, but by how teams and leaders execute when the stakes are highest.

 


 

Dan Potter is Senior Director of Operational Resilience at Immersive

 

Main image courtesy of iStockPhoto.com and Kindamorphic
