Cognitive dissonance and why tackling the people element of cyber security is so hard
Two psychologists, an academic and an ex-soldier walk into a bar… Unfortunately, that isn't currently possible given the COVID-19 situation, but it was the basis for an interesting webinar I took part in recently. We had a stimulating conversation about the cyber security industry's 'people problem' and why it is so hard to address. What became clear is that the industry needs to do more to identify, measure, quantify and manage human risk.

ICO statistics show that over 90% of security incidents occur due to human error. To anyone in the industry this is old news, and we have seen it happening for a number of years now. Huge amounts of money have gone into creating incredible tools to secure companies' tech, and a lot of thought has been put into developing processes. CISOs have done a great job of securing these parts of their business, and now need to turn their attention to people. We are living in a kind of cognitive dissonance, where we know what the problems are but aren't able to address them.

The challenges of securing people

Why? The short answer is that it is very difficult to address human risk, and we all had a great deal of sympathy for today's CISOs, who are having to move from a technical remit into a more social one. But the industry has recently started to take great strides in the right direction, and it is encouraging to see such a diverse array of people taking part in conversations about people-centred cyber security. We discussed a number of reasons why cyber security is struggling to get to grips with human risk, but the most interesting were:

  1. There is no framework for human risk. A review published by the European Union Agency for Network and Information Security (ENISA) last year found that there are only a few models available to companies that address the behavioural aspects of security, and none of them was a “particularly good fit for understanding, predicting, or changing cyber security behaviour”. In almost every other field of cyber security, a robust framework is available to guide companies in how to secure themselves. As an industry, we must look to take this crucial first step.
  2. Many security tools act as a blocker to users. There are a lot of great cyber security tools that can help us secure our emails, our browsers, our servers – the list goes on. However, most of these were built principally with security in mind, with little thought given to how they would affect the productivity of their users. Unfortunately, this means that a lot of tools are a real blocker to users, giving cyber security a bad reputation within companies and forcing users to circumvent them in order to get on with their day-to-day job. We must realise that security that doesn’t work for people doesn’t work.
  3. Security awareness training has been muddled. Training has often been hailed as the silver bullet for the people side of security. However, we have been using it for over a decade now and haven’t seen a fall in the number of security incidents caused by people. There are a few reasons for this. In practice, training often comes into conflict with a process or policy, leaving users confused. Also, training often isn’t very engaging – we have all sat through mandatory modules that aren’t relevant to our role, or are so basic that they become patronising. We must realise that training is just the start of addressing human risk, and make use of storytelling and role-relevant training in order to increase engagement.
  4. Phrases like ‘weakest link’ are still used, which victimises users. This can be disheartening for users, particularly those who are trying to help and doing a good job of keeping the company safe. With unhelpful phrases like this still in use, it is easy to slip into a vicious circle: users are put off, so they don’t engage with security, incidents occur, users are blamed, and the spiral continues.

Addressing human risk

Ultimately, these all point to the fact that we don’t have a solid understanding of human risk. Having done a great job of creating policies and developing tech, CISOs must now look to address this gap in their defences. Clearly, there are a number of hurdles to overcome, but the first step must be identifying, measuring, quantifying and ultimately managing human risk.

Security awareness training by itself has limitations – we have to go beyond it and try to get a much deeper understanding of human behaviour. This starts with getting to know each user and their behaviours – their willingness to comply with security policies; any risky behaviours they exhibit; whether they are engaged with security; and their general attitudes towards security, especially if there are tools that impede their day to day jobs.

By having an open, two-way conversation with users, CISOs and their teams can gather this information and accurately profile risk. Of course, having face-to-face conversations with every user in an enterprise would take up a huge amount of time and simply isn’t practical, so we must look to technology that allows us to take this vital first step towards addressing the people ‘problem’.
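As an illustration only – the article does not prescribe any particular model, and every field name and weight below is a hypothetical assumption – the kind of per-user risk profile described above could be sketched as a simple weighted score over a few behavioural signals:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    # Hypothetical signals gathered via conversations and tooling
    policy_compliance: float  # 0.0 (never complies) .. 1.0 (always complies)
    risky_behaviours: int     # e.g. phishing-simulation clicks this quarter
    engagement: float         # 0.0 (disengaged) .. 1.0 (highly engaged)

def human_risk_score(user: UserProfile) -> float:
    """Combine the signals into a 0-100 risk score (higher = riskier).

    The weights are illustrative, not drawn from any standard framework.
    """
    risk = (
        0.4 * (1.0 - user.policy_compliance)
        + 0.4 * min(user.risky_behaviours / 5.0, 1.0)  # cap the behaviour signal
        + 0.2 * (1.0 - user.engagement)
    )
    return round(100.0 * risk, 1)

# A user who mostly complies, clicked one simulated phish, and is fairly engaged
print(human_risk_score(UserProfile(policy_compliance=0.9,
                                   risky_behaviours=1,
                                   engagement=0.7)))  # prints 18.0
```

In practice a score like this could be used to prioritise which users get those two-way conversations first, rather than as a judgement in itself.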


Author:  Dr Shorful Islam, Chief Product & Data Officer at OutThink

Copyright Lyonsdown Limited 2021
