The inflated rhetoric of cyber threat
What strikes the classical philologist first when studying the language of cyber threats, such as in this programme’s news item (The Most Dangerous Attack Techniques in 2021), is that the way we talk about danger hasn’t changed for millennia. We exaggerate and amplify the urgency of a threat, suggesting that there is no choice: unless we take a certain course of action – buy a certain anti-malware product, for example – catastrophe and ruin will befall us. What we need in order to appreciate different levels of cyber threat for what they are is a consensus on what counts as a catastrophe and what is merely a passing inconvenience.
Project 2020, an initiative of the International Cyber Security Protection Alliance (ICSPA) to anticipate and prepare for the future of cybercrime, directed by Dr Baines, established that by 2030 both cyber-attack and defence will have become AI-based. Although attacks have been widely automated for the past decade – think of spoofing domains or sending out scam emails – luckily for information security experts, they haven’t become overly intelligent yet. But in ten years’ time cybersecurity may become a fight of defensive AI against adversarial AI, without humans in the loop.
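Automated-but-unintelligent attacks of the kind described above can often be countered with equally simple automation. As a minimal sketch – the domain names and the distance threshold are invented for illustration, not taken from any real tool – a lookalike-domain check using plain edit distance:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def flag_lookalikes(candidates, protected, max_distance=2):
    """Flag candidate domains within max_distance edits of a protected domain."""
    hits = []
    for cand in candidates:
        for real in protected:
            if cand != real and edit_distance(cand, real) <= max_distance:
                hits.append((cand, real))
    return hits

# Hypothetical example domains -- not real observed spoofs.
print(flag_lookalikes(["examp1e.com", "totally-unrelated.org"],
                      ["example.com"]))
```

A scripted check like this is exactly the level the attacks currently operate at; neither side needs anything resembling intelligence yet.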
The limitations of ML
Although customers tend to regard AI as a silver bullet, there are a couple of caveats to its deployment. Guests on teissTalk always go out of their way to point out that AI is a misnomer for machine learning, and the conversation on managing different levels of threat alerts in this programme eventually boiled down to the limitations of ML. There are some very powerful tools on the market, but they aren’t right for everyone, and they often remain under- or misused, creating only heaps of data rather than intelligence. We’re far from the stage when people can be taken completely out of the loop. It may sound a cliché, but it’s true that currently ML is there to enable analysts, not to replace them.

Another typical ML-related concern is explainability. Clients need solutions that they can understand and use. Some understanding in the C-suite of what’s going on inside the black box is also key, because at the end of the day it’s not the vendor but the CEO or the CISO who will be held accountable if anything goes awry.
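The explainability point can be made concrete with a deliberately transparent triage scorer. This is an illustrative sketch only – the feature names and weights are invented, and no vendor’s model works this simply – but it shows the property clients are asking for: every alert score decomposes into named contributions that a CISO can inspect.

```python
# Hypothetical, hand-weighted alert features -- illustrative only.
WEIGHTS = {
    "known_bad_ip": 5.0,
    "off_hours_login": 2.0,
    "privileged_account": 3.0,
    "failed_auth_burst": 4.0,
}

def score_alert(features):
    """Return (total score, per-feature contributions) for a dict of
    boolean alert features, so the score is fully auditable."""
    contributions = {name: WEIGHTS[name]
                     for name, present in features.items() if present}
    return sum(contributions.values()), contributions

total, why = score_alert({"known_bad_ip": True, "off_hours_login": True})
print(total)  # 7.0
print(why)    # {'known_bad_ip': 5.0, 'off_hours_login': 2.0}
```

The trade-off, of course, is that such an interpretable scorer is far weaker than an opaque model; the panel’s point is that some of this transparency has to survive in whatever is deployed.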
Another problem area is anomaly detection with ML in a sizable company: with many users, who often display erratic behaviour, establishing what baseline normality looks like becomes challenging. Also, when triaging threats, it’s essential that the specific business context is fed into the software, which – while it can be done initially – may present an insurmountable problem when it comes to keeping that context up to date.
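The baseline problem can be seen in miniature with a simple z-score detector. The user histories and login counts below are made up for illustration: the same rule works for a steady user but goes blind for an erratic one, because the noisy history inflates the standard deviation.

```python
import statistics

def is_anomalous(history, new_value, threshold=3.0):
    """Flag new_value if it sits more than `threshold` standard
    deviations from the user's historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > threshold

steady_user = [10, 11, 9, 10, 12, 10, 11]   # daily login counts
erratic_user = [2, 90, 15, 60, 1, 70, 5]

print(is_anomalous(steady_user, 50))   # True: clearly outside the baseline
print(is_anomalous(erratic_user, 50))  # False: the noise swallows the signal
```

Real tools model far more than a single count, but the underlying difficulty is the same: an erratic user’s “normal” is wide enough to hide genuine incidents.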
Finally, ML is partly to blame for the high level of burnout among information security professionals – alongside stress and overwhelming workloads – as it can remove the most fulfilling aspects of their job: the investigations where they can use their judgement and “nose” and dive deeper than ML ever could.
teissTalk panellists’ advice
Get the basics right! Size per se shouldn’t be the reason for small businesses not to implement powerful tools such as SIEM (Security Information and Event Management). However, don’t deploy one while the organisation is still in the early stages of its maturity curve, because it can do more harm than good. Don’t source DDoS attack prevention tools if your business has little exposure to that threat. And proceed in baby steps rather than diving in headfirst: deploy ML for specific use cases first.