How deep learning can hold back the false positive flood

Chuck Everette at Deep Instinct explains the importance of deep learning for AI-powered cyber-security tools

The complexity of the modern IT environment means that threat actors have a nearly endless variety of options for initiating an attack. Countering these threats requires organisations to continually monitor their entire IT environments with a variety of security solutions.

The threat alerts generated by these tools must go somewhere, however, and unfortunately more often than not the destination is the inbox of extremely overworked security operations centre (SOC) analysts, who must then manually assess each one.

In many cases, the security information and event management (SIEM) tools that aggregate threat data and turn it into alerts provide little in the way of context, forcing personnel to try to keep up with a constant deluge of alerts with no way of prioritising them. Teams often receive thousands of alerts every day, far more than is humanly possible to handle.

Compounding this problem is the fact that most, and sometimes all, of these alerts will turn out to be false positives: alerts that don’t correspond to an actual threat to the organisation. They are usually the result of a scanning tool lacking sufficient fidelity, or of legitimate activity that closely resembles known threat signatures.

As such, false positives are a growing crisis for many organisations, increasing their risk exposure and consuming a disproportionate amount of valuable time and resources.

So how bad is the problem, and what can be done about it?

The rising flood

We frequently encounter organisations that are drowning in a mounting flood of alerts, with most being false positives. In one stark example, a large enterprise was receiving around 75,000 alerts a day, with just two being legitimate threats.

To get a more empirical sense of the problem, we recently commissioned the Voice of SecOps report, which surveyed over 600 security decision makers and practitioners on a variety of issues. Respondents reported that, on average, 10 out of every 39 hours in a working week were spent handling false positives, meaning around a quarter of the working week is wasted on tedious manual work that generates no real value.

Genuine threat alerts can quickly be lost in this sea of false alarms. The backlog this creates is often so severe that it may be several days before an alert is assessed by a member of the SOC team. When a real alert provides the first signs of a serious attack, this means the threat actors are granted free rein of the environment for an extended period.

Alongside the more obvious security risk, this situation also fosters an extremely negative working environment for security personnel. In our survey, 90 percent of respondents stated that false positives contributed to low staff morale, as analysts spend a large chunk of each day grinding through repetitive, low-value manual work. A demoralised team is also more likely to miss genuine threats, a phenomenon often referred to as alert fatigue.

Alert fatigue is a leading cause of the security industry’s burnout problem, and it’s common to find that analysts will only stay in a role for 12 to 24 months before looking for greener pastures elsewhere or leaving the industry altogether.

Turning to automation

It’s clear that the false positive problem cannot be solved by manpower alone. This is a textbook case for AI-powered analytics, and indeed a growing number of SOCs are now supporting their human analysts with automated tools. Our research indicates that technologies such as machine learning (ML) and deep learning (DL) can significantly reduce the number of false positives while improving the chances of identifying unknown threats.

The most widespread form of AI currently in use is machine learning, in which a tool is trained on attack data until it can recognise patterns and threat indicators independently. A trained ML solution can rapidly analyse large volumes of threat alert data, saving human analysts hours of tedious, unrewarding work.
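
To make the idea concrete, here is a minimal sketch of training such a classifier with scikit-learn. The data, features and labels are entirely synthetic stand-ins; a real deployment would engineer features from an organisation’s own SIEM alert fields.

```python
# Minimal sketch: train a classifier to separate genuine threats from
# false positives. All data here is synthetic and for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Stand-in for historical alerts: one row per alert, one column per
# numeric feature (e.g. event count, payload entropy, source rarity).
X = rng.normal(size=(5000, 8))
# Labels: 1 = confirmed threat, 0 = false positive. Heavily imbalanced,
# as in practice.
y = (X[:, 0] + 0.5 * X[:, 3] > 2.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced")
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```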

Automated tools can filter out the false positives and leave the team to deal with genuine threats. Other processes can also be automated so that real but low-level alerts are dealt with without human intervention, as sketched below. Well-applied AI-powered automation can both reduce the company’s risk exposure and provide a powerful shot in the arm for SOC team morale and productivity.
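
The sketch below shows what such score-based routing might look like. The thresholds, field names and actions are illustrative assumptions to be tuned against an organisation’s own data, not a description of any particular product.

```python
# Illustrative triage logic: route alerts by model score so analysts
# only see those most likely to be genuine threats. Thresholds are
# assumptions for the sake of the example.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    score: float   # model-estimated probability the alert is a real threat
    severity: str  # e.g. "low", "medium", "high" from the source tool

def triage(alert: Alert) -> str:
    if alert.score < 0.05:
        return "auto-close"          # near-certain false positive
    if alert.score < 0.50 and alert.severity == "low":
        return "auto-remediate"      # real but low-level: scripted response
    return "escalate-to-analyst"     # genuine or ambiguous: human review

# Only the last of these three alerts reaches a human.
alerts = [Alert("a1", 0.01, "low"), Alert("a2", 0.30, "low"), Alert("a3", 0.92, "high")]
for a in alerts:
    print(a.alert_id, triage(a))
```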

Deep learning is the future

While most organisations currently use machine learning to power their analytics and automate security processes, it has several flaws that criminals are beginning to exploit. In particular, traditional ML tools are susceptible to being manipulated with “poisoned” data sets that have been created by another ML tool.

These sets feed the solution bad data, subtly training it to ignore genuine threat indicators and create false negatives for attackers to hide behind.
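
To make the risk concrete, the toy example below flips a fraction of “malicious” training labels to “benign” and shows how the resulting model lets more real threats through. Everything here is synthetic and deliberately simplified; real poisoning attacks are subtler.

```python
# Toy label-flipping poisoning demo: mislabelling some threats as benign
# during training teaches the model to wave real threats through.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 6))
y = (X[:, 0] - X[:, 1] > 0.5).astype(int)   # ground truth: 1 = malicious

def recall_on_malicious(train_labels):
    clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
    return clf.predict(X[y == 1]).mean()    # share of true threats caught

y_poisoned = y.copy()
malicious = np.flatnonzero(y == 1)
flipped = rng.choice(malicious, size=int(0.4 * malicious.size), replace=False)
y_poisoned[flipped] = 0   # attacker-influenced labels: threats marked benign

print("recall with clean labels:   ", recall_on_malicious(y))
print("recall with poisoned labels:", recall_on_malicious(y_poisoned))
```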

ML tools are also usually beholden to data feeds from antivirus (AV), endpoint detection and response (EDR) and other security tools. This means they can only react to threats rather than predict them, something adversaries are increasingly capable of exploiting with attacks designed to do their damage before they can be detected.

These flaws are addressed by deep learning (DL), a more recent and more advanced branch of AI. While it shares the same basic principles as ML, DL takes things a step further: it can make unsupervised decisions, autonomously classifying files as benign or malicious.

The solution starts by training on hundreds of millions of raw files until it can distinguish good data from bad independently. Once applied to a security stack, this allows it to operate beyond merely reacting to incoming datasets and begin predicting threat behaviour instead.
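
As a rough illustration of the general approach, in the spirit of published byte-level models such as MalConv rather than Deep Instinct’s own proprietary architecture, the sketch below defines a tiny neural network that classifies files directly from their raw bytes. The shapes and layer choices are assumptions for the example.

```python
# Simplified sketch of a deep learning classifier over raw file bytes
# (MalConv-style). Not a production architecture; inputs are random bytes.
import torch
import torch.nn as nn

class ByteClassifier(nn.Module):
    def __init__(self, max_len=4096, emb=8):
        super().__init__()
        self.embed = nn.Embedding(257, emb, padding_idx=256)  # 256 byte values + padding
        self.conv = nn.Conv1d(emb, 64, kernel_size=16, stride=8)
        self.head = nn.Sequential(
            nn.ReLU(), nn.AdaptiveMaxPool1d(1), nn.Flatten(), nn.Linear(64, 1)
        )

    def forward(self, x):                    # x: (batch, max_len) byte values
        h = self.embed(x).transpose(1, 2)    # -> (batch, emb, max_len)
        return self.head(self.conv(h))       # -> (batch, 1) malicious logit

model = ByteClassifier()
fake_files = torch.randint(0, 256, (4, 4096))  # stand-in for raw file bytes
print(torch.sigmoid(model(fake_files)))        # per-file malicious probability
```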

Coupled with this predictive ability, DL has blisteringly fast processing speeds and can identify a potential breach in less than 20 milliseconds, meaning the number of threat alerts received is lessened and the potential for false positives drastically reduced.
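
Continuing the byte-classifier sketch above, a quick way to sanity-check single-file inference latency is shown below; actual figures will vary enormously with hardware and model size, and nothing here should be read as reproducing the sub-20-millisecond claim.

```python
# Rough single-file latency check for the toy model defined above.
import time

with torch.no_grad():
    start = time.perf_counter()
    model(fake_files[:1])
    print(f"inference took {(time.perf_counter() - start) * 1000:.1f} ms")
```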

Deep learning is a relatively new development, although it is already in use at leading tech firms such as Tesla and YouTube. Organisations looking to incorporate the technology into their security stack will need to carefully consider how it will interact with existing solutions and processes, determining what will be enhanced and what will likely need to be replaced.

Once it is properly integrated into the security stack, SOC teams will benefit from an immediate reduction in the volume of false positives and other low-level alerts consuming their days. Better yet, the organisation will be able to change footing from reacting to incoming attacks to proactively predicting and stopping them before they can begin.

With the onslaught of attacks showing no signs of ceasing, businesses should be looking to future-proof their operations to give themselves a fighting chance against known and unknown attacks.

The proactive nature of deep learning could give them the much-needed shield to deflect the crippling blows heading their way.


Chuck Everette is Director of Cybersecurity Advocacy at Deep Instinct

Main image courtesy of iStockPhoto.com
