Darktrace’s Director of Threat Hunting Max Heinemeyer explores how AI augments cyber-criminal capabilities at every stage of the kill chain
The mind of an experienced and dedicated cyber-criminal works like that of an entrepreneur: the relentless pursuit of profit guides every move. At each step of an attack, the same questions are asked: how can I minimise my time and resources? How can I mitigate risk? What measures will return the best results?
This way of thinking explains why attackers are turning to new technology to maximise efficiency, and why a report from Forrester earlier this year revealed that 88 per cent of security leaders now consider the malicious use of AI in cyber-activity to be inevitable. More than half of respondents to that same survey expect AI attacks to manifest themselves to the public in the next 12 months – or believe they are already occurring.
AI has already achieved breakthroughs in fields such as healthcare, facial recognition, voice assistance and many others. In the current cat-and-mouse game of cyber-security, defenders have started to accept that augmenting their defences with AI is necessary, with more than 4,000 organisations using machine learning to protect their digital environments.
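At its simplest, the machine learning these defenders deploy learns a baseline of normal behaviour and flags deviations from it. The sketch below illustrates the idea with a deliberately crude statistical detector: it flags any host whose outbound connection count sits far above the fleet's baseline. The hostnames, counts, and threshold are invented for illustration; real products model far richer behaviour than a single metric.

```python
# Illustrative only: flag hosts whose outbound connection count is
# anomalously high relative to the rest of the fleet. Commercial
# defensive AI learns a multi-dimensional baseline; this sketch uses
# a simple z-score on one metric to show the underlying principle.
from statistics import mean, stdev

def flag_anomalies(counts, threshold=3.0):
    """Return hosts whose count is more than `threshold` standard
    deviations above the mean of all observed counts."""
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [host for host, c in counts.items()
            if (c - mu) / sigma > threshold]

# Hypothetical workday traffic: 30 workstations behaving normally,
# plus one host moving an unusually large volume of data.
baseline = {f"ws-{i:02d}": 40 + (i % 7) for i in range(30)}
baseline["ws-compromised"] = 900

print(flag_anomalies(baseline))  # → ['ws-compromised']
```

A fixed z-score threshold is the simplest possible design choice; the point is only that the detector is learned from the environment's own data rather than from a signature written in advance.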
Enhancing the attack life-cycle
To a cyber-criminal ring, the benefits of leveraging AI in their attacks are at least four-fold:
- It gives them an understanding of context
- It helps to scale up operations
- It makes attribution and detection harder
- It ultimately increases their profitability
Let’s break down the life-cycle of a typical data exfiltration attempt:
Stage one: reconnaissance
Automated chatbots interact with employees via social media, leveraging profile pictures of non-existent people created by AI. Meanwhile, CAPTCHA-breakers are used for automated reconnaissance on the organisation’s public-facing web pages.
Stage two: intrusion
Attackers then craft convincing spear-phishing attacks, while an adapted version of SNAP_R can be leveraged to create realistic Tweets at scale, targeting several key employees. The Tweets either trick the user into downloading malicious documents or contain links to servers that facilitate exploit-kit attacks.
Meanwhile, an autonomous vulnerability-fuzzing engine based on Shellphish constantly crawls the victim's perimeter – internet-facing servers and websites – probing for new vulnerabilities that could provide an initial foothold.
Stage three: command and control
A popular hacking framework called Empire allows attackers to blend in with regular business operations, restricting command-and-control traffic to periods of peak activity. An agent using some form of automated decision-making engine for lateral movement might not even require command-and-control traffic at all. Eliminating the need for command-and-control traffic drastically reduces the detection surface of existing malware.
Stage four: privilege escalation
At this stage, a password crawler could feed target-specific keywords into a pre-trained neural network, creating hundreds of realistic permutations of contextualised passwords at machine-speed. These can be entered automatically in periodic bursts so as not to alert the security team or trigger resets.
Stage five: lateral movement
Lateral movement can be accelerated by concepts from MITRE's CALDERA framework, which applies automated-planning AI methods. This would greatly reduce the time required to reach the final destination.
Stage six: data exfiltration
Instead of running a costly post-intrusion analysis operation and sifting through gigabytes of data, the attackers can leverage a neural network that pre-selects only relevant material for exfiltration.
Conclusion
Offensive AI will make detecting and responding to attacks far more difficult. Traditional security controls that rely on rules and signatures are already struggling to detect attacks that have never been seen before in the wild – and these tools will be even less effective when AI attacks become the norm.
To stay ahead of this next wave of attacks, AI is becoming a necessary part of the defender’s stack. No matter how well-trained or how well-staffed, humans alone will no longer be able to keep up. Hundreds of organisations are already using autonomous response to fight back against new strains of ransomware, insider threats, previously unknown techniques, tools and procedures, and many other threats. A new age in cyber-defence is beginning, and the effect of defensive AI on this battleground is already proving fundamental.
Discover more about offensive AI at https://www.darktrace.com/en/ai-attacks/?utm_source=event&utm_medium=teissbenelux