Cyber criminals leveraging AI to carry out malicious attacks, warns Europol

Europol has warned in a new report that cyber criminals have the means and expertise to leverage AI both as an attack vector and an attack surface: to run social engineering attacks at scale, evade image recognition and voice biometrics, and launch ransomware attacks with intelligent targeting and evasion.

The new report, the product of joint research by Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI), and Trend Micro, notes that just as AI and ML algorithms are reshaping a growing range of sectors and helping to address complex global challenges, they can also enable a range of digital, physical, and political threats.

AI-as-a-Service will steadily lower the barrier to entry by reducing the skills and technical expertise needed to employ AI, while the Crime-as-a-Service (CaaS) business model will enable criminals without technical expertise to procure tools, services, and new technologies such as AI to extend the capacity and sophistication of their attacks, the report warned.

The fact that cyber criminals can use AI to make their activities more potent and sophisticated is not just a warning; it has been demonstrated in practice by security researchers. For instance, malware developers can use AI to obfuscate malicious code so that it evades both researchers and ML-based antivirus engines, and hackers can use AI to craft phishing emails that bypass spam filters, or apply ML techniques to years' worth of data on business email compromise (BEC) attacks to predict whether a given attack will succeed.
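The BEC-prediction idea is, at its core, ordinary supervised learning over historical attack data. As a rough illustration only, the sketch below trains a toy scikit-learn classifier on synthetic data; every feature and label here is hypothetical, standing in for the kind of records the researchers describe being mined.

```python
# Purely illustrative: a toy "will this BEC attempt succeed?" classifier.
# All features and data below are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical per-email features: sender/recipient familiarity score,
# count of urgency words, hour the email was sent, look-alike domain flag.
X = np.column_stack([
    rng.random(n),
    rng.poisson(2, n),
    rng.integers(0, 24, n),
    rng.integers(0, 2, n),
])

# Synthetic label: "success" correlates with familiarity and urgency.
logits = 2.5 * X[:, 0] + 0.4 * X[:, 1] + 0.8 * X[:, 3] - 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```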

Researchers from Europol, Trend Micro, and UNICRI also noted that hackers can use neural networks, and generative adversarial networks (GANs) in particular, to guess passwords more accurately: these models can be trained on a large dataset of leaked passwords and then generate new candidates that fit the same statistical distribution.
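The published proof of concept in this area is PassGAN. The sketch below is a minimal, purely illustrative version of that setup, pitting a generator against a discriminator over a tiny hypothetical "leak" list; real systems train on millions of leaked credentials.

```python
# A minimal, purely illustrative GAN over passwords (PassGAN is the
# published proof of concept). The "leak" list below is hypothetical.
import torch
import torch.nn as nn

VOCAB = "abcdefghijklmnopqrstuvwxyz0123456789_"  # "_" pads short passwords
V, MAXLEN, NZ = len(VOCAB), 8, 32
IDX = {c: i for i, c in enumerate(VOCAB)}

def one_hot(pw):
    pw = pw[:MAXLEN].ljust(MAXLEN, "_")
    t = torch.zeros(MAXLEN, V)
    for i, c in enumerate(pw):
        t[i, IDX[c]] = 1.0
    return t

# Hypothetical leaked passwords; real training sets hold millions.
leak = ["password", "pass1234", "qwerty12", "letmein1",
        "iloveyou", "sunshine", "dragon12", "monkey12"] * 8
real = torch.stack([one_hot(p) for p in leak])

gen = nn.Sequential(nn.Linear(NZ, 128), nn.ReLU(),
                    nn.Linear(128, MAXLEN * V))
disc = nn.Sequential(nn.Flatten(), nn.Linear(MAXLEN * V, 128),
                     nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def sample(n):
    # Softmax per character position, so outputs match the one-hot format.
    return gen(torch.randn(n, NZ)).view(n, MAXLEN, V).softmax(-1)

for step in range(500):
    fake = sample(len(real))
    # Discriminator learns to separate leaked passwords from generated ones.
    d_loss = (bce(disc(real), torch.ones(len(real), 1)) +
              bce(disc(fake.detach()), torch.zeros(len(real), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator learns to produce candidates the discriminator accepts.
    g_loss = bce(disc(sample(len(real))), torch.ones(len(real), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Emit candidate guesses: most likely character at each position.
for probs in sample(5):
    print("".join(VOCAB[i] for i in probs.argmax(-1).tolist()).rstrip("_"))
```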

At the same time, hackers can exploit exposed smart speakers to issue audio commands to nearby smart assistants such as Amazon Alexa or Google Home, which are often in control of home automation systems. Cyber criminals can also use software that implements neural networks to solve the CAPTCHAs website owners put in place to prevent automated abuse such as the creation of new accounts or the posting of comments and replies on forums.
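CAPTCHA solving of this kind is usually framed as image classification. A minimal sketch follows, assuming single-character CAPTCHAs and using PIL-rendered digits as a hypothetical stand-in for real CAPTCHA images; production solvers handle multi-character, heavily distorted images.

```python
# Illustrative only: a small CNN "CAPTCHA" character classifier trained
# on synthetic PIL-rendered digits, a hypothetical stand-in for real data.
import random
import torch
import torch.nn as nn
from PIL import Image, ImageDraw

def render_digit(d):
    # Draw one digit at a random offset on a 28x28 grayscale canvas.
    img = Image.new("L", (28, 28), 0)
    ImageDraw.Draw(img).text(
        (random.randint(4, 14), random.randint(4, 12)), str(d), fill=255)
    return torch.tensor(list(img.getdata()),
                        dtype=torch.float32).view(1, 28, 28) / 255

def batch(n):
    labels = torch.randint(0, 10, (n,))
    return torch.stack([render_digit(int(d)) for d in labels]), labels

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 7 * 7, 10),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(300):
    x, y = batch(64)
    loss = loss_fn(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

x, y = batch(256)
acc = (net(x).argmax(1) == y).float().mean().item()
print(f"per-character accuracy on synthetic digits: {acc:.2f}")
```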

"AI is intrinsically a dual-use technology at the heart of the so-called fourth industrial revolution. As a result of this duality, while it can bring enormous benefits to society and help solve some of the biggest challenges we currently face, AI could also enable a range of digital, physical, and political threats. Therefore, the risks and potential criminal abuse of AI systems need to be well-understood in order to protect not only society but also critical industries and infrastructures from malicious actors," the report said.

"It is safe to assume that cybercriminals will progressively integrate AI techniques to enhance the scope and scale of their attacks, thereby exploiting AI both as an attack vector and an attack surface, additionally powered by the service-based criminal business model."

Therefore, the researchers said, close cooperation with industry and academia is a must in order to develop a body of knowledge and raise awareness of the potential use and misuse of AI by criminals. Such cooperation will not only help anticipate malicious and criminal activities facilitated by AI, but also make it possible to prevent, respond to, and mitigate the effects of these attacks proactively.

This is not the first time that security researchers have warned about the possibility of cyber criminals using AI to conduct malicious activities. In early 2018, a report from the Future of Humanity Institute warned that cyber criminals could exploit advanced AI tools for malicious purposes, and that attacks using such tools could be more efficient and larger in scale than existing threats, a risk that had so far been underestimated.

“In the cyber domain, even at current capability levels, AI can be used to augment attacks on and defences of cyberinfrastructure and its introduction into society changes the attack surface that hackers can target, as demonstrated by the examples of automated spear-phishing and malware detection tools.

“As AI systems increase in capability, they will first reach and then exceed human capabilities in many narrow domains, as we have already seen with games like backgammon, chess, Jeopardy!, Dota 2, and Go and are now seeing with important human tasks like investing in the stock market or driving cars,” the report said.

Appearing before a House of Lords committee, experts at security firm Darktrace also warned that cyber criminals could misuse AI tools to impersonate individuals, learn their habits and writing styles, and take over their systems to spread malicious software to systems used by the victims' colleagues and acquaintances. They added that such an operation could snowball, victimising millions of people.

"Imagine a piece of malicious software on your laptop that can read your calendar, emails, messages etc. Now imagine that it has AI that can understand all of that material and can train itself on how you differently communicate with different people. It could then contextually contact your co-workers and customers replicating your individual communication style with each of them to spread itself," said Dave Palmer, director of technology at Darktrace.

"Maybe you have a diary appointment with someone and it sends them a map reminding them where to go, and hidden in that map is a copy of malicious software. Perhaps you are editing a document back and forth with another colleague, the software can reply whilst making a tiny edit, and again include the malicious software.

"Will your colleagues open those emails? Absolutely. Because they will sound like they are from you and be contextually relevant. Whether you have a formal relationship, informal, discuss football or the Great British Bake Off, all of this can be learnt and replicated. Such an attack is likely to explode across supply chains. Want to go after a hard target like an individual in a bank or a specific individual in public life? This may be the best way," he added.

Mr Palmer also told the House of Lords committee on AI that cyber criminals could use AI tools to infiltrate corporate meetings, use translation and transcription tools to access sensitive corporate secrets, and carry out round-the-clock surveillance of the enterprises they intend to victimise.
