How AI and Machine Learning are changing the rules of cyber security
June 7, 2018
TEISS guest blogger and cybersecurity consultant Harold Kilpatrick talks us through the impact of AI and Machine Learning on cyber security.
The rapid development of artificial intelligence may significantly improve business efficiency, but the technology could also pose serious threats to online security, a report by a group of UK and US experts warns.
As AI becomes more powerful and faster at performing automated tasks, it is being adopted in a wide variety of industries, from manufacturing to software development.
In fact, analysts expect that by 2020 artificial intelligence solutions will be applied in almost all new software products and services, which will irreversibly change the way we interact with technologies and make use of their benefits.
But in the quest for innovation and better operations, many miss the obvious risks AI and machine learning could bring. While these technologies have already proven to be extremely helpful in fighting ever-emerging cyber threats, experts say that the same techniques could also be used to introduce new types of attacks and boost cybercrime.
“As AI capabilities become more powerful and widespread, we expect the growing use of AI systems to lead to the expansion of existing threats, the introduction of new threats and a change to the typical character of threats,” researchers write in the report.
Therefore, before enlisting AI to help enhance cybersecurity, experts strongly suggest remembering that cybercriminals will also use artificial intelligence to get around your defense strategy.
How AI and machine learning can help prevent cyber attacks
AI systems and deep learning algorithms are already helping cybersecurity professionals develop effective solutions to fight against cyber crime. If it weren’t for artificial intelligence and machine learning, the cybersecurity landscape would be very different than it is right now.
As cyber threats evolve, and the attacks become more complex and widespread, conventional defense tools are often not enough to detect and stop them in time. Therefore, security solutions powered by machine learning are the next big thing in cybersecurity.
Thanks to their ability to learn and adapt over time, such tools can promptly eliminate well-known threats and, by recalling and processing data from prior attacks, respond to newly emerging risks before they do any harm.
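The core idea — learn a baseline from prior traffic, then flag deviations — can be sketched in a few lines. This is a deliberately minimal illustration using simple z-scores rather than a real ML model; the feature names, values, and threshold are all invented for the example.

```python
# Minimal sketch: learn a baseline from past (benign) traffic, flag deviations.
# Features and thresholds are illustrative, not from any real product.
from statistics import mean, stdev

def fit_baseline(history):
    """Learn a per-feature (mean, stdev) pair from prior observations."""
    columns = list(zip(*history))
    return [(mean(c), stdev(c)) for c in columns]

def is_anomalous(baseline, sample, z_threshold=3.0):
    """Flag a sample if any feature deviates more than z_threshold sigmas."""
    return any(abs(x - m) / s > z_threshold
               for x, (m, s) in zip(sample, baseline))

# Past sessions: (bytes sent, duration in seconds, failed logins)
history = [(500, 2.1, 0), (480, 1.9, 0), (520, 2.0, 1),
           (510, 2.2, 0), (495, 1.8, 0), (505, 2.0, 1)]
baseline = fit_baseline(history)

print(is_anomalous(baseline, (50_000, 600.0, 25)))  # True  -> raise an alert
print(is_anomalous(baseline, (502, 2.0, 0)))        # False -> normal traffic
```

Production systems replace the z-score with trained models (anomaly detectors, classifiers) and far richer features, but the shape of the approach — learn from prior data, then score new events — is the same.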
Another benefit of artificial intelligence is its ability to perform specific tasks on its own, saving time and reducing the risk of human error. Unlike people, AI systems handle threats consistently, following a standardized playbook and responding to each threat in the most effective way.
With AI systems on their side, security experts can spend less time performing routine tasks and focus on building a stronger defense that can stop sophisticated cyber attacks before they even occur. Therefore, implementing machine learning and AI systems is crucial to staying one step ahead of cybercriminals.
And yet, no technology is a silver bullet, and AI is just a tool, which can only do what criminals or security experts command it to do.
Artificial intelligence and machine learning in the hands of cybercriminals
Just as cybersecurity professionals are making use of AI and ML to develop better protection tools and strategies, criminals are using the same technologies to look for potential vulnerabilities, improve existing techniques, and create new types of cyber attacks.
“With cybersecurity, as our models become effective at detecting threats, bad actors will look for ways to confuse the models,” states Steve Grobman, CTO at McAfee.
The danger with machine learning is that these technologies are becoming mainstream, making it easier and cheaper for cybercriminals to pull off their crimes.
The good news is that even with the help of AI, sophisticated, disruptive attacks still require time, money, and effort. The bad news is that, alongside such complex attacks, cybercriminals can also improve good old scams. With the help of AI, attacks that imitate human behavior are becoming even more convincing.
By accurately mimicking the language and writing style of actual people, bad actors can now create an email that looks like it has been sent by your best friend or a colleague. The trick might be old, but it works: many people still haven't learned to spot obvious phishing attempts and keep clicking on malicious links.
We all know someone who has clicked on a phishing link and unwittingly let criminals into their network. Now that AI is here, it will be even trickier to tell which email was written by your boss and which is fake, meaning that even tech-savvy users may fall victim to AI-based scams.
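Part of the reason AI-written phishing is so dangerous is that classic filters lean on telltale phrasing. The toy scorer below shows that style of defense; the signal words and weights are invented for illustration and are far cruder than real filters. An AI-generated message that mimics a colleague's natural writing would contain none of these signals and sail straight through.

```python
# Toy heuristic phishing scorer. The patterns and weights are illustrative;
# real filters use trained models and many more signals (headers, links, etc.).
import re

SIGNALS = {
    r"\burgent\b": 2,
    r"\bverify\b": 2,
    r"\bpassword\b": 2,
    r"\bclick (?:here|below)\b": 3,
    r"\baccount.{0,20}suspended\b": 3,
}

def phishing_score(text):
    """Sum the weights of suspicious phrases found in the message body."""
    lowered = text.lower()
    return sum(w for pattern, w in SIGNALS.items() if re.search(pattern, lowered))

msg = "URGENT: your account was suspended. Click here to verify your password."
print(phishing_score(msg))  # high score -> treat as likely phishing
```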
But phishing emails written by machines are nothing compared to the possible dangers that weaponized technologies could bring.
By employing the power of machine learning, hackers will gradually develop tools that can not only automatically scan systems for potential vulnerabilities, but also effectively learn about the systems they are about to target, making attacks more accurate and, therefore, even more damaging.
About a year ago, researchers created AI that can easily tweak malware code to circumvent anti-malware AI. The example clearly demonstrated how the same technology can be used both to build robust defense systems and to bypass them.
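The cat-and-mouse dynamic can be shown in miniature: a detector that matches suspicious strings, and an evasion step that transforms the sample so the match fails while the behavior is unchanged. Both sides here are deliberately simplistic stand-ins; the real research used ML on both sides (e.g., iteratively perturbing malware features until a learned classifier misclassifies them), and the API names below are just illustrative strings.

```python
# Toy cat-and-mouse: a string-matching "detector" and an evasion transform.
# Both sides are stand-ins for the ML-vs-ML systems described in the article.
SUSPICIOUS = ("CreateRemoteThread", "VirtualAllocEx", "WriteProcessMemory")

def detector(sample: str) -> bool:
    """Flag a sample containing two or more suspicious API names."""
    return sum(api in sample for api in SUSPICIOUS) >= 2

def evade(sample: str) -> str:
    """Attacker-side transform: split the strings so naive matching fails."""
    for api in SUSPICIOUS:
        sample = sample.replace(api, api[:4] + '" + "' + api[4:])
    return sample

payload = 'h = VirtualAllocEx(p); WriteProcessMemory(h, buf)'
print(detector(payload))         # True  -> caught by the defender
print(detector(evade(payload)))  # False -> same intent, no longer detected
```

The defender's natural response is to train on evaded samples too, which in turn pushes attackers to find new transformations — exactly the escalation loop the article describes.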
So what’s next in the AI vs. AI battle?
Technological advancements are pushing forward cybersecurity systems that are constantly learning, adapting, and helping professionals develop new methods to prevent cyber attacks. Simultaneously, artificial intelligence's ability to learn is also assisting hackers in identifying security flaws and inventing new types of malware.
Artificial intelligence in the hands of criminals is a terrifying concept. At the same time, it highlights the importance of researching AI capabilities and developing intelligent defense strategies.
Therefore, in order to stay ahead in the AI vs. AI battle, the infosec community should be thinking about replacing outdated security tools with intelligent technology that continuously learns about emerging threats. Essentially, the main focus should be on detecting and responding to attacks before they even occur.
In a digital age where artificial intelligence and machine learning are becoming increasingly common, the detection of and response to attacks must be quicker than ever.