How should we deal with the dark side of Artificial Intelligence?
August 14, 2018
Vendor View: Naomi Hodges, Surfnet, on the good, the bad and the ugly sides of AI.
Artificial Intelligence is a concept that has been developing for quite some time, and for a while now it has been a tangible influence on our lives. AI has become mainstream, and it is in virtually everything we do, see, touch, or hear.
The problem is that life comes down to good and bad. Some people use their talent to make a positive contribution to society, while others earn money through morally questionable means, without caring about the community or the consequences of their acts.
Good or evil: the never-ending debate
Artificial Intelligence suffers a similar fate: when it is applied and developed by the right people, it can be the best provider of cybersecurity solutions for a field that sorely needs them. However, the issue remains that hackers and cybercriminals use the same tools and techniques to perform their shady activities.
The use duality issue
Sadly, the development of AI technologies could give rise to innovative forms of cybercrime and political distress. Reality indicates that cybersecurity enterprises and organizations are falling behind in the battle against hackers and cybercriminals.
When hackers strike, the digital security market can take a while to come up with a plausible solution, whereas cybercriminals usually don’t take long to find new vulnerabilities in diverse systems and networks, strike again, and get away with it.
Experts and pundits have recently described Artificial Intelligence as a double-edged sword, meaning that it is as likely to be used to combat crime and enhance security systems as it is to enable new attacks or even start a nuclear confrontation.
The importance of raising awareness
It all starts with awareness, of course. Better online and data protection habits would lead to lower cybercrime rates, including fewer identity thefts, privacy breaches, and stolen credit card numbers. However, there comes a point at which even the most privacy-conscious people can fall victim to AI technology used for evil purposes.
Artificial Intelligence has been a widespread theme since last year. Although 2018 hasn’t produced many important breakthroughs in the industry, the technology’s potential for being used to create new threats is evident. As a consequence, it is crucial to create efficient regulatory frameworks in order to be in a better position to prevent the malicious implementation of AI technologies.
Artificial Intelligence is a wonderful thing. However, its malicious implementations in the cybersecurity field threaten to endanger every existing online interaction.
People need to know more about Artificial Intelligence and how it can be used for good purposes. As they often say, it is usually easier to act in a bad and irresponsible way: AI can be a perfect tool to automatically detect software vulnerabilities. Even worse, it can be applied to strike when people least expect it: AI can power social engineering attacks, increasing the odds of a victim clicking on malicious links or attachments.
Sadly, the malicious use of artificial intelligence can transcend the digital realm and enter the physical one. With so many terrorist organizations in the world, the fear is that this technology will be weaponized by them.
If AI devices, appliances, and tools are used to make our daily lives easier, who is to say they can’t be used to terrorize people? There are ways to implant explosives in tech-powered developments and inventions.
Other malicious applications of Artificial Intelligence
Artificial intelligence is also being applied to generate political disruption in certain societies. It is used by some governments, particularly China’s, to suppress minorities and those who dissent from the regime.
These systems can also run disinformation campaigns or perform denial-of-information attacks in order to generate and spread fake news. What Russian hackers did in the 2016 United States presidential election is a perfect example.
Virtual Private Networks can help
A possible remedy is to design software and devices that are less prone to hacking attacks. Governments and international regulators need to be involved in this fight, as injections of capital may be needed to fund the research and development of the “good guys.”
According to Dmitri Alperovitch, a prominent figure at the security firm CrowdStrike, both sides (good and bad) will keep adjusting to one another, with AI remaining incredibly helpful to the cybersecurity industry. However, it’s also going to be beneficial to cybercriminals, and the battle is still wide open.
He forecasts greater benefits for the defensive side, with data collection as its primary advantage.
The fact remains that, while the best way to defend against artificial intelligence is to apply artificial intelligence, AI-based defense is not the remedy to all of our issues. There is a lot of work to be done.
Measures to apply
For starters, people should understand the need to improve the technical means of verifying how robust a system is. There also needs to be better, smoother policy integration between countries where AI is easier to implement and those that are less favored in that regard.
The primary goal of understanding the dangers of misinformation and the misuse of artificial intelligence is to minimize the impact of AI-powered cyberattacks and hacking incidents, and to use the technology as a defense mechanism.
The first thing that needs to be fostered is collaboration between the people making policies and the people doing the research or funding it. Governments and the cybersecurity industry share the responsibility of controlling, fostering, and funding innovative advances in artificial intelligence applied to digital security.
Artificial Intelligence researchers need to adopt ethical best practices to ensure that their inventions are both efficient and harmless to the average internet user and to the cybersecurity field.
Effort and investment are needed from everyone involved
One important step is to adopt an effective methodology for coping with the dual-use concerns of artificial intelligence. There may be some ambiguous or controversial waters to navigate, and such a methodology may prove to be the best tool for determining the right course of action.
There is, unfortunately, no immediate or quick solution to the threat of the malicious use of artificial intelligence in the cybersecurity field. The adoption of the aforementioned measures, though, can prove effective in coping with the effects of malicious activity and in building a framework for a mid- to long-term solution.
Some people in the industry have even stated that there might be no definitive solution that stops the malicious use of artificial intelligence to find vulnerabilities; there might only be palliative measures that help mitigate the impact of hackers’ activity.
By now, governmental agencies and departments are acquainted with the extent of the damage that a hacking scandal or a massive cyberattack can cause.
People at F-Secure, a well-recognized online security firm, believe that the aforementioned social engineering and disinformation campaigns will become easier with the ability to generate ‘fake’ content (text, voice, and video).
While some people can quickly determine whether an image has been altered or “photoshopped”, it may not be so easy for the mainstream media to know whether a specific piece of material was generated or altered by a machine learning algorithm. Panic can ensue unless there are information campaigns about artificial intelligence, its uses, and its applications.
In short, the thing with artificial intelligence is that projecting the future may be a futile exercise: AI will scale up the same threats we face today, but at the same time it will also scale up the defense mechanisms that fend off those threats.
In conclusion, there is a sizable knowledge gap between the countries and communities that know and use artificial intelligence tools and appliances and those places where, for various reasons that may escape any localized debate, the concept isn’t yet fully developed.
Governments, researchers, cybersecurity companies, and people within the industry need to work together to fight the advances of hackers and cybercriminals, not only by establishing a framework of policies to come up with possible solutions, but also by seeking investment in tools to combat the malicious use of Artificial Intelligence in the cybersecurity market.
It is a problem that may affect all of us: if hackers keep using and perfecting artificial intelligence appliances, devices, and tools, everybody is going to suffer, from individuals to entire governments. And the repercussions could be huge, given that data privacy and anonymity are at a premium these days.