Is AI a friend or foe? Tyler Reese, product manager for One Identity, explores the future of AI in cyber security.
Since Stephen Hawking first predicted in 2014 that it "could spell the end of the human race", artificial intelligence (AI) has become a topic of mass debate worldwide. The use of AI in cyber security in particular has fuelled a whole new subsection of the market, and the technology has become accessible to IT security professionals worldwide.
AI has enabled human researchers to move away from crunching data and numbers, to focusing on the bigger security picture. The increase in computing power, especially through economical cloud solutions and easy-to-use tools, has allowed a much wider range of users to apply sophisticated machine learning and artificial intelligence algorithms to solve their complex problems.
At the same time, companies and security vendors have realised how difficult it is to fight cyber criminals who constantly evolve new ways to infiltrate corporate networks while evading detection. For IT teams, updating and maintaining security solutions and policies to keep up with this volatile threat landscape is extremely costly, and almost unsustainable as the number of these attacks grows.
In fact, a study conducted by the Ponemon Institute found that the human costs associated with the implementation and regular maintenance of Security Information and Event Management solutions averaged $1.78 million per year for businesses. As such, organisations are eagerly searching for solutions that require minimal customisation and adjustment, and all signs now point firmly to self-learning technology.
Most AI and machine learning solutions possess self-adaptive capabilities and require little customisation and maintenance. Put simply, the technology analyses how things happen in a given environment and adapts to those surroundings. Hence, AI also allows for a significant reduction in maintenance and overhead costs.
In terms of security, AI can detect problems and attacks that humans have not yet encountered and that other technologies have not been explicitly programmed to identify. These are referred to as "unknown" threats.
For example, security researchers have shown that AI can be used to help identify malicious insiders who produce sporadic activity across multiple systems, even when that activity represents a tiny fraction of the total observed activity, i.e. less than 0.001 per cent.
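To make the idea concrete, here is a minimal sketch of such a heuristic: flag users whose events are a tiny share of all observed activity yet touch several distinct systems. This is an illustration only, not the method used by any particular researcher or vendor, and the event data, thresholds, and function name are all hypothetical:

```python
from collections import Counter, defaultdict

# Hypothetical event log: (user, system) pairs observed over time.
events = (
    [("alice", "crm")] * 5000
    + [("bob", "email")] * 4990
    + [("mallory", "crm"), ("mallory", "email"),
       ("mallory", "hr"), ("mallory", "finance")]  # sporadic, cross-system
)

def flag_sporadic_cross_system(events, max_share=0.001, min_systems=3):
    """Flag users whose events are a tiny share of total activity
    yet span many distinct systems -- a crude 'unknown threat' heuristic."""
    total = len(events)
    counts = Counter(user for user, _ in events)
    systems = defaultdict(set)
    for user, system in events:
        systems[user].add(system)
    return [
        user for user in counts
        if counts[user] / total <= max_share
        and len(systems[user]) >= min_systems
    ]

print(flag_sporadic_cross_system(events))  # → ['mallory']
```

Real systems learn these thresholds from data rather than hard-coding them, but the principle is the same: rarity alone is not suspicious, while rarity combined with unusual breadth can be.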
IT security teams can stay a step ahead of attackers by integrating AI technologies into their everyday routines and business operations. As IT systems grow more complex and more interconnected, AI technology can help correlate activities across multiple systems spanning months (even years in some cases) to identify a progressing threat.
AI is capable of making more nuanced decisions than we are accustomed to. The question is no longer whether something is allowed or not, or whether an action is malicious or harmless. We’re entering a world of machines calculating a multitude of probabilities and outcomes, and many see this as a foreign and frightening approach to security.
There is often confusion between how AI operates and a human's ability to understand how it has arrived at its outcome or conclusion. To be clear, in order to achieve the best results, an algorithm follows a process that, in many cases, is impossible for it to explain or for us to grasp fully. If the AI technology decides that an attack is taking place, it will put its defences up and do its job well.
On the flipside, a false detection and response can have significant consequences that are less appealing, such as cancelling a transaction that didn’t need to be terminated, suspending an account, or launching a costly investigation process. Many companies see AI as a threat to their business and customer loyalty due to these "false positives".
AI doubters also argue that the technology tramples on conscience and ethics. It learns and mimics the way humans make decisions or optimise parameters in order to achieve an ideal result.
However, that output does not always match the one we are looking for. Applied naively, AI algorithms can amplify our prejudices and create systems that discriminate against certain people or make decisions that a human deems unethical.
AI is certainly a weapon that will occupy a very important place in the cyber security defence arsenal. Limiting access, generating detailed audit logs, and strengthening surveillance are just a few examples of AI-based applications that are already being quickly adopted among enterprise IT security teams. And while AI will certainly help reduce risks of both internal and external threats, human operations will always have a place in effective IT security.
The objective of AI technology is not to replace human beings, but to allow them to devote their resources to more important activities. The best AI tools relieve us of tedious subordinate tasks and help solve more pressing problems. Of course, businesses must keep in mind that AI is a means and not an end; organisations must define objectives and choose the tools best suited to achieve them.
Using AI to free employees to accomplish other tasks is a game-changing benefit of this technology, and to an enterprise as a whole. Additionally, AI-driven behavioural analysis can be used to recognise changes in work habits and to inform security teams of threats in real time.
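In its simplest form, recognising a change in work habits means comparing new activity against a learned baseline. The sketch below, assuming entirely hypothetical login data, flags a login whose hour of day deviates sharply from a user's historical pattern; production behavioural analytics are far richer, but the baseline-and-deviation idea is the same:

```python
from statistics import mean, pstdev

# Hypothetical login-hour history for one employee (24-hour clock).
history = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]

def is_unusual(hour, history, z_threshold=3.0):
    """Flag a login whose hour deviates sharply from the user's baseline,
    measured in standard deviations (a simple z-score test)."""
    mu = mean(history)
    sigma = pstdev(history) or 1.0  # avoid divide-by-zero on a flat history
    return abs(hour - mu) / sigma > z_threshold

print(is_unusual(9, history))   # typical morning login → False
print(is_unusual(3, history))   # 3 a.m. login → True
```

A real deployment would maintain such baselines per user and per signal (hour, location, systems touched, data volume) and alert the security team when several deviate at once.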
AI is already arguably the biggest technology in the cyber industry right now. Most companies are talking about it, many have experimented with it, and more will in the year ahead. As companies progress and adopt AI technology, the industry will be forced to stop treating it as a harmful algorithm or concept, and will instead find ways to incorporate it into daily routines that help grow business efficiency.
Security remains an arms race, and attackers will continue to develop more sophisticated programmes and other hacking tools that allow them to infiltrate networks while escaping detection. Security teams will have to continue their efforts to win the race by using the best technology at their disposal to learn about the threat environment. AI is the best place to start.