Even though the proportion of organisations deploying AI and machine learning technologies has risen by more than 35 percent over the past year, fewer than half of employees believe that their organisations can accurately assess the security of systems based on AI and machine learning.
Earlier today, a report from Help Net Security revealed that organisations across the globe are now adopting AI and machine learning technologies at a brisk pace, so much so that global spending on cognitive and artificial intelligence systems could reach $77.6 billion (£58.82 billion) in 2022, three times the amount being spent by organisations this year.
The report added that organisations are mostly deploying AI and machine learning technologies for uses such as automated customer service agents, automated threat intelligence and prevention systems, sales process recommendation and automation, and automated preventive maintenance.
It also predicted that business investment in such technologies could see the fastest growth between 2017 and 2022 for use cases such as pharmaceutical research and discovery (46.8% CAGR), expert shopping advisors & product recommendations (46.5% CAGR), digital assistants for enterprise knowledge workers (45.1% CAGR), and intelligent processing automation (43.6% CAGR).
“The market for AI continues to grow at a rapid pace. Vendors looking to take advantage of AI, deep learning and machine learning need to move quickly to gain a foothold in this emergent market. IDC is already seeing that organisations using these technologies to drive innovation are benefitting in terms of revenue, profit, and overall leadership in their respective industries and segments,” said David Schubmehl, research director, Cognitive/Artificial Intelligence Systems at IDC.
Organisations unclear about security of AI and machine learning tech
AI and machine learning technologies can serve many purposes while cutting costs, improving operational efficiency and minimising the risk of errors. It is equally important, however, that organisations deploying such technologies retain sufficient control and visibility over them to prevent misuse and security incidents.
However, in ISACA’s second annual Digital Transformation Barometer, fewer than half of respondents (40%) were confident that their organisations could accurately assess the security of systems based on AI and machine learning. This is despite the fact that such employees are aware that the AI and machine learning tools employed by their organisations are vulnerable to social engineering, manipulated media content and data poisoning attempts by malicious actors.
According to Help Net Security, AI tools have ushered in various advancements in accelerating medical research, improving farmers’ crop yields and assisting law enforcement, but such advancements “are unfolding so quickly that it often is challenging for organisations to develop the expertise needed to put the corresponding safeguards in place to account for potential security vulnerabilities and ethical implications”.
“ISACA’s global membership shows in this research that digital transformation is by no means complete, and organisations are still struggling with fundamental questions of risk, security and return on investment,” said Rob Clyde, ISACA Board Chair.
“It’s impossible to guarantee results when deploying less familiar technologies, but this survey suggests that organizations that have adopted new technologies overwhelmingly consider their journeys to be worthwhile. As organisations continue to navigate uncertain territory, finding qualified leaders to help steer these journeys and instill an organizational commitment to innovation is critical.”
AI to the rescue
Given that artificial intelligence is revolutionising the way organisations do business, improving their processes, enhancing efficiency and reducing the risk of human error in their operations, many organisations are unlikely to roll back their investment in the technology simply because of potential security risks.
At the same time, organisations could use artificial intelligence itself to strengthen their cyber security processes and uncover weaknesses in their IT networks that would be impossible to find through traditional methods.
Earlier this year, in an interview with Jeremy Swinfen Green, head of consulting at TEISS, Sue Daley, head of artificial intelligence at techUK, said that artificial intelligence can be very effective in protecting against known threats and “Advanced Persistent Threats”.
“By identifying potentially risky events, such as unusual behaviour, ML (machine learning) can help cyber security professionals defend IT systems more efficiently and more rapidly. This is important: a speedy response to an incident can reduce the potential for damage massively.
“And by freeing security professionals from the need to investigate known events, many of which will turn out to be benign, ML allows those professionals to focus on the most important or intractable problems.
“ML cyber security tools can even be programmed to take decisions on behalf of their human masters. They can block suspicious traffic or perhaps simply quarantine it until a human can investigate. This way, because a machine rather than a human is undertaking the defence, networks (even those of relatively small organisations) can be kept secure 24/7,” she said.
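The workflow Daley describes — learning a baseline of normal behaviour, flagging deviations, and quarantining them until a human can investigate — can be illustrated with a minimal sketch. The hostnames, request rates and z-score threshold below are hypothetical, and a real deployment would use a proper ML model rather than this simple statistical baseline:

```python
# Illustrative sketch (not from the article): flag hosts whose request
# rate deviates sharply from a learned baseline, then set them aside
# ("quarantine") for human review rather than blocking outright.
from statistics import mean, stdev

def find_anomalies(baseline, current, threshold=3.0):
    """Return (host, z-score) pairs whose current request rate deviates
    from the baseline mean by more than `threshold` standard deviations."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    flagged = []
    for host, rate in current.items():
        z = (rate - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            flagged.append((host, round(z, 1)))
    return flagged

# Baseline of normal requests-per-minute observed across the network
baseline_rates = [95, 102, 99, 104, 98, 101, 97, 103]

# Latest observations: one host is suddenly far above the norm
current_rates = {"10.0.0.5": 100, "10.0.0.8": 420, "10.0.0.9": 101}

quarantine = find_anomalies(baseline_rates, current_rates)
for host, z in quarantine:
    print(f"Quarantined {host} for review (z-score {z})")
```

Routing flagged hosts to a review queue instead of blocking them reflects the point in the quote above: the machine handles the routine triage around the clock, while humans focus on the cases that genuinely need judgement.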