AI & machine learning are the future of cybersecurity, but don't expect the Terminator yet
June 23, 2017
Have you watched Good Morning Britain recently? Especially the episode where Piers Morgan and Susanna Reid 'interview' a humanoid? I have to say, I was very impressed by how human the machine looked and the clarity of the answers provided.
Every day I receive emails scaremongering about how artificial intelligence is ready to take over our lives and how all our jobs will be lost to robots. I am sure my job could be automated to a degree, but I am pretty certain that a machine wouldn't be able to write an interesting enough feature article (like this one)...yet. Machines have, however, been churning out financial news for news agencies for years, although, without human intervention, the output would be barely readable. According to a recent report, the risk is ever present but the timeline is a lot longer.
According to the researchers and experts who contributed to the report, over the next 40 years, the following jobs have the potential to be outsourced to machines: translating languages (by 2024); writing school essays (by 2026); driving a lorry (by 2027); working in retail (by 2031); writing a best-selling book (by 2049) and working as a surgeon (by 2053).
But what of cybersecurity? Could monitoring tasks and network sweeps be handed over to machines? Teiss.co.uk spoke to Oliver Tavakoli, CTO of Vectra Networks, to find out how the company has been putting machine learning towards the task of gatekeeping.
Teiss.co.uk: How and why did Vectra get into the machine learning and artificial intelligence space?
Tavakoli: 'We have been shipping products for over three years. To start with, the premise [of using machine learning] for the company was not popular at all. The myth being propagated five years back was that you can build a perfect wall. That has pretty much fallen by the wayside. For us, it was not so much about how malicious actors get in, but about seeing what their goals were. We wanted to recognise patterns and behaviours because, in a majority of cases, the damage accrues very slowly. Information is collated over hours, days and weeks rather than in an instant.
'To do that, we decided we would need a collection of technologies, essentially data science and machine learning. Data science treats data as a first-class modelling concern; machine learning is where you allow software to learn from the data, so you give it data to learn from; and AI is the next level up.'
Teiss.co.uk: So where does machine learning stop and AI start?
Tavakoli: 'There is no clear divide between where machine learning stops and artificial intelligence starts. In popular fiction, machine learning morphs into AI when it becomes almost sentient in some respects, hence anything from Skynet to the Terminator.
'In practical reality, however, machine learning and artificial intelligence exist on a spectrum.
'Machine learning is a means of automating tasks that were previously carried out by humans. At a certain point, when it becomes complicated enough, it becomes AI.
'At Vectra, we extract data within a corporate setup. This involves watching traffic and pulling information from logs, then creating models of normal and abnormal patterns. We then correlate the mass of data we gather and build narratives and storylines. Once we have done this, we can go back to our clients and say: these are the things these accounts are being used for, and for this length of time, so companies can understand what to do next time they face a similar situation.'
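Tavakoli's description of modelling normal versus abnormal account behaviour can be sketched in a few lines. The following is a hypothetical illustration, not Vectra's actual models: it baselines how many internal hosts an account touches per day, then flags a new day that deviates sharply from that learned norm.

```python
import statistics

def is_anomalous(history, new_count, threshold=3.0):
    """Flag `new_count` if it sits more than `threshold` standard
    deviations above the account's historical daily mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
    return (new_count - mean) / stdev > threshold

# An account that normally touches ~5 internal hosts a day
# suddenly touches 60: that is the slow-burn pattern made visible.
history = [4, 5, 6, 5, 4, 5]
print(is_anomalous(history, 60))  # True
print(is_anomalous(history, 6))   # False
```

Real systems model many such behaviours at once and correlate them into the "narratives" Tavakoli mentions, but the core idea of learning a baseline and scoring deviations is the same.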
Teiss.co.uk: How much data does machine learning have to run through to be classed as AI?
Tavakoli: 'There are two ways to think of the data. The first is globally consistent: we can learn from the global attacks that have taken place over the last ten years, and we also look at how reconnaissance has been performed in the past. We collect all the information we can get and train the machine offline, on our own premises. So when the machine arrives at the customer's premises, it has a preconceived notion of what's good or bad. There is no delay or lag when it enters the customer's environment; it can just get up and run.
'The second is local: we already know that local patterns are very important for attackers in every company. Think of administrative account usage across different companies, sectors and premises: it depends on the people, the tasks and which accounts are utilised. Learning has to occur locally, and that learning requires soak time in the environment. How quickly the automated activity kicks in also depends on the environment; studying connectivity patterns can take several months, so timescales vary.'
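The 'train offline, arrive pre-trained' idea Tavakoli describes can be sketched as follows. The feature names and figures here are invented for illustration: a trivial nearest-centroid model is fitted on labelled traffic features offline, then classifies new events the moment it is deployed, with no local warm-up.

```python
def train(samples):
    """Offline step: samples maps label -> list of feature vectors.
    Returns one centroid (mean vector) per label."""
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in samples.items()}

def classify(model, vec):
    """Online step: assign the label whose centroid is nearest."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], vec))

# Offline, on the vendor's premises: hypothetical features of
# [connections per minute, failed logins per minute].
model = train({
    "benign":    [[2, 0], [3, 1], [1, 0]],
    "malicious": [[40, 12], [55, 20], [60, 9]],
})

# On the customer's premises, day one: no soak time needed.
print(classify(model, [50, 15]))  # malicious
print(classify(model, [2, 1]))    # benign
```

The global knowledge ships inside the model; the local learning he describes then refines behaviour over the following weeks and months.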
Teiss.co.uk: Can you give us an example of where machine learning has worked well?
Tavakoli: 'The most relevant and current example I can think of is WannaCry. On one hand, it was a completely new piece of malware, and companies were not patched against it; on the other, the malware itself wasn't being stopped at the perimeter. Malware getting in is not the problem; new things always get in. However, WannaCry was a particularly virulent strain, so a mistake by just one person at the affected organisation could be leveraged against hundreds of systems in that enterprise. So, in this case, detecting malware that scans the environment and moves laterally was the mantra. Even when the malware succeeded on a fraction of machines and infected them with the follow-on ransomware, our software was able to deal with it.'
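A detector for the scan-then-spread behaviour Tavakoli describes might look roughly like this. The thresholds and addresses are invented for illustration: it flags any internal host that attempts SMB (port 445) connections to many distinct peers within a short window, the pattern WannaCry used to move laterally.

```python
from collections import defaultdict

def find_scanners(events, window=60, min_targets=20):
    """events: (timestamp, src_ip, dst_ip, dst_port) tuples.
    Returns the set of source IPs sweeping SMB across the network."""
    by_src = defaultdict(list)
    for ts, src, dst, port in events:
        if port == 445:  # SMB, the service WannaCry exploited
            by_src[src].append((ts, dst))
    scanners = set()
    for src, hits in by_src.items():
        hits.sort()
        for i, (ts, _) in enumerate(hits):
            targets = {dst for t, dst in hits[i:] if t - ts <= window}
            if len(targets) >= min_targets:
                scanners.add(src)
                break
    return scanners

# A host sweeping 25 distinct peers in under a minute is flagged;
# heavy traffic to a single file server is not.
events = [(i, "10.0.0.5", f"10.0.0.{100 + i}", 445) for i in range(25)]
events += [(i, "10.0.0.9", "10.0.0.200", 445) for i in range(30)]
print(find_scanners(events))  # {'10.0.0.5'}
```

The point is that the detection keys on behaviour (fan-out to many peers) rather than on a signature of the specific malware, which is why it can catch a strain no one has seen before.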
Teiss.co.uk: During simulations using machine learning, how quickly were you able to hook and catch malware?
Tavakoli: 'Our detection was in real time. The propagation pattern it followed was that of a worm. We all know about the Conficker worm that spread among Windows PCs more than ten years back. That particular worm used a combination of Windows OS flaws and dictionary attacks on administrator passwords, and proved unusually difficult to counter. The progression of malware from one machine to the next takes 30 seconds to a minute. The bigger question was whether the customer was set up to trigger remedial measures. Most of them were not. Machine learning and artificial intelligence are still imperfect enough to need human oversight, but in the case of worms, there isn't enough time for humans to act.'
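The 30-second-to-one-minute hop time Tavakoli quotes explains why humans cannot keep up. A back-of-envelope model, assuming each infected machine compromises one more every 45 seconds (the midpoint of that range), so the infected count doubles per interval:

```python
def infected_after(seconds, hop_time=45, start=1):
    """Infected machines after `seconds`, doubling every `hop_time`."""
    return start * 2 ** (seconds // hop_time)

for minutes in (1, 5, 10, 15):
    print(minutes, "min ->", infected_after(minutes * 60), "machines")
```

Under these assumptions there are 2 infected machines after one minute and more than 8,000 after ten (in practice capped by the size of the network); no human team reacts on that timescale, which is the case for automated response.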
Teiss.co.uk: If it happened again, would the time taken to quarantine a malware worm be shorter because of WannaCry? Because of machine learning?
Tavakoli: 'These worms will be more top of mind if another attack were to happen. The last large-scale instance was about ten years back, with the Conficker worm. I think that, between ransomware and worms, there will be a bias towards taking action fast to head off a disaster.
'But there is always a danger in pulling the trigger prematurely, without fully understanding the nature of the worm, and maybe getting it wrong sometimes. There will be organisational bias, and it is the same conundrum across cybersecurity.
'If I am trying to infect your machine, I will carry out either a watering-hole attack or send you a phishing email. These techniques have existed for years. But once the payload is downloaded, the speed at which the malware spreads laterally across your company's machines is also a vulnerability. That's why the success of phishing campaigns varies by company size and type.
'WannaCry was a huge land grab and the targeting was indiscriminate. The more companies and their machines learn from these attacks, the better the defences will be next time!'
As with almost everything in life, the proof of the pudding is in the eating: unless organisations try machine learning within their own perimeters, they won't really know whether it is for them.
And as for Terminator turning up at your doorstep, sorry to be the harbinger of such sad news, but Arnie will not be turning up tonight.