The expert view: The reality of AI to defeat cyber-crime in a cloud-based world

Many companies would benefit from rethinking their security model when they move to the cloud, Pascal Geenens of Radware told an audience of senior security professionals from a range of sectors at a Business Reporter breakfast briefing at the Langham Hotel in London.

He argued that many companies expect too much from the security offered by cloud providers. While the likes of Amazon do secure their services and keep them patched and updated – often more effectively than companies did with their own servers – it is still possible to add insecure elements on top or make a mistake when implementing security procedures. It is dangerous to be complacent about cloud security.

Cloud-first, not cloud-only

Everyone present said their company used cloud services to some extent. For some, notably those attendees from the financial services sector, the deployment was still cautious. One attendee, from a major UK bank, said he could see his company one day becoming cloud-first, but he could never see it becoming cloud-only.

For others, legacy systems were slowing them down because they are hard to replicate in the cloud. The flipside of this is that legacy systems can sometimes drive a cloud migration because they start to break down or become unstable. An alternative must be found quickly, and the cloud is the logical solution.

Attendees were concerned about the concentration of risk from too much data being held by too few suppliers, as well as the problem of being locked in to a particular cloud provider. Very few, however, were concerned about security. As Mr Geenens had argued in his introduction, they were mostly satisfied that cloud security was at least as good as their previous on-premise security.

Securing the cloud

However, the cloud is significantly more open to automated attacks, and the best response to these may itself be automated defences – increasingly, that means using AI. Mr Geenens argued that for a system to be considered AI, it cannot just process data – it must also learn from the data it processes so that it improves. Security systems currently do this in a limited fashion, but their capabilities will increase in future.
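
The distinction Mr Geenens draws – processing data versus learning from it – can be illustrated with a minimal sketch. The class below is a hypothetical example, not any vendor's product: it flags traffic bursts against a baseline that it keeps re-learning from normal traffic, so its idea of "normal" improves over time. All names and thresholds are illustrative assumptions.

```python
class LearningRateMonitor:
    """Flags traffic bursts against a baseline it keeps re-learning."""

    def __init__(self, threshold_factor=3.0, alpha=0.1):
        self.threshold_factor = threshold_factor  # how far above baseline counts as anomalous
        self.alpha = alpha                        # learning rate for the moving average
        self.baseline = None                      # learned average requests per interval

    def observe(self, requests_per_minute):
        """Return True if the observation looks anomalous, then learn from it."""
        if self.baseline is None:
            self.baseline = requests_per_minute
            return False
        anomalous = requests_per_minute > self.threshold_factor * self.baseline
        # Learn only from normal traffic, so an attack cannot poison the baseline.
        if not anomalous:
            self.baseline += self.alpha * (requests_per_minute - self.baseline)
        return anomalous


monitor = LearningRateMonitor()
for rate in [100, 110, 95, 105]:  # normal traffic: the baseline settles near 100
    monitor.observe(rate)
print(monitor.observe(900))       # a sudden burst is flagged: True
```

A fixed threshold would merely be processing data; it is the continual update of the baseline that makes the system, in Mr Geenens's terms, a learning one.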

An attendee from a bank said that his business was too small to provide enough data points for an AI security system to be effective. However, in the cloud his data can be aggregated with that of other businesses, which makes it a viable option.

Among attendees, there was an openness to using AI security tools, but those present were concerned about how to evaluate them or choose between similar services. One attendee, from the Civil Service, expressed concern about the false positives that AI can generate. A human still has to sift through them and often there isn’t time.

Another worry for some attendees was that the response cannot be automated because every incident is different. The starting point is to work out exactly what is going on – something that might be better suited to a human than to an AI. Others argued that, even in these situations, humans are following rules; it is simply a matter of finding the right rules for the AI, and the necessary data.

Does existing security practice work in the cloud?

There was widespread agreement that moving to the cloud represents an increased risk because an attacker is more likely to find and exploit a security hole. However, there was disagreement over how this increased risk should be approached.

Some argued that there is no reason to change existing processes. For example, a common vulnerability in cloud services results from improperly managed permissions. Companies already have procedures for ensuring that permissions are set correctly; if employees follow them, the problem should not arise.
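
The kind of check involved is simple enough to automate. The sketch below uses a simplified, hypothetical policy format – a stand-in for a real cloud provider's access-control documents – to show how over-broad permissions can be found mechanically rather than by manual review.

```python
def find_open_permissions(policies):
    """Return names of resources whose policy grants access to everyone."""
    risky = []
    for resource, policy in policies.items():
        for statement in policy.get("statements", []):
            # A wildcard principal with an Allow effect means the resource is
            # readable or writable by anyone: a classic cloud misconfiguration.
            if statement.get("effect") == "Allow" and statement.get("principal") == "*":
                risky.append(resource)
                break
    return risky


# Hypothetical example policies: one open to the world, one restricted to a team.
policies = {
    "customer-backups": {"statements": [{"effect": "Allow", "principal": "*"}]},
    "internal-logs": {"statements": [{"effect": "Allow", "principal": "ops-team"}]},
}
print(find_open_permissions(policies))  # ['customer-backups']
```

Run on a schedule, a check like this enforces the existing procedure rather than relying on every employee to remember it.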

For the other group, however, the significant factor is that, despite existing processes, preventable breaches still happen with alarming regularity. If the procedures and policies we currently use do not prevent those breaches, then why would we trust the same processes in an environment where the risks are greater and the consequences might be more serious?

These attendees argued in favour of greater automation, whether AI-based or not, taking some of the simpler tasks out of the hands of humans so that they can be handled more consistently – and so that the humans are freed up for the tasks that cannot be automated.

All of those present expect their businesses to become cloud-first – if they aren't already – over the next few years, so this is a debate that will need to be resolved. The danger in exporting the familiar model into a cloud environment is that by the time we realise it is insufficient, it will be too late.

Copyright Lyonsdown Limited 2021
