Deploying edge computing architecture? Read this before you jump
August 15, 2019
Andrew Bargery, Solutions Architect, F5 Networks, discusses what businesses need to think about before jumping into deploying edge computing architecture.
Edge computing is gaining momentum, and it isn’t hard to see why.
Rather than transmitting data to the cloud or a central data warehouse to be analysed, processing can take place at the ‘edge’ of a network, reducing network latency, easing pressure on bandwidth and delivering significantly faster response times.
The technology’s reach and influence may still be (relatively) embryonic, yet momentum has grown in EMEA over the past couple of years, particularly in the automotive and manufacturing industries. Indeed, any organisation with a glut of inter-connected devices and rapid data processing requirements would do well to start exploring deployment options now.
The first thing to consider is that the deployment of applications at the edge should not be taken lightly or seen as a mere extension of cloud computing. Applications and their data will be distributed across multiple locations, markedly increasing potential threat surfaces.
Furthermore, edge nodes may no longer be deployed in secure, central locations, making them more vulnerable to physical access. One of the biggest mistakes is to assume traditional security controls such as firewalls are enough.
Edge computing requires robust application-layer security, such as a web application firewall (WAF). Encouragingly, today’s advanced WAF (AWAF) solutions can dynamically protect applications with anti-bot capabilities and stop credential theft using keystroke encryption.
It is also possible to extend app-layer DDoS detection and remediation for all applications via a combination of machine learning and behavioural analysis.
Times have changed. Traditionally, physical appliances were deployed for centralised security and firewalling functions. As applications become more widely distributed and no longer tied to a fixed location, the security controls need to be deployable in the same way.
This means modern AWAFs must be able to be virtualised and deployed in private and public cloud infrastructures – while still providing the requisite levels of security and performance.
Virtualisation means AWAFs can support a variety of consumption and licensing models, including a per-app basis, as well as perpetual, subscription, and utility billing options for flexibility in the cloud and the data centre.
This enables SecOps to work with modern DevOps and NetOps teams to easily deploy app protection services in any environment. These can then be configured for individual applications or en masse. This holistic approach reduces management complexity, decreases OpEx, and efficiently delivers services to neutralise attacks.
Fundamentally, it is vital to deliver the right protection models wherever applications reside. In addition, it is essential to invest in automation. Otherwise, it is impossible to ensure consistent security policies are deployed across a distributed edge computing architecture.
For instance, if an application is deployed or deleted in an edge computing location, the appropriate network and security controls can be applied or removed automatically.
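As a minimal sketch of that idea, the controller below keeps per-application security policy in step with deployment lifecycle events. All names (the event shape, the policy fields) are illustrative assumptions, not any particular vendor's API:

```python
from dataclasses import dataclass, field


@dataclass
class EdgeSecurityController:
    """Keeps per-app security policies in step with app lifecycle events."""
    policies: dict = field(default_factory=dict)

    def handle_event(self, event: dict) -> None:
        app, action = event["app"], event["action"]
        if action == "deployed":
            # Attach a baseline policy; a real system would template this
            # per application (WAF rules, rate limits, bot defences).
            self.policies[app] = {"waf": "baseline", "ddos": "behavioural"}
        elif action == "deleted":
            # Remove controls so stale policies do not accumulate.
            self.policies.pop(app, None)


controller = EdgeSecurityController()
controller.handle_event({"app": "checkout", "action": "deployed"})
```

In practice the events would come from an orchestrator's webhook or message bus rather than direct calls, but the principle is the same: security configuration follows the application automatically.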
Due to the diffuse nature of distributed automation, organisations and service providers operating network and computing infrastructures also need to make sure their control interfaces are protected with the right Application Programming Interface (API) security solutions.
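One common building block for protecting such control interfaces is requiring a signed token on every request. The sketch below, using only Python's standard library, shows the shape of an HMAC check; the secret and handler names are hypothetical placeholders, and a production deployment would layer on TLS, key rotation and authorisation as well:

```python
import hashlib
import hmac

# Placeholder secret for illustration only; in practice this would come
# from a secrets manager and be rotated out of band.
SECRET = b"rotate-me-out-of-band"


def sign(payload: bytes) -> str:
    """Compute the HMAC-SHA256 signature a trusted caller would attach."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()


def handle_control_request(payload: bytes, signature: str) -> str:
    """Reject any control-plane request whose signature does not verify."""
    # compare_digest is constant-time, which avoids leaking information
    # about how much of the signature matched.
    if not hmac.compare_digest(sign(payload), signature):
        return "403 Forbidden"
    return "200 OK"
```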
In principle, edge computing can simplify security management because it delivers more clarity on where data originates and where it is going. Traditionally, everything goes to a central data centre or cloud system, where it can be harder to monitor and protect traffic efficiently as volumes soar.
Edge servers, on the other hand, can offload computing tasks from connected devices by caching information like a private cloud, and data can be accessed locally.
The importance of attaining an optimal security posture will only grow as more and more compelling edge computing use cases come online. An automated, smart and data-led approach to manufacturing is now becoming commonplace, enabling real-time insights for rapid decision-making in mission-critical scenarios – particularly for robotics or AI-powered devices.
Parts of the automotive industry are also revving up to gain a competitive edge, implementing multiple IoT sensors in driverless vehicles to detect movement in their surrounding environment. This includes acquiring data on the condition of the vehicle, such as latency-sensitive location and navigation information.
Feeding this data through a network to a central data centre or cloud system can be time consuming. Edge computing allows for data to be processed and analysed in real-time, improving consistency and response times.
Police body cameras are another interesting use case, with the technology allowing data to be compressed and encoded locally, sending short bursts of video to a local edge centre, speeding up the upload process and reducing pressure on a central network.
Elsewhere, retailers are benefiting from point of service (PoS) machines that can send credit card data to an edge compute node, removing the need for sensitive information to be sent across the network in a potentially more vulnerable state.
Then there’s the surging development of Augmented Reality (AR) and Virtual Reality (VR) applications, which are enthusiastically incorporating edge computing capabilities, harnessing the benefits of rapid responsiveness in the face of high-bandwidth usage.
Whatever the technology’s incarnations, it is critical to avoid crashing into problems that could have been avoided. Only jump in with both feet once you know exactly how the technology will benefit your organisation and, most importantly, how you intend to keep it safe.