Steve Blow at Zerto explains the need for continuous data protection
To appreciate the extent to which today’s high-availability, ‘always-on’ IT systems have become embedded in our lives, one needs only to examine the importance and impact of downtime. When even a brief outage at the likes of Google, Microsoft, Apple or even a social media platform makes global headlines, it’s clear we’re living in an era where digital infrastructure has demonstrated both its importance and vulnerability.
Always-on means real-time data protection
For technology platforms and service providers whose business models are digitally dependent, the impact of downtime can be felt across revenue, customer loyalty and reputation. While in many cases downtime can be measured in minutes, for organisations that rely on traditional backup technologies to protect their data and workloads, a return to business as usual can be far more time-consuming.
And while it can be difficult to quantify the financial impact of downtime for any individual business, a recent IDC report indicated that the average business – across all industries and sizes – suffers 2.9 hours of unplanned downtime a year. With the cost of downtime ranging from thousands of pounds per hour to hundreds of thousands, any reduction in downtime can have a significant impact on the ROI of new solutions or solution improvements.
The problem with traditional backup is that it still follows the approach first seen decades ago, relying on periodic snapshots rather than continuous protection. Yet, for businesses focused on providing an uninterrupted service to their customers, the priority has shifted to a data protection process that copes with every change and update, ensuring each new piece of data is safe and available in real time.
The challenges are diverse and growing in complexity for every always-on business. From ransomware attacks and database corruption to accidental deletions, data and application availability can no longer be considered separately.
Ransomware, for example, has become one of the most critical recovery headaches for businesses around the world. Once an organisation falls victim to an attack, files are locked down, and if the latest backup is from last night, last week, or even last month, the impact of the attack can be so severe that some organisations struggle to recover at all. Having to restore to a day-old or even week-old backup also means data loss and increased time and expense in recovery efforts.
What’s more, modern workload mobility technology allows applications to move seamlessly from on-premises to multi-cloud environments. IDC, for instance, reports that 70% of CIOs have a cloud-based strategy for application deployment. As a result, data protection must move in tandem with the data itself to ensure the greatest agility. It must meet availability SLAs while ensuring applications and data remain available – regardless of the disruption – because customer relations and loyalty leave little room for downtime or data loss.
The need for continuous data protection
IDC explains the solution eloquently: “In response to the need for ever greater application availability with less data loss, a new generation of continuous data protection (CDP) technology is emerging to significantly reduce recovery point objectives (RPOs).”
CDP works by automatically capturing and tracking data modifications, saving every version of user-created data locally or at a target repository. With little-to-no production overhead, incremental writes are replicated continuously and saved to a journal file. CDP’s change block tracking also allows users or administrators to restore data to any point in time with tremendous granularity.
By saving data in intervals of seconds – rather than days or months – CDP gives IT teams the scope to quickly rewind operations to just seconds before any disruption occurred. In the process, they can quickly recover anything from a single file or virtual machine right up to an entire site.
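The journal-and-replay mechanism described above can be illustrated with a minimal Python sketch. This is a toy model for explanation only – the class and method names are illustrative assumptions, not Zerto’s or any vendor’s actual API – but it shows how continuously recording every block write lets you rebuild a volume as it looked seconds before a disruption:

```python
class CDPJournal:
    """Toy model of CDP-style change tracking: every write is appended
    to a journal, enabling restore to (nearly) any point in time.
    Illustrative only - not any vendor's real implementation."""

    def __init__(self, base_image):
        # base_image: block_index -> contents at time zero
        self.base = dict(base_image)
        # Journal entries (timestamp, block_index, data), appended in time order.
        self.entries = []

    def record_write(self, timestamp, block, data):
        # Each incremental write is replicated into the journal as it
        # happens, rather than waiting for a periodic snapshot.
        self.entries.append((timestamp, block, data))

    def restore(self, target_time):
        # Rebuild the volume as it looked at target_time by replaying
        # journal entries up to and including that instant.
        image = dict(self.base)
        for ts, block, data in self.entries:
            if ts > target_time:
                break
            image[block] = data
        return image


# Example: ransomware corrupts block 1 at t=20; rewinding to t=19.9
# recovers the state from moments before the attack.
journal = CDPJournal({0: "v0", 1: "v0"})
journal.record_write(10.0, 0, "v1")
journal.record_write(20.0, 1, "encrypted-by-ransomware")
print(journal.restore(19.9))   # state seconds before the corruption
```

A nightly snapshot, by contrast, could only offer the state as of the previous night; the journal’s granularity is limited only by how often writes occur.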
The fundamental principles of CDP also apply to backup and long-term retention. Backup covers day-to-day restores and recoveries of files, VMs and/or specific volumes; it differs from disaster recovery in that there is no ‘disaster incident’ as such, and an entire site recovery is not needed.
Backup is also typically performed locally, and restores do not come from an offsite datacentre or cloud. Traditional disaster recovery, in contrast, focuses on large-scale disruptions to the business, is performed via remote recovery, and centres on failing over infrastructure.
As for long-term retention, most organisations need to keep data for extended periods due to compliance, tax, or internal demands. Much of this data isn’t critical to day-to-day operations, so it can be stored on cost-efficient media where quick recovery is less of a priority. In each of these circumstances, organisations need the same recovery capabilities that deliver RPOs of seconds rather than hours, and do so without relying on snapshots that stun production VMs.
Ultimately, any modern organisation that positions itself as reliable and trustworthy must ensure its technology infrastructure can address the wide variety of challenges that can disrupt the ‘always-on’ model. By approaching backup as a continuous process, it becomes possible to keep service availability in line with customer expectations. In doing so, organisations can meet the challenges of the connected digital economy head-on and, in the event of a cyber-attack or technology failure, focus on immediately returning to business as usual.
Steve Blow is Technology Evangelist at Zerto.