Nigel Thorpe at SecureAge suggests that if you can’t protect legacy IT systems from cyber-attacks, it’s time to focus on protecting the data itself
In a perfect world, organisations would constantly update their IT systems with the latest, leading-edge technology to meet all their changing business and customer needs. But life isn’t that simple, with budgets to keep to and complex environments that evolve over time through business development, mergers and acquisitions. And of course, there is an argument to say, ‘if it ain’t broke, don’t fix it’.
Almost all businesses will have some legacy systems that are outdated but still do an essential job, along with software applications that have passed their ‘sell by’ dates but work perfectly well. Replacement can cause major upheaval, and it is often more cost-effective to maintain, integrate and manage existing systems than to ‘rip and replace’.
Legacy systems were not designed to be exposed to public networks, but as staff, customers and suppliers need direct access to business processes online, a wealth of tools and technologies have been developed to make it practical to integrate ageing systems with the new connected world. Service layers, Application Programming Interfaces (APIs) and Data Access Layers (DALs) can all be employed so that new online services can be built rapidly on top of tried and tested technology.
Outside of the silo
When data is processed, managed and stored within the confines of a legacy IT system, this ‘silo’ is good at maintaining information security. But once connected to the outside world, legacy system data – such as customer details, company operational data and intellectual property – becomes vulnerable as it travels from the silo, through web-based applications, to end users. Legacy systems operating in a more open environment than their design brief anticipated are exposed to cyber-attacks and insider threats, while data extracted from these security silos can be left unprotected.
The traditional approach to IT security is to add more layers of defence to prevent people getting in; but adding new perimeter security measures to a legacy system may not be possible. However, it is possible to protect the underlying files that contain data using encryption, without disrupting the legacy application.
This is because encrypting at the file system level means that properly authenticated applications simply work as normal, requesting files and writing them back to disk; data is decrypted and encrypted as it streams to and from the application, without affecting performance. Only the data that needs to be loaded into memory is decrypted, and files on disk remain encrypted at all times. This means that even if in-use files are stolen, the data is useless to the cyber-criminal.
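As a rough illustration of the pattern, the Python sketch below mimics a file-system-level encryption filter: the application writes and reads plaintext, while the bytes held ‘on disk’ are always ciphertext. This is a toy, not any vendor’s product – the XOR keystream stands in for a real vetted cipher such as AES, and the class and key names are invented for the example.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Derive a repeatable keystream by hashing the key with a counter.
    # Illustrative only -- a real product would use a vetted cipher (e.g. AES).
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # XORing with the same keystream reverses itself

class EncryptedFile:
    """Stand-in for a file-system filter: the application sees plaintext,
    but the bytes 'on disk' stay encrypted at all times."""
    def __init__(self, key: bytes):
        self.key = key
        self.disk = b""  # what an attacker stealing the file would see

    def write(self, data: bytes):
        self.disk = encrypt(self.key, data)  # ciphertext is all that is stored

    def read(self) -> bytes:
        return decrypt(self.key, self.disk)  # decrypted only on demand

f = EncryptedFile(key=b"demo-key")
f.write(b"customer: Jane Doe, account 12345")
assert f.read() == b"customer: Jane Doe, account 12345"  # app works as normal
assert f.disk != b"customer: Jane Doe, account 12345"    # stolen bytes are ciphertext
```

The point of the pattern is that the application never changes: it calls `read` and `write` exactly as before, and only the storage layer knows the data is encrypted.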
Data, data, everywhere
With new ways and places of work, new desktop tools and a new, more accessible layer over the legacy system, users are in a position to extract data. While this flexibility is a significant benefit, it also means that information becomes widely distributed, so organisations may have no clear view of where potentially sensitive information is stored.
Common methods used to protect this ad-hoc data include access controls, data classification and data loss prevention (DLP) systems. However, access controls are easily evaded by privileged users and clever hackers, and classification and DLP systems both require careful setup and ongoing maintenance so that sensitive information can be identified. Monitoring approaches will never recognise all important data, and access controls simply build fences around data that is otherwise unprotected. A much more effective and comprehensive approach is to focus on the data itself. After all, it’s the data that is of value.
Universal file-level encryption protects all data, all of the time – when stored, in transit and even in use – no matter where a file is moved or copied, and no matter how it is used. So, in the event of a data breach, the cyber-criminal can steal only encrypted data – and you can’t demand a ransom for data that is unintelligible.
Encrypting all data, rather than taking a selective approach, avoids the problem of missing some sensitive information. It can be argued that all data is sensitive, since even apparently innocuous information can be used for social engineering. Relying on users to decide what is sensitive and what is not simply does not scale, and automated classification tools are only as good as the rules that drive them.
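To see why rule-driven classification falls short, consider a minimal sketch of a hypothetical DLP rule set (the patterns and example strings are invented for illustration). Anything the rules don’t anticipate sails through unflagged, even though it may still be valuable to a social engineer:

```python
import re

# Hypothetical DLP rules: flag UK National Insurance numbers and
# 16-digit payment card numbers. Real products ship many more rules,
# but the principle is the same.
RULES = [
    re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),      # NI number, e.g. AB123456C
    re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),  # payment card number
]

def classify(text: str) -> bool:
    """Return True if any rule flags the text as sensitive."""
    return any(rule.search(text) for rule in RULES)

assert classify("Card on file: 4111 1111 1111 1111")
# A staff directory entry is prime social-engineering material,
# but it matches no rule, so rule-based classification never flags it:
assert not classify("Alice Smith, Head of Payments, ext. 2041")
```

Encrypting everything sidesteps this gap entirely: no rule has to anticipate which data matters.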
Universal file encryption is a far simpler and more secure approach, since no decisions need to be made about the sensitivity or location of data. And if the process is seamless and designed not to get in the way of either users or applications, there is no disadvantage to encrypting everything.
Focusing on protecting the data itself, rather than trying to prevent access to it, resolves the data security problems that come with modern use of legacy systems. By encrypting all data files behind the legacy application, attempts to steal information are effectively defeated. And by silently encrypting all data exported from the legacy system, all information, no matter where it is held, remains protected and unusable by cyber-criminals.
Nigel Thorpe is technical director at SecureAge
Main image courtesy of iStockPhoto.com