Eoin Keary, CEO and co-founder of edgescan, discusses why organisations are still failing to get the fundamentals of security right.
It is no secret that many organisations still struggle with the fundamentals of maintaining a stable security posture. In fact, when reporting on the global state of vulnerability management in 2018, edgescan discovered that a surprising number of organisations still had a 15-year-old vulnerability lingering on their systems.
Relatively common vulnerabilities such as cross-site scripting, SQL injection and command injection remain in the wild despite tried-and-tested solutions to prevent them from becoming serious attack vectors.
Most worryingly, many enterprises could potentially still be vulnerable to attacks such as NotPetya, having not patched the vulnerabilities that caused one of the largest cyberattacks in history.
To improve overall security posture, organisations can follow a set of principles that will significantly decrease their chance of becoming the next headline in cybercrime news, and these are:
Visibility and Profile
The first step of any security plan should be acquiring full and continuous visibility of the organisation's assets.
Make sure you understand which "moving parts" of the enterprise are most likely to be targeted by an attack, which assets are connected to the public Internet or other untrusted networks, whether the network has exposed ports, protocols and services, and finally, where the enterprise's APIs and critical assets are hosted.
If you don’t know any of the above information, then a thorough mapping of the attack surface needs to be considered on an ongoing basis in order to detect change. It is impossible to protect the unknown.
Organisations need to have a clear picture of where the majority of their security efforts need to be spent and should regularly – if not continuously – check for changes in the estate they need to secure.
As networks change, firewall rules are altered, systems are spun up and torn down, and DNS records change, only continuous monitoring can keep an enterprise safe and under control, and generate alerts when a new entry point becomes exposed.
Common vulnerabilities can sometimes go undetected by visibility and profiling tools, but exposed systems, services, APIs and consoles are flagged when accessible via the public Internet and can be remediated as soon as the weakness is discovered.
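As a rough illustration of how continuous monitoring can flag newly exposed entry points, the sketch below compares the TCP ports currently reachable on a host against an expected baseline. This is a minimal example, not a production scanner; the baseline host and port set are hypothetical.

```python
import socket

# Hypothetical baseline: the ports we expect to be reachable on each host.
EXPECTED = {"198.51.100.10": {443}}

def open_ports(host, ports, timeout=1.0):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = set()
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.add(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

def new_exposures(host, ports):
    """Ports reachable now but absent from the baseline -- candidates for an alert."""
    return open_ports(host, ports) - EXPECTED.get(host, set())
```

Run on a schedule against the known asset inventory, the difference between the observed and expected sets is what should raise an alert; a real deployment would also track services, protocols and DNS changes, not just open ports.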
Coupled with visibility and profiling, regular vulnerability management can help detect misconfigurations, known vulnerabilities (CVEs), systems that require patching and hardening, web application security weaknesses and systems with insecure default configurations, all of which can lead to a security incident.
Fullstack vulnerability intelligence means that the vulnerability detection service covers both the hosting infrastructure (Cloud, Data Centre and On-Premise) and any web applications or APIs residing on it.
The fullstack approach doesn't separate infrastructure and web-layer vulnerabilities into silos but provides a fuller picture of the potential risks an organisation may face. This lends itself to the DevSecOps view of cyber security and application development, which assesses vulnerabilities through a risk-based approach.
After all, regardless of which system is vulnerable, any unprotected entry point should be regarded as an avenue to breach that needs protection and monitoring.
Measure and Track
Vulnerabilities need to be tracked all the way to mitigation. Reporting a vulnerability alone does little to improve security posture. It needs to be acted on, and the speed at which this happens should be recorded.
Tracking the speed at which high- and critical-risk vulnerabilities are closed allows for better measurement of cyber security performance and a more focused approach to security. Prioritisation, however, can be difficult where there isn't full visibility across both web applications and the associated infrastructure, which results in cyber security blind spots.
The metrics worth recording include:
- MTTR – Mean Time To Remediate: one should expect this to be lower for high- or critical-risk issues
- Average Assessment Count: How frequent is the asset being assessed? Is the frequency matching deployment schedules?
- Risk Density: high-, critical- and medium-risk vulnerabilities per asset; exposure index; percentage of vulnerable assets
- CVE Landscape: the percentage of assets with at least one CVE associated with them
- Remediation Performance: focusing on high- and critical-risk vulnerabilities, how quickly are they being closed?
- Patch Performance: Mean Time To Remediate (MTTR) for vulnerabilities which have CVEs associated with them. Vulnerabilities that are associated with CVEs are typically remediated by patching or upgrading the affected software.
- Maximum Severity: The maximum severity value associated with the vulnerabilities on your assets. Ideally this should be as low as possible. A high value indicates that a potentially dangerous vulnerability is present somewhere in your infrastructure.
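To make a few of these metrics concrete, here is a minimal sketch of computing MTTR and the CVE landscape from a list of vulnerability records. The record format and sample data are hypothetical, not any particular product's schema.

```python
from datetime import date
from statistics import mean

# Hypothetical vulnerability records: asset, severity, optional CVE, open/close dates.
vulns = [
    {"asset": "web-01", "severity": "critical", "cve": "CVE-2017-0144",
     "opened": date(2024, 1, 2), "closed": date(2024, 1, 12)},
    {"asset": "web-01", "severity": "medium", "cve": None,
     "opened": date(2024, 1, 5), "closed": date(2024, 2, 4)},
    {"asset": "db-01", "severity": "high", "cve": "CVE-2021-44228",
     "opened": date(2024, 1, 1), "closed": None},  # still open
]

def mttr_days(vulns, severities):
    """Mean Time To Remediate, in days, for closed vulnerabilities of the given severities."""
    durations = [(v["closed"] - v["opened"]).days
                 for v in vulns
                 if v["severity"] in severities and v["closed"] is not None]
    return mean(durations) if durations else None

def cve_landscape(vulns):
    """Percentage of assets with at least one CVE-tagged vulnerability."""
    assets = {v["asset"] for v in vulns}
    with_cve = {v["asset"] for v in vulns if v["cve"]}
    return 100.0 * len(with_cve) / len(assets)
```

With the sample data above, the critical-risk MTTR is 10 days and both assets carry at least one CVE; an open vulnerability contributes nothing to MTTR until it is closed, which is exactly why closure speed needs to be tracked, not just detection.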
Nowadays, crimeware has become a commodity sold on the dark market, and data breaches have become an almost-daily occurrence. Nation-state-backed cyber-attacks are no longer a conspiracy theory, and industrial operations technology is more at risk than it has ever been.
In this climate, compromising on the fundamentals of cyber security, data protection and secure application deployment is no longer an option.