Using browser and OS versions to measure cyber risk

Piers Wilson at Huntsman Security explains how browser and OS version data can be very useful for managing cyber risk.

One of the challenges in cyber security is how to measure the status of security controls to quantify cyber risk – even controls that should be ubiquitous, baseline and foundational.

This problem has a number of dimensions. For example, when assessing maturity it is often necessary to ensure that a technical control (which might be perfectly robust) is governed by a policy and actually generates the audit information that makes it verifiable.

More common, however, is the need to ensure that a technical configuration is (a) correct (i.e. matches policy, intent or compliance requirements) and (b) has actually been implemented, and to what degree.

Measuring this can be difficult in highly distributed environments, which leads to readings based on assumptions – and it is typically these assumptions that prove flawed when problems later emerge.

Assumptions and guesswork

One place where this occurs is in the configurations or versions of endpoint software on the network. An enterprise-wide Windows rollout or browser update might have been carried out, but did it reach all the systems it was supposed to cover?

Old laptops bought for specific purposes, systems that control physical access and are never directly logged into, contractors using their own specialised equipment, other corporate guests – these can all be skipped when changes are applied. They are the very vulnerable systems that attackers aim to locate and target, not the several thousand well-managed and well-patched workstations.

To quantify this problem, several service providers now try to gauge security performance using external sources of information and assumptive benchmarks. However, this is counter-intuitive in itself: how can you derive internal network configuration information without looking inside the network?

Cyber risk: Outside looking in

The answer (in the example we are discussing) is that browsers reveal information about themselves – notably via the User-Agent header – whenever they connect to web pages hosted on a server. See https://www.whatismybrowser.com for an example. This blog post is being written on a system running Safari 14:

Safari 14 on macOS (Catalina)

That information wouldn’t be much use if you had to trawl every company the users have ever connected to in order to harvest their browser details. But when you visit a web page, the adverts displayed often come from a small set of web advertising companies – so these companies hold a vast number of end-user browser details, across all users and irrespective of the actual web sites visited.
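Harvesting browser details from such logs amounts to parsing User-Agent strings. The sketch below is a minimal illustration with a handful of hypothetical patterns; production parsers (such as the open-source ua-parser project) handle thousands of variants.

```python
import re

# Illustrative patterns only; real User-Agent parsing needs far more rules.
UA_PATTERNS = [
    # Safari reports its version in a separate "Version/" token.
    (re.compile(r"Version/(?P<v>[\d.]+).*Safari/"), "Safari"),
    (re.compile(r"Chrome/(?P<v>[\d.]+)"), "Chrome"),
    (re.compile(r"Firefox/(?P<v>[\d.]+)"), "Firefox"),
]

def parse_user_agent(ua: str) -> dict:
    """Extract a coarse browser name and version from a User-Agent header."""
    for pattern, name in UA_PATTERNS:
        m = pattern.search(ua)
        if m:
            return {"browser": name, "version": m.group("v")}
    return {"browser": "unknown", "version": None}

# The kind of User-Agent the article's example system might send:
ua = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 "
      "(KHTML, like Gecko) Version/14.0 Safari/605.1.15")
print(parse_user_agent(ua))  # {'browser': 'Safari', 'version': '14.0'}
```

An ad network running something like this over its request logs ends up with browser/OS sightings keyed by client IP address – which is exactly the dataset the next step relies on.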

Secondly, information is publicly available about which network addresses are owned and used by which companies. This can be used to identify the organisation and its geographic location. It’s often used in security to convert an IP address to a location, but it can also be used to map the browser information above to an organisation and even to a particular office.

On the surface this seems to answer the question – we have one dataset of browsers and OS versions linked to IP addresses and another linking IP addresses to companies and offices.  But does this provide a reliable way to externally connect browser/OS versions and patching status to a specific organisation?
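The join between the two datasets is mechanically trivial, which is part of the appeal. A sketch, using entirely made-up sightings and address ranges (real services would draw on ad-network logs and WHOIS/geo-IP databases):

```python
import ipaddress

# Hypothetical dataset 1: browser/OS sightings keyed by client IP,
# as an advertising network might hold.
browser_sightings = [
    {"ip": "203.0.113.10", "browser": "Chrome 96", "os": "Windows 10"},
    {"ip": "203.0.113.25", "browser": "Safari 12", "os": "macOS 10.13"},
    {"ip": "198.51.100.7", "browser": "Firefox 95", "os": "Ubuntu 20.04"},
]

# Hypothetical dataset 2: address ranges registered to organisations.
org_ranges = {
    "Example Corp (London office)": ipaddress.ip_network("203.0.113.0/24"),
    "Another Ltd": ipaddress.ip_network("198.51.100.0/24"),
}

def org_for_ip(ip: str) -> str:
    """Map a client IP to the organisation whose range contains it."""
    addr = ipaddress.ip_address(ip)
    for org, network in org_ranges.items():
        if addr in network:
            return org
    return "unknown"

for sighting in browser_sightings:
    print(org_for_ip(sighting["ip"]), "->", sighting["browser"], sighting["os"])
```

The hidden flaw is not the join itself but the assumption baked into it: that every sighting behind an organisation’s address belongs to one of its managed devices. The next section unpacks why that assumption fails.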

Making an ASS of U and ME

The reality is that it does give an answer, but that answer relies on assumptions, and quite often flawed ones.  And that means that any decisions about the state of security controls are similarly flawed.

Some organisations allow visitors to connect to their network in a permitted way, which means that any Internet access from those browsers and workstations will cloud the organisation’s browser/OS version results: the connections to the outside world (and hence to the advertising browser databases) originate from the organisation’s external network address, even though the devices are not its own managed systems.

There are many such scenarios – guest Wi-Fi networks, external users connecting devices for meetings, contractors using their own systems, employees with mobile devices permitted on corporate networks, guests in hotels. An organisation can easily appear to be bad at patching and software configuration just because it has hosted a major careers event for hundreds of students.

In addition, users who are part of the organisation might be out of the office, working from home or at customer sites, or based in smaller offices where the IP/network provision comes from the building’s telecoms provider. These systems, being away from the corporate network, might be the very ones that are not regularly updated or patched, and hence the riskiest; but because they connect to the Internet from hotels, client sites, Starbucks branches or restaurants, they are never associated with the enterprise risk scores that a limited external assessment produces.

In essence, we are making decisions based on results from an incomplete and polluted sample.

An introspective view of cyber risk

The solution is to look within the network itself, where the systems we want to assess or measure can be examined directly. In the example we’ve been using, the devices on the network are easily visible, and their role or ownership can be discerned far more easily.

It would be very simple to see a distinction between systems on a guest Wi-Fi network (the student conference, the visiting business partners and so on) and those on the corporate network (where employees use IT-issued kit), and consequently to include or exclude them from an assessment as appropriate.
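With that internal visibility, excluding guest devices from a patching metric is a one-line filter. A sketch with hypothetical subnets and inventory data:

```python
import ipaddress

# Hypothetical network plan: corporate devices and guest Wi-Fi sit on
# separate, known subnets.
CORPORATE = ipaddress.ip_network("10.0.0.0/16")

# Hypothetical inventory, as seen from inside the network.
inventory = [
    {"ip": "10.0.4.17",   "os": "Windows 10 21H2", "patched": True},
    {"ip": "10.0.9.201",  "os": "Windows 10 1909", "patched": False},
    {"ip": "10.200.3.44", "os": "Android 9",       "patched": False},  # guest Wi-Fi
]

# Only devices on the corporate subnet are in scope for the assessment.
in_scope = [d for d in inventory
            if ipaddress.ip_address(d["ip"]) in CORPORATE]

patched = sum(d["patched"] for d in in_scope)
print(f"{patched}/{len(in_scope)} in-scope devices patched")
```

The guest device no longer drags the figure down – precisely the correction an outside-in view cannot make, because from the Internet all three devices appear behind the same external address.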

If, as part of an audit activity or a third-party supplier assessment, you are trying to validate operating system versions, patches, the browsers in use or their update schedules, the ability to collect metrics on security controls from within the network is crucial.

Intrusive security oversight

One perceived problem with this approach is the level of intrusion that the data gathering involves (this is the rationale for using externally visible information). The concern is that if internal systems are scanned, probed, connected to and interrogated directly, this could put a load on the network and cause other problems – perhaps disrupting activities or triggering security controls designed to detect vulnerability probes or scans.

However, by interrogating the central management systems that control security, you get an easy, single point of security control data for (in this case) patching and version information, or for backup schedules, malware defences or application usage.

The only exceptions are those systems that fall outside of that umbrella as a result of deliberate exclusion for operational reasons.

Continuous security assurance

To measure cyber risk on a continuous (rather than one-off) basis, you need information that is accurate, complete and trustworthy, and you need to be able to collect it from single points of reference rather than interrogating every individual device with a noisy scanning solution. You need to be able to work across network boundaries, enabling complex business units to police themselves and large organisations to monitor their external third-party supply chains. And you must focus on the issues that provide the highest value in security risk terms (such as patching and software/OS versions).
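The single-point-of-reference idea can be sketched in a few lines: rather than scanning endpoints, pull an inventory report from the central management system and compute the control metric from it. The report structure, unit names and figures below are all hypothetical.

```python
from collections import defaultdict

# Hypothetical report pulled from a central endpoint-management system:
# one row per device, with a flag for whether its OS is current.
management_report = [
    {"unit": "Finance",     "device": "WS-001", "os_current": True},
    {"unit": "Finance",     "device": "WS-002", "os_current": True},
    {"unit": "Finance",     "device": "WS-003", "os_current": False},
    {"unit": "Engineering", "device": "WS-101", "os_current": True},
]

def compliance_by_unit(report):
    """Return the fraction of devices with a current OS, per business unit."""
    totals = defaultdict(lambda: [0, 0])  # unit -> [current, total]
    for row in report:
        totals[row["unit"]][1] += 1
        if row["os_current"]:
            totals[row["unit"]][0] += 1
    return {unit: current / total for unit, (current, total) in totals.items()}

print(compliance_by_unit(management_report))
```

Run on a schedule against the same authoritative source, a metric like this gives the continuous, per-unit view the article argues for – and lets complex business units police their own figures.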

Relying on external repositories derived from assumption-based datasets might seem easy, but it is a sure way to get flawed data. It’s not quite guesswork, but it risks providing a view – one upon which decisions about risk are made – that is not valid and hence unsafe.

If the choice is a questionable external view or an internal control assessment, then data gathered from within the network itself will always be closer to the truth and the better source of information to use.

By objectively measuring cyber risk from the ‘inside-out’, operations and management teams can reliably verify their security posture and even manage it as part of an enterprise-wide improvement programme.


Piers Wilson is Head of Product Management at Huntsman Security.  You can find more information on taking control of your security posture here.

Main image courtesy of iStockPhoto.com
