What CVSS is and how to use it

Last Updated on August 11, 2022 by Dave Farquhar

What is CVSS? CVSS stands for Common Vulnerability Scoring System. It is a method of expressing the relative severity of vulnerabilities compared to one another. It’s a common statistic in computer security, especially in the field of vulnerability management.

There are two versions of CVSS in common use, version 2 and version 3. The major difference is that version 3 allows you to account for environmental factors to adjust the score, but both versions share one significant weakness.

What is CVSS?

All computer code has vulnerabilities. CVSS is the most common way to rate and measure them, but it has limitations. And it seems like no matter the situation, I’m almost always running into one or more of them.

CVSS works on a scale of 0 to 10, with 10 being the most severe. You’ll hear the phrase “CVSS score,” but that’s redundant, since the third letter in the acronym literally stands for “scoring.” I’ll try to avoid using that phrase, but it’s so common it’s easy to slip.

Most vulnerability disclosures include a CVSS value, though I have seen instances where the vulnerability gets announced, and the CVSS follows a few hours or days later. The CVSS is intended to help you decide how quickly to patch the vulnerability, or whether to patch it at all.

If it’s a 10, you need to do something about it. If it has a low score, you may or may not need to do something about it. The key word is may. There are problems with CVSS, but I’ll get to those in a moment.

Using static scores

You can use CVSS as a static score, or you can perform calculations on it. Any vulnerability scanner worth having reports the CVSS associated with each vulnerability, which gives you a quick, all-things-equal comparison of the findings.

There is no polite way to say this, but most organizations are terrible at patching. I’ve gotten in trouble for saying that before, and been told I should recommend industry best practices. The problem is, there are none. Either patching is mandated or it isn’t, and if it’s mandated, you either learn how to do it, or you learn how to cover up when you’re not doing it. The dirty truth is that most organizations are doing well to completely eradicate 25% of the vulnerabilities in their network. The reasons for that are complex, and it’s much easier to deal with it than to try to understand why that is and outperform it.

CVSS provides a quick and dirty way to deal with it. It’s pretty standard for organizations to say they want vulnerabilities with a score of 7-10 fixed, and they’ll look the other way on the rest. The assumption is that vulnerabilities are evenly distributed or on a bell curve, so fixing the 7s through 10s is reasonable.
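That static threshold policy is easy to sketch. The CVE IDs and base scores below are real, but the finding list itself is a hypothetical scanner export, not output from any particular tool:

```python
# Hypothetical scanner export: (CVE ID, CVSS base score).
findings = [
    ("CVE-2021-44228", 10.0),  # Log4Shell
    ("CVE-2014-0160", 5.0),    # Heartbleed (CVSS v2)
    ("CVE-2020-1472", 10.0),   # Zerologon
    ("CVE-2019-0708", 9.8),    # BlueKeep
    ("CVE-2016-2183", 7.5),    # SWEET32
]

THRESHOLD = 7.0  # "fix the 7s through 10s, look the other way on the rest"

# Everything at or above the threshold, worst first.
must_fix = sorted(
    (f for f in findings if f[1] >= THRESHOLD),
    key=lambda f: f[1],
    reverse=True,
)
print(must_fix)
```

Notice that Heartbleed, at 5.0, falls below the cutoff, which is exactly the failure mode this article comes back to.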

Factoring in the environment

CVSSv3 allows you to perform calculations on the base score, including factoring in environmental metrics. Based on the potential for collateral damage, asset criticality, and the target distribution, you can adjust the score up and down. You can also factor in temporal metrics, like the availability of an update and whether the update is official or not, the availability and quality of the associated exploits, and the credibility of the report.
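The temporal part of that calculation can be sketched directly from the CVSS v3.1 specification: the base score is multiplied by factors for exploit code maturity, remediation level, and report confidence, then rounded up to one decimal. The multipliers below come from the v3.1 spec; the rounding here uses a plain ceiling, which can differ from the spec’s float-safe Roundup function in rare floating-point edge cases:

```python
import math

# CVSS v3.1 temporal multipliers ("X" = Not Defined, i.e. no adjustment).
EXPLOIT_CODE_MATURITY = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}
REMEDIATION_LEVEL     = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}
REPORT_CONFIDENCE     = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}

def temporal_score(base, e="X", rl="X", rc="X"):
    """Base score adjusted for exploit maturity, remediation level, and report confidence."""
    raw = base * EXPLOIT_CODE_MATURITY[e] * REMEDIATION_LEVEL[rl] * REPORT_CONFIDENCE[rc]
    return math.ceil(raw * 10) / 10  # CVSS rounds *up* to one decimal place

# A 9.8 with functional exploit code (F) and an official fix available (O)
# drops slightly, to 9.1:
print(temporal_score(9.8, e="F", rl="O", rc="C"))
```

The environmental metrics work the same way in spirit, but they require the asset and data inventories most organizations don’t have.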

This tailors the score to your environment. I’ve worked in both extremes. In the 90s I worked in an environment where every system had a direct connection to the Internet, with no firewall. I’ve also worked in classified environments where the systems had no connection to any other network whatsoever. In the former environment, almost any critical vulnerability, and perhaps certain highs, can become a 10. In the latter, it’s difficult for any given vulnerability to remain a 10. It’s possible, but difficult.

The problem with these factors is that most organizations don’t have the information available to use them properly. You can’t do the environmental calculations when you don’t know how many assets you have, what kind of data they store, and what other systems they talk to. That sounds like something they should know, but it’s rare. I’ve seen companies that had those kinds of lists, but I’m always surprised when I see them. Usually I have to help them build the lists, and that takes a lot of time.

I’ve never seen anyplace outside of the military use these CVSS calculations fully. They just don’t have the data to do it. The best I see are partial calculations, prioritizing vulnerabilities with a high CVSS value and known exploits available.
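That partial calculation is simple to sketch: intersect the scanner’s CVSS values with an exploit-availability feed and keep only what is both severe and practically attackable. The findings, CVE IDs, and exploit flags below are placeholders, not real data:

```python
# Hypothetical merge of scanner output with an exploit-intelligence feed.
# CVE IDs here are deliberately fake placeholders.
findings = [
    {"cve": "CVE-0000-0001", "cvss": 9.8, "exploit_public": True},
    {"cve": "CVE-0000-0002", "cvss": 9.1, "exploit_public": False},
    {"cve": "CVE-0000-0003", "cvss": 7.8, "exploit_public": True},
    {"cve": "CVE-0000-0004", "cvss": 5.3, "exploit_public": True},
]

# High CVSS *and* a known public exploit: a much shorter work queue.
priority = [f["cve"] for f in findings
            if f["cvss"] >= 7.0 and f["exploit_public"]]
print(priority)
```

Note how the 9.1 with no public exploit drops out of the queue while the 7.8 with one stays in, which is the whole point of the partial calculation.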

Problems with CVSS

No scoring system is perfect, and CVSS is no exception to that. I use it, but I use other things as well. Some of the problems with CVSS may be due to lack of understanding rather than a problem with the system itself, but I see these problems frequently enough that I question blind allegiance to CVSS. When someone tells me they’re a firm believer in CVSS, I usually take it as a sign they are either using it incorrectly or not using it to its fullest capability.

The assumption of either a bell curve or even distribution

The biggest problem with CVSS, especially when trying to prioritize work, is the assumption that 10 percent or fewer of your vulnerabilities are 10s, another 10 percent or fewer are 9s, and so on down the scale. Under that assumption, requiring only the CVSS 7s and above to be fixed sounds reasonable.

I don’t see even distribution in the real world. In my experience, 5s, 7s, and 9s are the most common. 1s, 3s, and 6s are the most rare. If I squint a lot I can make it look like a curve, but it’s not a bell curve. It’s too lopsided. Mid-level vulnerabilities are very common, but vulnerabilities with a score of 8-10 are much, much more common than vulnerabilities with a score of 1-3.

What you observe will depend on the operating systems and application software in use in your environment, but in large corporations with a lot of Windows and Linux systems and a lot of Microsoft, Adobe and Oracle software running on top of it, that’s what the distribution looks like. And that describes well over 90 percent of large companies.
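Rather than assuming a distribution, you can tally the one you actually have from a scanner export in a few lines. The scores below are hypothetical, just to show the shape of the check:

```python
from collections import Counter

# Hypothetical CVSS values pulled from a scanner export.
scores = [9.8, 7.5, 5.0, 9.8, 7.8, 5.3, 9.1, 7.5, 5.0, 2.1]

# Bucket by integer part and print a crude histogram, highest bucket first.
histogram = Counter(int(s) for s in scores)
for bucket in sorted(histogram, reverse=True):
    print(f"{bucket:2d}: {'#' * histogram[bucket]}")
```

If your own export looks lopsided like this, with the high buckets as full as the middle ones, the “fix 7 and above” policy is covering far more than the assumed top of a bell curve.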

My favorite example: Heartbleed

Heartbleed has a CVSS v2 value of 5. It scores a 5 because it has no impact whatsoever on the integrity or availability of a system. It does no harm to the system itself, and can’t change the system in any way.

All it does is leak server memory, breaking the confidentiality of SSL and TLS. So we treated it as a critical, and rightly so, even though it was only a CVSS 5.

Risk-based scoring vs CVSS

I advocate using risk-based scoring rather than CVSS. CVSS works pretty well when you have enough data to feed into it so it’s doing dynamic calculations based on your environment, rather than relying on its static scores. But it could still slip up and fail to elevate a Heartbleed above your risk tolerance, where it belongs.

It’s much faster and easier to come in and implement a risk-based approach, using something like Kenna, or even something like Tenable’s VPR or Qualys’ Threat Protect. These tools let you zero in on the vulnerabilities attackers are actually using, which is usually a smaller percentage of your overall vulnerabilities than your CVSS 9s and 10s.

Why is that? Even though CVSS 9s and 10s can do a lot for an attacker, that doesn’t mean the exploits are practical to use. Maybe they aren’t very reliable. Maybe they bluescreen the system too often. Or maybe they just make a lot of noise on the network. The people who hack networks have their favorites, and you can’t make a simple, hard-and-fast rule about what those are. But you can buy a tool that factors threat intelligence into its scoring. If you have the data that lets you use CVSS properly, you can load that into Kenna and fine-tune its recommendations even further. But even if you don’t, you can very quickly find the biggest fires raging around you and get those taken care of, even in the absence of any data other than what IP address ranges you use.
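The idea behind those tools can be sketched with a toy model. To be clear, the weights below are invented purely for illustration; Kenna, VPR, and Threat Protect use proprietary models and live threat feeds, none of which is reproduced here:

```python
def risk_score(cvss, actively_exploited, exploit_weaponized, asset_critical):
    """Toy risk model: CVSS sets a baseline, threat intel and asset value move it.

    All weights are invented for illustration, not taken from any real product.
    """
    score = cvss * 6.0          # CVSS alone tops out at 60 of 100
    if exploit_weaponized:
        score += 15.0           # a reliable public exploit exists
    if actively_exploited:
        score += 25.0           # attackers are using it in the wild right now
    if not asset_critical:
        score *= 0.8            # discount findings on low-value assets
    return min(round(score, 1), 100.0)

# A Heartbleed-like case: modest CVSS, but weaponized and under active attack.
print(risk_score(5.0, actively_exploited=True,
                 exploit_weaponized=True, asset_critical=True))
```

Under these invented weights, a CVSS 5 that is weaponized and actively exploited scores 70, outranking a quiet 9.1 with no known exploit at 54.6, which is exactly the inversion the Heartbleed example calls for.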

Practical advantages of a risk-based approach over CVSS

Sysadmins don’t like to patch. It’s hard work, and typically for little reward. It’s much easier to social engineer management, including security management, to convince them they don’t have a problem than to fix something.

I once helped a client fix a vulnerability that was part of a contractual requirement. A very determined sysadmin had social engineered both the CIO and the CISO into thinking there wasn’t a problem. Fixing it turned into a cat fight that came down to me having to exploit the vulnerability to prove to the CIO and CISO that the vulnerability had not, in fact, been mitigated, despite the sysadmin’s protestations.

My question at the end of this exercise was simple. Do they want to go through this for 35 percent of the vulnerabilities on their network, or two percent?

If you found this post informative or helpful, please share it!