What does CVE stand for? How do you fix one?

In Information Security and Information Technology, CVE stands for Common Vulnerabilities and Exposures. It is a standard identifier for tracking vulnerabilities in computer software. I’ve only deployed updates to fix about 800,000 of them, but that experience taught me a little bit about working with them.

The CVE database is maintained by MITRE, and there are about 100 CVE Numbering Authorities (CNAs) who assign them. The CVEs themselves don’t include a lot of detail, but they serve the purpose of providing a common identifier that vendors and security professionals can use to track each unique security flaw.

What a CVE looks like

CVE stands for Common Vulnerabilities and Exposures. Admittedly, MITRE’s page for a CVE isn’t much to look at. It generally contains the CVE ID, a brief description, and a few links, and it’s totally Web 1.0.

A CVE is a tracking number, consisting of the string CVE, the year, and a sequence number. In the good old days, that sequence number was four digits. In 2017, we ran out of four-digit numbers. Yes, that means more than 10,000 new vulnerabilities are discovered every year now. The year gives you a way to track the approximate age of the vulnerability. However, it refers to the year the vulnerability was reported, not the year it was publicly disclosed. So in January and February of one year, you may still be dealing with newly disclosed vulnerabilities with CVE numbers from the previous year.
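To make that format concrete, here’s a minimal sketch in Python that splits a CVE ID into its year and sequence number. The regex is my own illustration, not anything from an official specification.

```python
import re

# A CVE ID is the literal string "CVE", a four-digit year, and a sequence
# number of four or more digits, separated by hyphens.
CVE_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve_id(cve_id):
    """Return (year, sequence) for a CVE ID, or None if it doesn't match."""
    match = CVE_PATTERN.match(cve_id.strip().upper())
    if not match:
        return None
    return int(match.group(1)), int(match.group(2))

print(parse_cve_id("CVE-2019-0559"))     # (2019, 559)
print(parse_cve_id("CVE-2017-1000253"))  # (2017, 1000253) -- more than four digits
```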

MITRE’s descriptions contain very little technical information. They consist of the CVE ID, a brief description that’s generally about 2 sentences, and a list of references that isn’t guaranteed to be complete. MITRE’s CVE entry will also usually contain a link to the National Vulnerability Database, which tends to be much more complete.
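If you want to pull that richer detail programmatically, NVD exposes a public REST API. Here’s a rough sketch against the 2.0 JSON endpoint; the field names in the response reflect my understanding of the schema, so verify them against NVD’s API documentation before building anything on top of this.

```python
import requests

# NVD's CVE API (2.0). An API key isn't required for light use, but
# unauthenticated requests are rate-limited.
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id):
    response = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    response.raise_for_status()
    data = response.json()
    # Field names below are based on the 2.0 schema; double-check against NVD's docs.
    vulns = data.get("vulnerabilities", [])
    return vulns[0]["cve"] if vulns else None

cve = fetch_cve("CVE-2019-0559")
if cve:
    descriptions = [d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"]
    print(descriptions[0] if descriptions else "(no description)")
    print(len(cve.get("references", [])), "references")
```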

CVEs as status symbols

Sometimes you’ll see security professionals challenge each other on Twitter, asking how many CVEs they have. In that context, they’re asking how many vulnerabilities the other person personally discovered in software and reported to the vendor, resulting in the creation of a new CVE.

I stay out of those kinds of discussions. While discovering new CVEs is necessary work, fixing the CVEs that already exist is also important work, and there aren’t many people talking about that.

Remediating CVEs

It’s one thing to know what CVE stands for, and another to translate it into something actionable. The key thing to remember when remediating CVEs is that CVEs aren’t patches. Security tends to think in terms of vulnerabilities, while system administrators think in terms of patches or updates. And the tools they use reflect that.

One patch often fixes several vulnerabilities. I’ve been in many meetings (and heard of many more) where security makes a statement about how many vulnerabilities a system has, and the IT team disagrees, because their tool reports a different number of missing patches or updates. Because the numbers don’t match, it’s easy for the teams to talk past each other.

You can see this when you look at the raw scan results. When you look in either the solution or the vendor reference column (depending on your tool), you can see that vendors don’t track their patches by CVE. If you pivot or de-duplicate on the CVE column, then on the solution/vendor reference column, the numbers aren’t going to match. Qualys’ solution column works a little better for this than Tenable’s vendor reference column, since Qualys usually gives a single item versus Tenable’s wall of text, but with Tenable data, you can at least get the idea.

Let’s take CVE-2019-0559 as an example. It’s a random vulnerability in Microsoft Office. If you read Tenable’s solution column, you’ll see there are seven different KB articles addressing that single CVE. And most patching tools speak in KB articles, not CVEs. At least the ones made by someone not named Ivanti.
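If you want to see the mismatch in your own data, you can count both ways from a raw scan export. Here’s a hedged sketch using pandas; the CVE and Solution column names are assumptions, since Tenable and Qualys label their export columns differently, so adjust them to match your tool.

```python
import pandas as pd

# Load a raw scan export. Column names vary by scanner; adjust to match
# your tool's CSV (Tenable and Qualys both differ from what's shown here).
findings = pd.read_csv("scan_export.csv")

# Some rows list multiple CVEs in one cell, typically comma-separated.
cves = (
    findings["CVE"]
    .dropna()
    .str.split(",")
    .explode()
    .str.strip()
)

unique_cves = cves.nunique()
unique_solutions = findings["Solution"].dropna().nunique()

print(f"Unique CVEs:      {unique_cves}")
print(f"Unique solutions: {unique_solutions}")
# The two numbers almost never match, which is why security and IT
# can both be "right" while quoting different counts.
```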

Being an effective vulnerability management or patch management professional requires standing in the gap and figuring out what the CVE requires you to do.

How I fixed 800,000 CVEs in my sysadmin career

I estimate I fixed somewhere between 800,000 and a million CVEs in my sysadmin career. I have to estimate because we had bad tools back then and the raw data no longer exists for me to analyze, not to mention the little problem that I don’t work there anymore. But based on the size of the network and the software we had installed, I know I fixed somewhere just shy of a million, and no fewer than 800,000. So I claim 800,000.

It was a rocky start. When Patch Tuesday rolled around, I had somewhere between 21 and 90 days to fix every single vulnerability. The deadlines seemed to be picked at random, and the security team didn’t seem to pay much attention to them anyway. If a patch was missing, they wanted it deployed yesterday, even if the patch didn’t exist yesterday. My direct management couldn’t agree on when to fix things either; their answers varied from yesterday to one minute before the deadline and not a minute before, neither of which was realistic.

I had to fix the vulnerabilities and report what I’d done in very specific ways, which also could change without notice. If I used the wrong cover sheet on my TPS Report, the system was still vulnerable.

Nothing about it was reasonable, but it was a recession, and no one else was hiring.

Learning to read vulnerability scan results

The first thing I had to do was learn how to read vulnerability scan results, because no one would tell me what it was they wanted. Security used three different tools, which was also a problem, but at least all of them would tell me why a finding was flagged and provide a vendor reference of some kind. In those days, Windows logged every update in a plaintext file, so I could tell in seconds whether anyone had even tried to deploy a given update to the system. I could also check the scan results for the problem file or registry key and compare that with what was actually on the system and with the log file.
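Here’s a minimal sketch of that kind of check, assuming the old plaintext log at C:\Windows\WindowsUpdate.log (newer versions of Windows no longer write that file directly). The KB numbers are placeholders standing in for whatever your scan results reference.

```python
from pathlib import Path

# On older versions of Windows, every update attempt was written to this
# plaintext log. Newer versions use ETW instead, so this path is an assumption.
UPDATE_LOG = Path(r"C:\Windows\WindowsUpdate.log")

def update_was_attempted(kb_number):
    """Return True if the given KB (e.g. 'KB4480962') appears in the update log."""
    if not UPDATE_LOG.exists():
        return False
    text = UPDATE_LOG.read_text(errors="ignore")
    return kb_number.upper() in text.upper()

# Compare what the scanner says is missing against what the log shows.
for kb in ["KB4480962", "KB4461627"]:  # placeholder KB numbers from a scan result
    status = "attempted" if update_was_attempted(kb) else "never attempted"
    print(f"{kb}: {status}")
```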

Armed with this knowledge, I could prove a finding was a false positive, or I could find an incomplete patch and fix it. False positive rates varied depending on which tool they used. Only one of the three tools they used, Nessus, is still on the market, which says something about the quality of the other two.

I call this patching like a brain surgeon. It’s not something just any sysadmin can do, MCSE or no MCSE. I was an accomplished sysadmin and fully capable of forcing those files to update without breaking other stuff, but it was time-consuming.

Patching like a caveman

I solved this problem by adopting a different strategy most of the time. I could just blast out patches pretty easily. At first I had to use batch files, but eventually I was able to get my employer to procure a proper patching tool, which greatly improved my efficiency and my initial success rate. Picking a group of systems and blasting out the missing updates to them works somewhere between 80 and 99 percent of the time. When I had to use Microsoft tools, the success rate was 80 percent. When I could use Shavlik tools, the success rate was closer to 99 percent. I can’t comment on today’s tools, but Ivanti owns Shavlik now. If I were going to go back to pushing patches for a living, I’d take a long look at Ivanti.

I worked on a 30-day cycle, since that was the best compromise between all of my conflicting requirements. I disallowed known problematic updates, then blasted out all the rest.

Then I rebooted during my maintenance window and looked for failures. When an update failed, I gave the patching tool one or two more shots at it. When that didn’t work, I looked at the logs and took the brain surgeon approach.
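In rough pseudocode, the monthly cycle looked something like the sketch below. The deploy_update and is_installed functions are hypothetical stand-ins for whatever your patching tool exposes, and the blocklist holds whatever known-problematic updates you’ve decided to hold back.

```python
# A rough sketch of the monthly "caveman first, brain surgeon second" cycle.
# deploy_update() and is_installed() are hypothetical stand-ins for your
# patching tool's API or CLI; they are not real library calls.

MAX_RETRIES = 2
BLOCKLIST = {"KB0000000"}  # known problematic updates, placeholder entry

def patch_cycle(systems, missing_updates, deploy_update, is_installed):
    needs_surgeon = []  # (system, update) pairs for manual follow-up
    for system in systems:
        for update in missing_updates.get(system, []):
            if update in BLOCKLIST:
                continue  # hold back known-bad updates this cycle
            attempts = 0
            while not is_installed(system, update) and attempts <= MAX_RETRIES:
                deploy_update(system, update)
                attempts += 1
            if not is_installed(system, update):
                # The blast-out approach failed; time to read logs by hand.
                needs_surgeon.append((system, update))
    return needs_surgeon
```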

Using this hybrid approach, I had a fully up-to-date network within 30 days of Patch Tuesday. What about updates I couldn’t deploy? That happened a lot; a Java update that a vendor didn’t support yet was a common example. In those cases I got a risk acceptance. If the risk acceptance was for 14 days, so be it.

Standing in the gap and fixing CVEs with Ivanti tools

Ivanti came into being after I moved into security, so I don’t have first-hand operational experience with their tools. But Ivanti lets you do something cool: you can import a list of CVEs into their tools, and they will tell you what to deploy in order to fix them.
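I haven’t used that import feature myself, so check Ivanti’s documentation for the exact file format it expects. But producing a de-duplicated CVE list from a scan export is straightforward. Here’s a sketch, again assuming a CVE column name in the export.

```python
import csv

# Build a de-duplicated list of CVE IDs from a raw scan export. The "CVE"
# column name is an assumption; adjust it to match your scanner's CSV, and
# check Ivanti's documentation for the import format it actually expects.
cve_ids = set()
with open("scan_export.csv", newline="") as handle:
    for row in csv.DictReader(handle):
        for cve in (row.get("CVE") or "").split(","):
            cve = cve.strip()
            if cve.startswith("CVE-"):
                cve_ids.add(cve)

with open("cves_to_import.txt", "w") as out:
    out.write("\n".join(sorted(cve_ids)) + "\n")

print(f"Wrote {len(cve_ids)} unique CVE IDs")
```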

If you can’t get someone to stand in the gap between security and IT when it comes to CVEs, it might make sense to buy a tool that does it for you. Even if you can, having a tool that speaks the same language as your security team might be beneficial.

If you found this post informative or helpful, please share it!