Vulnerability management best practices

Last Updated on September 5, 2019 by Dave Farquhar

As a vulnerability management professional, I talk about vulnerability management best practices a lot. It comes up in sales presentations. I talk about it when my phone rings and a former colleague just needs to talk. But based on my experience, not many companies do vulnerability management well. If you’re not happy with your vulnerability management program, here are some best practices to help you get the results you want.

Vulnerability management best practices: The goal

Vulnerability management best practices dictate scanning your entire network on a regular basis and then remediating the findings in a timely manner. Image credit: Scoobah36 [CC BY-SA 3.0] via Wikimedia Commons

There’s a classic business book out there called The Goal. It’s all about a company that lost sight of why it existed. The company thought it existed to fill factories up with robots and make cool stuff with them. The book’s pivotal moment is when the protagonist’s mentor tells him companies exist to make money.

What does this have to do with vulnerability management? Most struggling vulnerability management programs aren’t chasing the right goal.

I won’t leave you hanging. The goal of vulnerability management is seeing to it that your vulnerabilities are being fixed quickly enough.

That’s it. Get that and the other cool stuff becomes automatic, or a whole lot easier. Or, perhaps, the other stuff becomes less necessary.

But how fast is fast enough? Great question.

Finding your organization’s risk tolerance

Once upon a time, a company hired a new CISO from a company in a different city and a different industry to get a fresh perspective. The name of the company and the reason are unimportant. One of the first things he did was go around to the other VPs in the company and ask them how quickly they wanted vulnerabilities fixed.

Depending on how bad the vulnerability was, they came up with a range of 7 to 28 days.

That’s doable but difficult. Back when I pushed patches for a living, I generally had 30 days. The problem was, I had about 500 systems to patch in 30 days. This company had a few tens of thousands of systems to patch in 7-28 days and about five people to do it.

I was good. Just ask me. But seriously, I probably could have patched 2,000 systems a month if you didn’t require a 100% success rate. But ten thousand? My success rate would have dropped even more, and I wouldn’t have stayed five years.

A better way to get your organization’s risk tolerance is to ask each VP how quickly they want vulnerabilities fixed, with the caveat that their own people will be doing the work. Then revisit it a few months later.

Maybe they say one month, but it’s taking six months to get them patched. If they’re OK with six months, then the organization’s risk tolerance is six months and it’s time to stop pretending the risk tolerance is one month, or worse yet, a week.

Knowing whether you’re patching fast enough is perhaps the most critical of vulnerability management best practices.

Scan your whole network

You may not be able to scan your whole network every time, but “because we’re nervous” isn’t a good reason to skip parts of it. You should, at the very least, perform an OS discovery scan against the entire RFC 1918 space and find out what’s out there. If Bob’s been stealing decommissioned desktop computers and using them to run an ad hoc datacenter under his desk, you need to find it and make sure someone is patching it.
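
If it helps to picture it, here’s a bare-bones sketch in Python of that kind of discovery sweep, driving nmap across the RFC 1918 ranges. It assumes nmap is installed and on the PATH, it only does host discovery (OS fingerprinting with nmap’s -O option needs root and takes much longer), and in practice you’d break 10.0.0.0/8 into smaller chunks rather than sweep it in one shot. Your commercial scanner’s discovery scan does the same job with better reporting.

```python
# Minimal sketch of a host-discovery sweep over the RFC 1918 ranges.
# Assumes nmap is installed; feed the results into your asset inventory
# rather than just printing counts as this toy example does.
import subprocess

RFC1918_RANGES = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]

for network in RFC1918_RANGES:
    # -sn: host discovery only (no port scan); -oG -: greppable output to stdout
    result = subprocess.run(
        ["nmap", "-sn", "-oG", "-", network],
        capture_output=True, text=True, check=False,
    )
    live_hosts = [
        line.split()[1]
        for line in result.stdout.splitlines()
        if line.startswith("Host:") and "Status: Up" in line
    ]
    print(f"{network}: {len(live_hosts)} hosts responded")
```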

And no, I’m not kidding about decommissioned desktops embarking on a second career as servers under someone’s desk. It happens a lot. What you and I call Cowboy IT or Shadow IT, others call cutting through red tape. If you can’t stop it, you need to keep it from getting you breached.

Mainstream computing boxes hold up fine to being scanned. You typically run into problems on older Unix boxes running custom apps that listen on high ports. These homegrown apps often respond poorly to OS discovery. The trick is to proceed slowly. When you find Unix systems, probe the common ports first, then find out if they have anything goofy on high ports that might cause a problem. Exclude those ports and carry on. Then you can scan without causing production outages.
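
The same port-exclusion idea works in any scanner’s policy settings; here’s roughly what it looks like with plain nmap, using a made-up host and port list, and the --exclude-ports option available in recent nmap releases.

```python
# Sketch of the "exclude the goofy high ports" step: scan a legacy Unix host's
# full port range but skip the ports its homegrown apps are known to choke on.
# The host address and port list are hypothetical.
import subprocess

host = "10.20.30.40"                 # hypothetical legacy Unix box
fragile_ports = "32000-32010,49152"  # made-up high ports the app team flagged

subprocess.run(
    [
        "nmap", "-sT",               # plain TCP connect scan
        "-p", "1-65535",
        "--exclude-ports", fragile_ports,
        "-oN", f"scan-{host}.txt",
        host,
    ],
    check=False,
)
```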

Do the same thing with embedded systems. You may not find much on boxes running QNX and VxWorks. Scan one representative system and skip the rest to save IP licenses. Do the same thing with your phones. But you do want to run at least a discovery or inventory scan of the whole network and keep track of what’s there. That way if someone leaves a Pwn Plug somewhere, you find it.

Authenticated scans: the king of vulnerability management best practices

Always do authenticated scans when you can. It’s easier on the system because your scanner can just check file signatures or versions rather than probing ports. It’s also more accurate, since a port can lie about what’s running on it. The file doesn’t lie. Authenticated scans can find patches that fell off, or partially failed, so you can fix them.

It’s a milestone in your vulnerability management program when your unauthenticated scans start turning up more findings than your authenticated scans. Authenticated scans check far more and have fewer false positives, so early on they dominate the totals. When your patch management program is working, the authenticated findings dwindle, and what’s left in the unauthenticated scans is mostly informational findings or false positives. That’s a good thing.
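
One way to watch for that milestone is to tally the two kinds of scans side by side. Here’s a minimal sketch that counts findings by severity from two CSV exports; the filenames and the Severity column name are assumptions, so adjust them to whatever your scanner produces.

```python
# Compare finding counts from an authenticated and an unauthenticated scan export.
import csv
from collections import Counter

def severity_counts(path):
    """Tally findings by severity from a scanner CSV export."""
    with open(path, newline="") as f:
        return Counter(row["Severity"] for row in csv.DictReader(f))

# Hypothetical export filenames; "Severity" is also an assumed column name.
auth = severity_counts("authenticated_scan.csv")
unauth = severity_counts("unauthenticated_scan.csv")
print("authenticated: ", dict(auth))
print("unauthenticated:", dict(unauth))
```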

What about devices that aren’t safe to scan?

Some devices just fall over if you scan them. Medical devices, SCADA, and ICS gear are the usual offenders. In that case, put a passive vulnerability scanner on that network segment. This locks you in to Tenable, as none of the other vulnerability scanners have this ability. But a passive scanner lets you see what’s talking on the network without interacting with it directly, and it builds a running inventory of live systems and open ports from what’s in the network traffic.

More on vulnerability scanning

Vulnerability scanning is almost a subpractice unto itself within vulnerability management. Here are some more vulnerability scanning best practices.

Know the rules

If you’re in a regulated industry, the industry might dictate a certain risk tolerance for you. PCI DSS, for example, which you have to comply with to process credit cards, gives you 90 days for high-severity vulnerabilities.

If your VPs say six months and your compliance team says three, your compliance team wins. If you’re getting it done with time to spare, you’re good. What if you’re not getting it done with time to spare? It’s time for a conversation. You may need additional staff and you may need more tools. That’s OK. Don’t try to build the Great Pyramids with a spoon.
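
If you want a simple way to see whether you’re getting it done with time to spare, a quick script against a scanner export will do. This sketch flags high-severity findings older than the 90-day window; the findings.csv filename and the Severity, First Detected, Plugin Name, and IP column names are assumptions, so map them to your own export format.

```python
# Quick-and-dirty SLA check against a scanner CSV export.
import csv
from datetime import datetime

SLA_DAYS = 90   # the PCI window discussed above; tighten it to your own number
today = datetime.now()

with open("findings.csv", newline="") as f:   # hypothetical export file
    for row in csv.DictReader(f):
        if row["Severity"] not in ("High", "Critical"):
            continue
        first_seen = datetime.fromisoformat(row["First Detected"])  # e.g. 2019-06-01
        age_days = (today - first_seen).days
        if age_days > SLA_DAYS:
            print(f"OVERDUE by {age_days - SLA_DAYS} days: "
                  f"{row['Plugin Name']} on {row['IP']}")
```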

Know the risky vulnerabilities

Not all vulnerabilities are created equal. This is something I struggled with for years, but it’s critical. Some exploits work reliably. Some don’t. If a vulnerability doesn’t have a reliable exploit, don’t run yourself into the ground trying to get that patch deployed. Save the energy for stuff that’s actually useful to a bad guy.

I think you should deploy every applicable patch to your network every month. But if a particular patch doesn’t have a reliable exploit and your success rate is only 40 percent, it’s probably OK. Forty isn’t zero.

Any vulnerability scanner will tell you whether a particular vulnerability has exploits available. To get information about how reliable those exploits are, you’ll have to pay extra or get another product. Qualys and Tenable both offer threat information at additional cost. Alternatively, you can buy Kenna to get that information.
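
However you source the exploit intelligence, the triage step itself is simple. Here’s a hedged sketch that pulls the findings flagged as having a public exploit out of a CSV export and sorts them by CVSS score; the Exploit Available and CVSS column names are placeholders, so check what your scanner actually exports.

```python
# Triage by exploitability: surface the findings a bad guy could actually use.
import csv

with open("findings.csv", newline="") as f:   # hypothetical scanner export
    findings = list(csv.DictReader(f))

# "Exploit Available" and "CVSS" are placeholder column names.
exploitable = [
    r for r in findings
    if r.get("Exploit Available", "").strip().lower() in ("true", "yes")
]
exploitable.sort(key=lambda r: float(r.get("CVSS") or 0), reverse=True)

print(f"{len(exploitable)} of {len(findings)} findings have a known exploit")
for row in exploitable[:20]:
    print(row.get("Plugin Name", "unknown"), "on", row.get("IP", "unknown"))
```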

Beware of shiny new vulnerabilities

Whenever a new name-brand vulnerability appears, pressure tends to increase from above to fix it. Now.

It’s good to have executive interest in information security. But if you have multiple vulnerabilities more than a year old hanging around in your network, those probably deserve more attention. Especially if one or more of those vulnerabilities is last year’s name-brand vulnerability.

If you hear about a vulnerability in the mainstream media, it’s probably a pretty big deal. And nobody wants to get breached due to a vulnerability that everyone heard about on CNN. That said, the high-profile breaches like Equifax involved boring vulnerabilities that didn’t get any headlines when they were released. The majority of breaches that we find out about involved ordinary, low-profile vulnerabilities that were more than a year old when the breach occurred.

Report on patches, not vulnerabilities

When you need to get a clean scan on a system, don’t send your infrastructure team a CSV of raw vulnerabilities. They will usually do one of two things: ignore it (and you), or look at the file, find one thing wrong in it that they can defend, and use that one problem as justification to ignore the whole thing, and you along with it.

It’s a logical fallacy, but it works, so teams do it.

The way you get things done is by pulling the patch report. Both Qualys and Tenable offer this feature; other vulnerability management tools may or may not. The patch report collapses all of the superseded patches for a vulnerability and reports just the most recent patch that fixes the problems your scanner found. It’s a high-level report that doesn’t give anyone details to nitpick.

When they tell you there’s a patch on the list they already applied, go to the raw CSV and look in the results or plugin output column. That will tell you which file failed to update properly. Get the team used to troubleshooting that way before they come back to you. It will save both of you time.
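
If you want to make that lookup painless, a few lines of Python against the raw export will do it. The patch identifier, filename, and the Solution and Plugin Output column names below are all assumptions for illustration; Tenable and Qualys exports use their own labels.

```python
# Pull the plugin output for findings tied to a patch the team says they applied.
import csv

PATCH_ID = "KB5005565"   # hypothetical patch identifier to investigate

with open("raw_findings.csv", newline="") as f:   # hypothetical raw export
    for row in csv.DictReader(f):
        # "Solution" and "Plugin Output" are placeholder column names.
        if PATCH_ID in row.get("Solution", ""):
            print(row.get("IP", "?"), "-", row.get("Plugin Name", "?"))
            print(row.get("Plugin Output", "(no output captured)"))
            print("-" * 60)
```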

You can use a year’s worth of patch reports against a group of systems to get a rough metric of what your team accomplished in a year. Do that math a month or so before annual review time and hand that to your infrastructure team, and you’ll have a friend. There are some guys at a former employer of mine who will go through a wall for me if I ask them, because I helped them review well.
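
One rough way to compute that metric is to compare consecutive monthly patch report exports and count the host/patch pairs that dropped off between months. The file naming scheme and the IP and Solution column names in this sketch are assumptions.

```python
# Rough yearly remediation metric from twelve monthly patch report exports.
import csv

def patch_items(path):
    """Return the set of (host, patch) pairs in one monthly patch report export."""
    with open(path, newline="") as f:
        return {(row["IP"], row["Solution"]) for row in csv.DictReader(f)}

# Hypothetical file naming scheme.
months = [f"patch-report-2019-{m:02d}.csv" for m in range(1, 13)]

cleared = 0
for prev, curr in zip(months, months[1:]):
    cleared += len(patch_items(prev) - patch_items(curr))

print(f"Roughly {cleared} host/patch items cleared over the year")
```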

Know your critical assets

Some systems are more important than others. It’s hard to get anyone to prioritize them, but someone in your organization has. Talk to your disaster recovery team. Whatever systems your company would rebuild first in the event of losing a datacenter are the critical systems.

You can also use the opposite to your advantage. If you can prove nobody is using a system anymore, get it decommissioned so you don’t have to keep spending the time, effort, and money to patch and scan it.

Don’t forget your workstations

Back when I worked for a vulnerability management software company, customers would tell me they were thinking about not scanning workstations anymore to save money. You need to scan everything, but if you’re going to stop scanning something, it makes more sense to skip the servers, not the workstations. Workstations are constantly under attack, from phishing e-mails and malicious content on web sites. An internal server sitting on a VLAN has it easy.

Understand patch management

My take on vulnerability management isn’t one I hear every day. But what we’re really doing is measuring the effectiveness of patch management. And when patch management doesn’t get the results we need, it helps if we can give some advice.

If you have experience in patch management, to some degree you can coach patch management from the vulnerability management space. If you’ve never done patch management, it’s a lot harder. It’s like me trying to coach basketball. I never played outside of PE or recess, and then, only reluctantly.

There are examples of successful coaches who coached in a sport they didn’t play professionally, but it’s rare. The best coaches are the ones who were good enough to play professionally and make the big leagues but spent 10 years mostly sitting on the bench.

Your vulnerability management program will be more successful if you have at least one person on staff who spent at least a year pushing patches. When you have a bad month, that’s the kind of person who can find out why, and can help figure out how to not have another bad month like it.

Get trained

I cannot overemphasize training and certification when it comes to vulnerability management best practices. Security analysts who get training in their solution do well. Security analysts who don’t get training struggle for months or even years on end. Both Tenable and Qualys offer free, web-based training for their products. You can get through the training in 1-2 days. The tests are about 40 questions long. You should be able to finish the test in an hour and get certified. It’s free, and you get 8 CPEs to apply toward your Security+, CISSP, or any other certification that requires continuing education. I could have gotten a full year’s worth of CPEs just from Tenable training if I’d done it right.

Also, just to see what would happen, I took one of the Tenable tests cold. I used Tenable products in the field a few years ago and I know my way around VM. How bad could I do? Well, I didn’t just fail. I bombed it badly. Training covers the nuances of a product that you won’t pick up on your own. Invest the time in it. Get your staff to invest some time in it. It pays off.

And just so you know, I went and got all the training Tenable offers. A few weeks later, I deployed Tenable.io for the first time. I had to look a few things up, but I had it operational in less than eight working hours. Training works.

Light a fire with attack simulation

Sometimes the way to get management’s attention is to simulate a breach. That’s where a product like Core Insight can help. By demonstrating how an attacker might use the vulnerabilities in your network using a simulation, you can sometimes get the reaction that just talking about vulnerabilities will not.

Frequently when I find a system that hasn’t been patched in years, the argument I hear is that nothing bad has happened, so why fix something that isn’t broken? An attack simulation shows what might have happened last month, or what could happen this month. Arguing over whether something bad has or hasn’t happened is frequently unproductive. But not patching is like not locking your doors. If I don’t lock the doors to my house, I might know if someone comes in while I’m gone. But if they’re careful enough, I won’t.

Vulnerability management best practices say to lock your doors. And apply your patches.

Know when to bring in help

Some companies can spin up an effective vulnerability management program in a couple of years and have the results to prove it. Many can’t. If you don’t have the staff in house to run a vulnerability management program and you haven’t been successful in hiring for it, consider using a managed security services provider (MSSP). An MSSP will build or take over your vulnerability management solution, ensure it’s scanning what it needs to scan, and provide you with monthly reports. They can’t name names, but they can tell you what’s working and what isn’t for other clients, giving you an outsider’s perspective.

Vulnerability management best practices: In conclusion

Vulnerability management is still a relatively new practice, and there aren’t a lot of people with experience in it. That said, following these vulnerability management best practices will help you get the results you want and give your company the protection it needs.
