Kenna is a revolutionary vulnerability management tool. It completely changed my approach to vulnerability management. But it can be hard to get used to. The most maddening thing about it is how you can deploy an update, and then your Kenna score increases. That’s not the outcome you wanted. Here’s why patching can make your Kenna score go up instead of down, and what to do about it.
Kenna’s math is tricky, but the thing to remember is the risk score isn’t exactly an average. Once you deploy enough patches for high-risk vulnerabilities, your risk score will start to drop as expected. The key is sticking with it long enough for the score to drop.
How Kenna calculates risk score
Kenna doesn’t disclose the math behind its risk score, but as best I can tell, it derives the score from the average of the highest-risk vulnerability on each system. Zeroes don’t count, though: Kenna doesn’t factor your clean systems into that average.
That may seem unfair, but including them wouldn’t be fair either. It doesn’t take many zeros in an average to skew the score and make a network look less risky than it really is. Kenna’s job is to drive remediation, not to encourage complacency, which is why it calculates the score the way it does.
So, Kenna calculates the risk score of an asset by taking its highest-scored vulnerability and multiplying it by 10.
The risk score of any group of systems appears to be the average of the non-zero systems, rounded off to the nearest 10.
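To make the math concrete, here is a minimal sketch of the scoring as described above. This reflects my reverse-engineered reading of Kenna’s behavior, not a published formula, and the function names are my own.

```python
# Sketch of the observed Kenna scoring math (an assumption, not Kenna's
# published algorithm). Each asset is a list of its vulnerability scores
# on Kenna's 0-100 scale.

def asset_risk_score(vuln_scores):
    """Asset score: its highest-scored vulnerability, multiplied by 10."""
    return max(vuln_scores, default=0) * 10

def group_risk_score(assets):
    """Group score: average of the non-zero asset scores,
    rounded to the nearest 10. Clean systems are excluded."""
    scores = [asset_risk_score(v) for v in assets]
    nonzero = [s for s in scores if s > 0]
    if not nonzero:
        return 0
    return round(sum(nonzero) / len(nonzero) / 10) * 10

# Two systems: one with a top vulnerability of 30, one with 80.
print(group_risk_score([[30], [80]]))  # average of 300 and 800 -> 550

# Patch the lower-risk system clean. It drops out of the average,
# leaving only the 800-point system -- so the group score goes UP.
print(group_risk_score([[], [80]]))    # -> 800
```

This excluded-zeros behavior is exactly why cleaning up your easiest systems first can push the group score higher even while your vulnerability counts fall.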
Why patching can make your Kenna score increase
I gave a client a list of four things I wanted them to do. It will probably sound familiar. Patch Microsoft Office, Adobe products, web browsers, and the operating system. I can probably give this advice to any client; the difference would be what order the first three items go in. Typically web browsers and Microsoft Office would switch places.
They took my advice and rolled out the newest updates for each to some pilot systems. Their overall vulnerability count plummeted, and so did their number of vulnerable systems. But their risk score went up, and they were anything but happy about that. Remember the math: the pilot systems they cleaned dropped out of the average entirely, and the systems still waiting for patches had higher top scores, so the average of the remaining non-zero systems rose even as the counts fell.
Don’t get discouraged if your Kenna score increases early on
I once had someone ask me for a list of vulnerabilities to fix. Then they told me they’d talk to me again in three months. They told me not to scan their systems or anything else, just to let them know in three months how they did.
It’s an unconventional approach. I didn’t think it would work, but I humored them. Three months later, I scanned their network and let Kenna pull in the results. They had deployed the updates I asked them to, with a success rate of about 90 percent. They fixed well over 60 percent of the vulnerabilities in their network, and their Kenna score dropped into the 300s.
And this was a good-sized company too, with around 5,000 employees across its enterprise.
Their approach was unconventional, but looking back, there was a lot of wisdom in not looking at the statistics while they undertook this project. They asked me what to do, believed it would work, and then they did it.
Now that patching is a routine part of operations, not a project, they look at how they’re doing every month. Sometimes their score jumps a bit, but never by a lot, and it always comes back down, and it’s always quite a bit better than industry average. Looking at their Kenna scores every month is like Barry Bonds watching his home runs. Yes, it’s a moonshot, but you want to know how big of a mark it left.
But what if you have to look during that initial push? Most do. And I can tell you I’ve never seen a risk score go up two months in a row if you’re patching successfully and you’re pushing enough updates.
The problem with vulnerability metrics
The biggest problem with vulnerability statistics is that there’s no single metric that tells you everything. You need several, and it’s hard to find consensus on which ones you need.
Eventually you get to a point where your patching program works and all the statistics look good, so then no one cares which ones you use. In the meantime you need several, and some may trend the wrong way sometimes. Kenna collects a ton of metrics and tells you where your biggest problems are, but you still need a reasonably experienced analyst to look at Kenna’s output and tell you if what you’re doing is working. Finding and keeping a good vulnerability analyst is difficult, so I recommend looking into an MSSP.
What a sustainable approach looks like
Kenna tells you in its sales literature that fixing vulnerabilities with a score of 66 or higher gives you double the benefit of fixing vulnerabilities with a CVSS score of 7 or higher, for half the work. That’s because you’re fixing the vulnerabilities that people are actually using in breaches, rather than the vulnerabilities that have the potential to be useful for breaches. Potential doesn’t always pan out. Just ask Bo Jackson.
Every patch cycle, collect the list of vulnerabilities with a Kenna score of 66 or higher and have your patching team deploy those updates. If your success rate is anywhere close to 90 percent, your Kenna score is virtually guaranteed to land below 650, and more likely well below it. The math is unpredictable, but every single time I’ve seen someone take this approach with a reasonable rate of success (85 percent or better), they’ve ended up with a Kenna score better than the industry average, which is usually around 525.
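The monthly triage step can be sketched in a few lines. The data shape here is hypothetical (a plain list of dicts, as you might build from a Kenna export); the field names and the 66-point threshold reflect this article’s approach, not any official Kenna API.

```python
# Minimal sketch of the patch-cycle triage described above.
# The dict fields ("cve", "kenna_score") are illustrative assumptions
# about an export format, not Kenna's actual API schema.

KENNA_THRESHOLD = 66  # prioritize what attackers actually use in breaches

def patch_list(vulns):
    """Return vulnerabilities at or above the threshold, worst first,
    as the work list for the patching team."""
    hits = [v for v in vulns if v["kenna_score"] >= KENNA_THRESHOLD]
    return sorted(hits, key=lambda v: v["kenna_score"], reverse=True)

vulns = [
    {"cve": "CVE-2021-0001", "kenna_score": 92},
    {"cve": "CVE-2021-0002", "kenna_score": 40},
    {"cve": "CVE-2021-0003", "kenna_score": 66},
]

for v in patch_list(vulns):
    print(v["cve"], v["kenna_score"])
```

Running this each cycle, rather than chasing every CVSS 7+ finding, keeps the work list focused on the vulnerabilities that are actually being exploited.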