Authenticated scan vs unauthenticated

Last Updated on March 18, 2020 by Dave Farquhar

In vulnerability scanning, there’s a big difference in an authenticated scan vs unauthenticated. Here’s why it matters, and why you should almost always go for an authenticated scan. Using authenticated scans is a vulnerability management best practice.

Lots of people misunderstand this. To quote myself about fifteen years ago: “Let me get this straight. I give you an admin account, and then you tell me you were able to log in?” It’s about logging in and assessing what’s wrong, not telling you we got in. Regardless of the tool you use, authenticated scans let the vulnerability scanner do its job better.

Authenticated scan vs unauthenticated: Better accuracy

You will always get better scans if you create an administrative account for your vulnerability scanner to use. Image credit: m lobo/flickr

When you scan without authentication, the vulnerability scanner probes the system. It’s able to tell you a surprising amount, but not everything. It can tell you the broad OS family the system belongs to, and it can tell the difference between, say, NT4, XP, and Vista or newer systems. It will probably find a good number of vulnerabilities, but it may have to flag them as potential, indicating it’s not 100% certain about them.
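
To give a sense of what an unauthenticated scanner has to work with, here’s a minimal sketch, not any vendor’s actual logic, of grabbing a service banner over TCP and guessing a version from whatever the service volunteers. The address, port, and pattern are hypothetical.

```python
import re
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect to a TCP service and read whatever banner it volunteers."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(1024).decode(errors="replace")
        except socket.timeout:
            return ""

# Hypothetical example: guess an SSH version from the banner alone.
banner = grab_banner("192.0.2.10", 22)   # TEST-NET address, illustrative only
match = re.search(r"OpenSSH_([\d.]+)", banner)
if match:
    # The scanner only knows what the service chose to advertise,
    # so at best this is a "potential" finding, not a confirmed one.
    print(f"Service claims OpenSSH {match.group(1)}")
else:
    print("No usable banner; the scanner has to infer from probing behavior")
```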

Authenticated scans prove your patch management program is doing its job, and they provide the data that patch management teams need in order to improve their processes going forward.

The accuracy of an unauthenticated scan is hard to measure, because it misses so much. When it does find something, the accuracy varies based on network conditions. You can safely assume it will be no better than 90%. That sounds good, but consider that authenticated scans are about 99.999997% accurate, at least with Qualys and Tenable. But they conduct around 50,000 checks per host. If you have a network with 50,000 live hosts, you can potentially still have 75 errors. It’s much easier to deal with 75 errors than multiple errors on every host.
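
The arithmetic behind that estimate is simple enough to sanity-check yourself. Here’s a quick back-of-the-envelope calculation using the rounded figures from the paragraph above; they’re illustrative assumptions, not vendor-published constants.

```python
# Back-of-the-envelope error estimate for an authenticated scan.
checks_per_host = 50_000        # approximate number of checks a modern scanner runs
live_hosts = 50_000             # size of the hypothetical network
accuracy = 0.99999997           # claimed accuracy for authenticated scans

total_checks = checks_per_host * live_hosts        # 2.5 billion individual checks
expected_errors = total_checks * (1 - accuracy)    # roughly 75 bad results network-wide

print(f"{total_checks:,} checks, roughly {expected_errors:.0f} expected errors")
```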

Dealing with false positives

One thing I hear over and over is that vulnerability scanners, even Qualys and Tenable, have a lot of false positives. Whoever is saying that usually doesn’t realize how many checks the scanner is performing, and may not realize how many live IP addresses the network has. To someone who doesn’t have the perspective that the scanner has to consider billions of possibilities in a large enterprise, 75 errors sounds like a lot. Especially if they’re the unlucky soul who ends up with the lion’s share of those false positives.

The key is to be gracious and understanding. Ask for the evidence that refutes the finding in the results or plugin output section of the report on that host. If the evidence checks out, mark it as a false positive, open a case with the vendor so they can improve that signature for the next guy, and move on. If the evidence doesn’t check out, explain why. Graciously. They’ll come around to understand the tool and eventually trust it.

Some findings in an unauthenticated scan may be marked as potential. That means there’s some indication the host may be vulnerable but the scanner couldn’t confirm it with certainty. When this happens, run an authenticated scan so the scanner can confirm it. If an authenticated scan isn’t possible, work with the system administrator to confirm or refute it, then mark it as a false positive if you can refute it.

You may get someone who says that because of a given false positive, they don’t believe a single thing the scanner says. That’s a logical fallacy. Each check exists independently of the next. They weren’t all written by the same person, and were written over the course of years or decades. When you check the finding in the results or plugin output section against the system, you’ll find the tool is correct more often than not. And when you do find an error and open a false positive case with the vendor, you’ll generally find they’re very interested in improving the checks.

Authenticated scan vs unauthenticated: Better advice

When you scan with authentication, your scanner examines files rather than fuzzing services. So it can give very specific advice, such as a system being vulnerable to a specific vulnerability based on this particular file being this version.

It is possible to configure services to lie about their version numbers. Some services, like Oracle WebLogic, report only major version numbers and not the minor version. To accurately report what these services are running, it’s necessary to look at the filesystem. One of the reasons I like the Qualys and Tenable solutions is that they examine the filesystem thoroughly to report accurate results, even in the case of anomalies. Qualys claims better than 99.999997% accuracy when you use authenticated scans. In my experience scanning networks with tens of thousands of hosts, that’s been about right.

With an authenticated scan, you can look at the test results (usually labeled either “results” or “plugin output”) to see what evidence the scanner found to flag the condition. It could be a missing registry key or a file that’s the wrong version. Correct that issue, and the finding goes away. These tools find not only missing patches, but partially applied patches as well.
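
To make the file-version idea concrete, here’s a minimal sketch, not any vendor’s detection logic, of the comparison an authenticated check often boils down to: read the version of a file on disk and compare it against the version that shipped the fix. The path and version numbers are hypothetical.

```python
# Minimal sketch of a file-version check, the way an authenticated scanner
# might decide whether a patch is actually in place.

def parse_version(version: str) -> tuple:
    """Turn '10.0.19041.1288' into a tuple so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(installed: str, fixed_in: str) -> bool:
    """Flag the finding if the installed file is older than the fixed version."""
    return parse_version(installed) < parse_version(fixed_in)

# Hypothetical evidence, the kind of thing you'd see in the results
# or plugin output section of a report.
evidence = {
    "file": r"C:\Windows\System32\example.dll",   # hypothetical path
    "installed_version": "10.0.19041.1100",
    "fixed_version": "10.0.19041.1288",
}

if is_vulnerable(evidence["installed_version"], evidence["fixed_version"]):
    print(f"{evidence['file']} is {evidence['installed_version']}; "
          f"the patch supplies {evidence['fixed_version']} -- finding stands")
else:
    print("File is at or above the fixed version -- likely a false positive")
```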

Using an authenticated scan vs unauthenticated not only reduces false positives, but it also reduces false negatives. We don’t know the details of the Equifax breach of 2017, but one of the accusations that flew around in the aftermath was the possibility of a false negative.

Authenticated scan vs unauthenticated: Lower impact

An authenticated scan usually has less impact on a system than an unauthenticated scan. Some services don’t like being probed, and an authenticated scan eliminates the need to probe the service. The vulnerability scanner can just log in, ask the operating system what’s installed, what’s running, and where, check a few files to validate, and move on to the next system.
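
As an illustration of how lightweight that is, the sketch below asks a Linux host for its installed package list, the sort of local query an authenticated check can make instead of probing a service over the network. It assumes a dpkg-based system (Debian or Ubuntu), and the package name at the end is just an example.

```python
import subprocess

def installed_packages() -> dict:
    """Ask dpkg (Debian/Ubuntu) for the installed package list -- a local
    query against the package database, not a network probe."""
    output = subprocess.run(
        ["dpkg-query", "-W", "-f", "${Package} ${Version}\n"],
        capture_output=True, text=True, check=True,
    ).stdout
    packages = {}
    for line in output.splitlines():
        name, _, version = line.partition(" ")
        packages[name] = version
    return packages

# Hypothetical use: confirm whether a specific package is at a patched version.
pkgs = installed_packages()
print(pkgs.get("openssl", "openssl not installed"))
```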

The authenticated scan has less impact on the system and the network because it can simply ask the system questions rather than throwing a bunch of traffic at it and observing the results. That gives you less chance of something weird happening. It may or may not be faster, though, since authentication allows the scanner to do a much larger number of tests. Each test goes faster, but since it can now do tens of thousands of checks (potentially) rather than hundreds, it may spend more time on each system. But the greater accuracy and thoroughness are worth the scan taking longer.

If you found this post informative or helpful, please share it!