Windows vs. Linux kernel performance

Last Updated on December 3, 2018 by Dave Farquhar

An anonymous Microsoft developer spilled some juicy opinions about why Windows kernel performance isn’t all it could be and answered some longstanding questions about Windows vs. Linux kernel performance in the process. Although he has recanted much of what he said, some of his insights make a ton of sense.

He started by talking about a subject near and dear to my heart, security.

We started caring about security because pre-SP3 Windows XP was an existential threat to the business.

Very true. And today, Microsoft security is no joke; Windows security is surprisingly good. There are still a few things the NSA won’t use Windows for in place of Unix, but the number of vulnerabilities that appear every month, their severity, and the time it takes for fixes to arrive are all down from the pre-XP SP3 days. I remember those days, and I’m glad they’re over.

I’ve said it once before. I couldn’t believe I said it then, and I can’t believe I’m repeating it now, but I wish Oracle, Adobe, and, yes, Apple would take Microsoft’s reforms to heart.

Then there’s this:

Our low performance is not an existential threat to the business.

And it really was only a threat in the Vista days. It’s the user interface, not the performance, that’s hurting Windows 8 sales. And I’ll add something to that: I don’t think Intel, AMD, Dell, HP, Lenovo, Acer, and the rest of the hardware makers want Windows to perform as well as it possibly could. It would cut into hardware sales. It already does: pretty much any dual-core system from 2005 or later can run any Windows version except Vista well enough to keep the majority of users happy. Throughout the 1990s, by contrast, the useful life expectancy of a PC was consistently about three years. That was part of the business model.

And I’ll argue that Vista’s performance would have been acceptable if $500 PCs could have run it tolerably. In 2006, they couldn’t, so people stuck with XP.

Linux has been different, almost from the very beginning. One of its biggest selling points all along has been that it runs acceptably on machines that can’t run Windows anymore. Hand me a decade-old PC, and I can probably make a decent Linux web server out of it. Fighting Windows’ dominance was an uphill battle, and being free and more secure wasn’t enough. Being free, more secure, and faster isn’t always enough either, but it’s better.

On linux-kernel, if you improve the performance of directory traversal by a consistent 5%, you’re praised and thanked.

Yes, indeed. And if you contribute a patch that gets accepted into the Linux kernel, that’s resume material. At least it used to be; I assume it still is. Improving performance is one way to get a patch accepted and to earn that valuable line item on the resume. I suspect that if a hiring manager saw a line item about a patch accepted into the Windows NT kernel, the response would just be, “Wasn’t that your job?”
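If “directory traversal” sounds abstract, it just means walking a tree of files and directories, the way find or a backup job does. Here’s a minimal sketch of my own (not anything from the kernel source or from this developer) showing what that workload looks like from user space; every readdir() call below ultimately goes through the kernel’s getdents64 path, which is the kind of code such a patch would be speeding up.

/* Illustrative sketch only: a user-space directory walk of the kind a
 * "5% faster directory traversal" kernel patch would speed up. Each
 * readdir() call ends up in the kernel's getdents64 path. */
#define _DEFAULT_SOURCE   /* for d_type / DT_DIR on glibc */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

/* Recursively count entries under path (ignores the DT_UNKNOWN case
 * some filesystems report, which is fine for a sketch). */
static long count_entries(const char *path)
{
    DIR *dir = opendir(path);
    if (!dir)
        return 0;

    long count = 0;
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
            continue;
        count++;
        if (entry->d_type == DT_DIR) {
            char child[4096];
            snprintf(child, sizeof(child), "%s/%s", path, entry->d_name);
            count += count_entries(child);
        }
    }
    closedir(dir);
    return count;
}

int main(int argc, char **argv)
{
    const char *root = argc > 1 ? argv[1] : ".";
    printf("%ld entries under %s\n", count_entries(root), root);
    return 0;
}

Compile that with gcc and point it at a big source tree, and most of its time goes into those system calls, so a consistent 5% kernel-side improvement shows up more or less directly in a job like this.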

For whatever reason, getting something accepted into the Linux kernel has more cachet than getting something accepted into the NT kernel. That’s probably just as well, since, according to this particular insider, it’s harder to get a change into the NT kernel anyway.

So, even though this particular developer seems to regret what he said, virtually everything he said makes sense to me, at least from my sysadmin perspective. Different history, different priorities, different culture. That’s why Windows vs. Linux kernel performance differs.

Is it bad? I think it depends on your point of view, which is why he’s regretting what he said. But, as he said, high performance is part of Linux’s business model, and not-as-high performance is part of Microsoft’s. Is any Microsoft shareholder going to object to Microsoft doing something that worked in the ’90s?

I doubt it. And in the case of Microsoft, the shareholders are the boss.
