What to do when a Microsoft patch won’t install

Every once in a while, when you push patches for a living, you run into a Microsoft patch that won’t install. This is the story of one of those times, and of what I did to fix it.

So, Microsoft KB947742, an old .NET 1.1 fix, refused to install on one of the servers at work. When I ran the executable, all it did was pop up a window listing the Windows Installer switches. Searching Google turned up a number of people having the problem but no solutions that worked, although reinstalling the .NET 1.1 Framework and installing the latest version of the Windows Installer are always good ideas when you run into weird problems. .NET 1.1 is extremely fragile anyway, and reinstalling it along with all applicable hotfixes has worked for me in the past to resolve strange issues, such as permissions errors showing up in the security log, or .NET applications suddenly refusing to run even though they ran just fine the day before.

I tried everything I could think of and finally stumbled on a solution. I have absolutely no idea why it works. First, I opened a command line, changed into the directory where I had stored the patch, and ran the following command:

NDP1.1sp1-kb947742-x86.exe /extract .\947742

This extracts the update to a directory called 947742. Inside that directory, I found a single file, named NDP1.1sp1-kb947742-x86.msp. When I double-clicked on the file from Windows Explorer, it installed.
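If double-clicking the .msp ever fails too, Windows Installer can apply the extracted patch directly; the /p switch applies a patch file, and /qb gives a basic progress bar. I didn’t need this step here, but it’s another arrow in the quiver:

msiexec /p NDP1.1sp1-kb947742-x86.msp /qb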

I’ve applied this patch to more than 100 servers and recall having the problem on only one of them. Oddly, every other .NET patch, and for that matter every other recent Microsoft update, applies to this machine just fine.

I suppose the same fix could work on other Windows updates that only pop up a window full of switches instead of installing, or on other weird installation issues. It’s worth a shot if nothing else works and you can’t (or would rather not) open a support case with Microsoft.

This is a strange case. If you’re running WSUS or (better yet) Shavlik Netchk and a patch refuses to install, try logging into the affected machine, downloading the offending patch, and running it manually, noting any error messages. Maybe, just maybe, this fix will help you. Or better yet, maybe the patch will tell you what you need to fix. But don’t count on it.

When absurdity strikes, try extracting the patch and poking around inside, like I did in this case.

Buffer overflows explained

Buffer overflows are a common topic on the Security+ exam. The textbook explanation of them is confusing, perhaps even wrong. I’ve never seen buffer overflows explained well.

So I’m going to give a simplified example and explanation of a buffer overflow, similar to the one I gave to the instructor, and then to the class.
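The gist is easiest to show in C, since that’s the language where these bugs usually live. Here’s a generic illustration of the classic mistake (a sketch, not necessarily the exact example from class):

#include <stdio.h>
#include <string.h>

void greet(const char *input)
{
    /* 16 bytes on the stack; the function's return address
       lives a little further along in memory */
    char buffer[16];

    /* strcpy() copies until it hits a zero byte. It has no idea
       buffer only holds 16 bytes, so longer input keeps writing
       right past the end--overflowing the buffer. */
    strcpy(buffer, input);
    printf("Hello, %s\n", buffer);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        greet(argv[1]);   /* an attacker controls argv[1] */
    return 0;
}

Feed that program a dozen characters and it behaves. Feed it a few hundred and the extra bytes land on top of whatever sits beyond the buffer, eventually including the return address; craft those bytes carefully and the program jumps to code of the attacker’s choosing. That’s the entire trick.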


Better upgrade advice

PC Magazine has a feature about inexpensive PC upgrades. There’s some good advice there, but some questionable advice too. Since I really did write the book on free and inexpensive upgrades, I’ll present my own advice (but I’ll skip the pretty pictures).

Hard drives

The best upgrade they didn’t mention is replacing the hard drive. I’ve been squeezing extra life out of old systems for years by taking out the aging drives and replacing them with something newer and faster. The trick is figuring out whether the drive is the old-style parallel ATA (with a 40- or 80-conductor cable) or newer SATA. If you can afford it, it makes sense to upgrade to a SATA controller so you can use a more modern drive. Newer drives are almost always faster than older drives, if only because data density keeps increasing. If a drive stores twice as much data in the same linear space as an old one, twice as many bits pass under the read head on every rotation, so it retrieves data (roughly) twice as fast at the same spindle speed (and it may spin faster). You can go all the way up to the 10,000 RPM Western Digital Raptor drives if you want, but even putting a mid-range drive in an old PC will speed it up.

Some people will point out that a new drive may be able to deliver data at a faster rate than an old controller in an old PC can handle. I don’t see that as a problem. There’s no drive on the market that can keep a 133 MB/sec bus saturated 100% of the time, and the old drive certainly wasn’t saturating it either. Even if your older, slower bus is the limiting factor some of the time, you’re still getting the benefit of a newer drive’s faster seek times and faster average transfer rates.

While replacing a hard drive can bust an entire $125 upgrade budget in and of itself, it’s still something I recommend doing. Unless your system is really short on memory or you’re heavily into gaming, the hard drive is the best bang for your upgrade buck.

Memory

The other point I disagree with most strongly is the memory. There’s very little reason anymore to run a system with less than 1 GB of RAM. As a system becomes more obsolete, memory prices go up instead of down, so it makes sense to just install a ton of memory when you’re upgrading it anyway. If you need it later, it will probably cost more.

The caveat here is that it makes very little sense to install 4 GB of RAM, since the Intel x86 architecture reserves a chunk of the 4 GB address space, mostly the region just below the 4 GB mark, for devices and other system use. If you install 4 GB of RAM, you really get more like 3.2 or 3.5 GB of usable memory unless you’re running 64-bit Windows. I don’t recommend going 64-bit yet. When it works, it works well. Unfortunately there’s no way to know if you’ll have good drivers for everything in your system until you try it. I wouldn’t go 64-bit until some popular software arrives that requires (or at least takes really good advantage of) 64 bits. The next version of Photoshop will help, but I think the thing that will really drive 64-bit is when id Software releases a game that needs it. Until then, hardware makers will treat 64-bit Windows as an afterthought.

I usually put 2 GB of RAM in a system if it’ll take that much. If you do a lot of graphics or video work, more is better of course. For routine use, 2 GB is more than adequate, yet affordable. If a system won’t take 2 GB, then it makes sense to install as much as it will take, whether that’s 1 GB or 512 MB. If a system won’t take 512 MB, then it’s old enough that it makes sense to start talking replacement.

Outright replacement

Speaking of that, outright replacement can be a very practical option, especially if a system is getting up in years. My primary system is a 5-year-old office PC. Take a 2-ish GHz P4 or equivalent (current market value: $75-$125), load it up with 2 GB of RAM and a moderately fast hard drive, and you’ll have a better-built system than any $399 budget PC on the market. It will probably run as fast or faster, and it will cost less.

I have two PCs at the office: a 3 GHz Pentium D, and a 2.6 GHz Core Duo. Both have 2 GB of RAM. They theoretically encode MP3s faster than my home PC and would make better gaming PCs than my home PC (ahem), but for the things I do–namely, web browsing, spreadsheets, word processing, e-mail, and the occasional non-3D game–I can’t tell much difference between them. The System Idle Process gets the overwhelming majority of the CPU time on all of them.

Other upgrades

The other things discussed in the article can be worthwhile, but faster network cards won’t help your Internet speed. If you routinely copy huge files between multiple PCs, they help a lot, but how many people really do that on a regular basis?

Fast DVD burners are nice and they’re inexpensive, but if you needed one, you’d know it. If you don’t know what you’d do with one, skip it. Or if you have an older one that you use occasionally, you probably won’t use a faster one any more often.

For $60 you can get a decently fast hard drive, and that will do a lot more for overall system performance than either a network card or DVD burner upgrade.

The video card is a sensible upgrade under two circumstances: If you’re using the integrated video on your motherboard, or if you play 3D games and they feel jerky. If neither of those describes you, skip the video card upgrade.

Free upgrades

The article describes CHKDSK as a “low level defrag.” That’s not what CHKDSK does–it checks your drive for errors and tries to fix them. If your drives are formatted NTFS (and they probably are), routinely running CHKDSK isn’t going to do much for you. If you run CHKDSK routinely and it actually says it’s done something when it finishes, you have bigger problems and what you really need is a new hard drive.
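For the record, if you do want to check a drive, the command-line version is simple enough. Run without switches it’s read-only; the /f switch tells it to fix what it finds (on a drive that’s in use, it will offer to schedule the check for the next reboot):

chkdsk c: /f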

If you want to defragment optimally, download JK-Defrag. It’s free and open source, and it not only does a better job than the utility that comes with Windows, it does a better job than most of the for-pay utilities too.

The first time you run it, I recommend running it from the command line, exactly like this: JkDefrag.exe -a 7 -d 2 -q c:. After that, just run it without any options, about once a month or two. (Running more often than that doesn’t do much good–in fact, the people who defragment their drives once a day or once a week seem to have more problems.) Run it with the options about once a year. Depending on what condition your system is in, the difference in performance after running it ranges from noticeable to stunning.
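If you’d rather not remember the monthly run, Windows can remember for you. Something like this sets up a monthly Scheduled Task (a sketch; the path is wherever you put JkDefrag, and older versions of schtasks want the start time as HH:MM:SS):

schtasks /create /tn "Monthly defrag" /tr "C:\tools\JkDefrag.exe" /sc monthly /d 1 /st 03:00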

Microsoft buys and then discontinues Linux/Unix antivirus products

First GeCAD, now Sybari.

Microsoft has been buying smaller anti-virus firms and discontinuing their Linux and Unix product lines.

Trust, schmust. When your god is Big Business, that means Big Business can do no wrong, so when you’re the U.S. government, you let companies like Microsoft do whatever they want. The problem is that Unix antivirus products are extremely useful, especially in Microsoft shops. Unix viruses are rare, and the heterogeneous nature of Unix–never knowing much about the underlying hardware, binary incompatibilities between various dialects even when running on the same hardware, and never knowing for certain which libraries are installed–creates a hostile environment for viruses anyway.

So what good is a Unix server that detects viruses that can’t survive in Unix anyway? It makes a great buffer between the hostile world and the soft and chewy Windows boxes inside corporate firewalls, that’s what.

I love to put Unix boxes in between the world and mail servers that may be running Windows. Just set it up to relay mail to your Exchange or Domino server, but have it scan the mail first. Better yet, have it running on weird hardware. A slightly elderly Macintosh or Alpha or Sun box works great. Since the Intel x86 instruction set is the most common, most buffer overflows use it. While non-x86 processors aren’t immune to buffer overflows, an overflow using x86 instructions will appear to be gibberish and it won’t run. It’s like telling me a lie in Japanese. You won’t fool me with the lie, because I don’t speak Japanese, so I won’t understand a word you’re saying.
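I didn’t name a mail package above, but to make the idea concrete: with Postfix handing mail to an antivirus scanner through amavisd-new, the relay boils down to a few lines. This is a bare sketch; the domain, internal address, and port are stand-ins for whatever your network uses, and a matching smtp-amavis service entry in master.cf completes the loop.

# /etc/postfix/main.cf
relay_domains  = example.com
transport_maps = hash:/etc/postfix/transport
# hand everything to the scanner before relaying it inside
content_filter = smtp-amavis:[127.0.0.1]:10024

# /etc/postfix/transport
# scanned mail goes on to the internal Exchange or Domino box
example.com    smtp:[192.168.1.10]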

Fortunately, there are still antivirus products for Unix and Linux out there. And once Microsoft establishes its own antivirus product, it will be more difficult–I hope–for it to keep buying antivirus firms and discontinuing their products, since then it would be buying off competitors rather than just acquiring technology it doesn’t have the ability to develop internally.

And even if they do buy and discontinue everything, there’s always ClamAV.

Can Google compete with Paypal?

There are reports in the news today that Google may launch a Paypal-like service. Most are questioning whether Google can compete with Paypal, which boasts 72 million users.

I believe the answer is yes. Here’s why. I buy a lot of stuff on Ebay. Lately I’ve been selling too, and since the initial effort was reasonably successful, I’m going to start listing more things.

I’ll be listing for the same reason lots of people do. It’s funny how much stuff becomes redundant once you get married and your spouse moves in. Selling it on Ebay is cheaper than having a garage sale, and you’ll usually get better prices. And besides, for the past six weeks or so I’ve been a bit shorter on cash than I’d like to be.

Online payment systems work because a lot of people don’t want to mess with checks. It’s a pain to write a check, it’s a pain to cash one, and nobody likes waiting the 7-10 days it takes for one to clear. Money orders and cashier’s checks eliminate the waiting period, but they’re a pain for the buyer, who either has to visit the bank during working hours or hit the ATM and then find a convenience store that sells money orders, paying a couple of dollars either way. It wastes a lot of time. And if you’re buying a $100 item, you probably don’t care about the couple of dollars, but you sure do if you’re paying for a $2 item.

The reason 72 million people use Paypal is that it’s better than dealing with checks or money orders. But that’s not a high bar to clear.

Read through some Ebay listings, though, and you’ll find lots of people who don’t take Paypal. The reasons vary, but the people who don’t like Paypal really don’t like it. Those people tout Western Union or Bidpay as alternatives, but in reality those are just online venues for buying a money order. They save you hopping in the car. Again, on an item whose price runs three or more digits, you probably don’t care. But they’re horrible for small transactions.

Since Paypal is so widely used but so widely disliked, there’s lots of room for a competitor.

From what I can tell, sellers of merchandise don’t like Paypal because it’s free for the buyer, but big-time sellers take a hit. (People like me who sell casually don’t.) The hit seems to vary, but resellers seem to like to tack 60 cents onto the cost of the transaction when I use it. I generally pay it, since 60 cents is a lot less than it would cost for me to use another online payment service or to buy a money order, and it’s not much more than it would cost me to mail a check.

So it seems to me that there are at least two ways for Google to compete. I’m sure they’ve done some market research on what people dislike about Paypal and they’ve looked into what they can do to provide better service. Obviously one approach they could take would be to simply charge less money.

A second possibility would be for Google to endear itself to the seller by placing the financial burden on the buyer. Charge the buyer, say, a percentage of the transaction cost, with a maximum cap somewhere around the cost of a postage stamp. (Say, 2 percent capped at 37 cents: four cents on a $2 item, the full 37 cents on anything over $18.50.) Sellers would gladly accept it if it didn’t cost them anything. Buyers won’t like it as much as Paypal since it’s not free for them, but it would give the instant gratification of Paypal while costing about as much as mailing a check. And besides, it’s the seller who sets the terms of the transaction. If the buyer doesn’t like it, the only choice is not to bid.

I believe that sellers who don’t accept Paypal are putting themselves in the same position as a brick-and-mortar store that doesn’t accept credit cards. That doesn’t stop a lot of them, and sometimes I’ve gotten some real bargains precisely because the seller only accepted money orders.

So I don’t believe Paypal is a juggernaut. It was the first widely successful online payment service. But this field doesn’t give much credit for being first. Just ask Datapoint (inventor of what became the x86 family of processors), Commodore (first successful consumer-level computer to feature pre-emptive multitasking), Digital Research (first popular operating system for microcomputers), or any number of now-defunct pioneers.

I’m not willing to place any bets on whether Google will become the market leader in this arena, especially without having seen their service. But I also don’t think there’s much question as to whether it will survive and/or be profitable. As dissatisfied as the users of other services are, Google Wallet would have to be awfully bad to flop.

Intel inside a Mac?

File this under rumors, even if it comes from the Wall Street Journal: Apple is supposedly considering using Intel processors.

Apple’s probably pulling a Dell. It’s technically feasible for Mac OS X to be recompiled and run on Intel; Nextstep ran on Intel processors after Next abandoned the Motorola 68K family, and Mac OS X is based on Nextstep.

Of course the x86 is nowhere near binary-compatible with the PowerPC CPU family. But Apple has overcome that before; the PowerPC wasn’t compatible with the m68K either. Existing applications won’t run as fast under emulation, but it can be done.

Keeping people from running OS X on their whitebox PCs and even keeping people from running Windows on their Macs is doable too. Apple already knows how. Try installing Mac OS 9 on a brand-new Apple. You can’t. Would Apple allow Windows to run on their hardware but not the other way? Who knows. It would put them in an interesting marketing position.

But I suspect this is just Apple trying to gain negotiating power with IBM Microelectronics. Dell famously invites AMD over to talk and makes sure Intel knows AMD’s been paying a visit. What better way is there for Apple to get new features, better clock rates, and/or better prices from IBM than by flirting with Intel and making sure IBM knows about it?

I won’t rule out a switch, but I wouldn’t count on it either. Apple is selling 3 million computers a year, which sounds puny today, but that’s as many computers as it sold in its glory days, or more. Plus Apple has sources of revenue that it didn’t have 15 years ago. If it could be profitable selling 3 million computers a year in 1990, it can be profitable today, especially considering all of the revenue it can bring in from software (both OS upgrades and applications), Ipods and music.

Well, I’m a Slowlaris administrator now

Let me run down <strike>my list of qualifications</strike> what I know about Solaris.
1. They call it "Slowlaris" because it initially wasn’t as fast on the same hardware as its predecessor, SunOS.
2. I don’t know if Slowlaris 9 is faster than older versions of Slowlaris, so I don’t know if this counts as something I know about it.
3. Slowlaris is based on System V Unix. SunOS was based on BSD.
4. Slowlaris runs primarily on proprietary hardware from Sun, based on a CPU architecture called SPARC. A handful of Sun clones exist, but I think Fujitsu is the only big third-party manufacturer.
5. There is an x86 version of Slowlaris. Sun keeps going back and forth on whether to continue making it, since they don’t make much money off it. It’s being made now. Professional Slowlaris admins argue that its availability makes it easier for up-and-coming admins to learn the OS without buying expensive Sun hardware–they can run it on their six-month-old computer that’s too slow to run Doom 3.
6. "Sun" was originally an acronym for "Stanford University Network."

So most of what I know about Slowlaris is either trivia or holdover generic Unix know-how. But I told my boss that since it’s System V, I should be able to adjust to it almost as easily as I could adjust to a Linux distribution from someone other than Debian. I’ll just be typing --help and grepping around in /etc even more than usual.
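In practice, getting oriented on an unfamiliar System V box comes down to the same handful of commands. Here’s roughly what I expect to be typing (all of these should exist on a stock Solaris 9 system, as far as I know):

uname -a           # kernel release and hardware architecture
cat /etc/release   # the exact Solaris version string
pkginfo | more     # installed packages, SysV style
prstat             # Solaris' answer to top
ls /etc/rc2.d      # what starts at the default runlevel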

Yep, it’s been that kind of <strike>week</strike> month.

Spend your computer money on your monitor, not some hopped-up CPU

I read an editorial at Tom’s Hardware this morning that struck me as a bit unusual. Not only did it not mention Quake once (or Doom or whatever the FPS flavor of the week is today), it didn’t mention overclocking, and it wasn’t especially excited about AMD and Intel’s new CPU releases today.
In fact, it argued that by rushing out and buying those CPUs, all you’re doing is giving AMD and Intel an interest-free loan. You buy the chips now. The apps that need them will come later. And that, he said, is just plain wrong.

And I thought to myself: How is this any different from history? Yes, I’ll concede that every chip from the 486 up to, say, the chips of the gigahertz race was overdue. But let’s face it. When Gatermann’s dad needed a computer, we tracked down a used Dell P2-450. When a mutual friend’s sister went off to college, we tracked down another off-lease Dell, added a CD burner, and sent her on her way. If you know how to set a computer up right, it’s entirely possible to be plenty productive on a P2.

And the majority of people are mainly interested in using a computer to surf the Web, read e-mail, do some word processing, listen to MP3s and burn music CDs. For tasks like that, a P2 is, frankly, overkill.

When the first 386 PCs appeared in 1986, they were overkill. People were content with their 4.77 MHz XTs. Some of them had just gotten 6 or 8 MHz ATs, which were themselves overkill. Everyone seems to think the x86 series debuted in 1981. It didn’t. Intel released the 8086, after which the family is named, in 1978, and the 8088 that IBM put in the first PC followed in 1979. The family waited years for mainstream use!

This industry has always been built with the bucks from the early adopters and enthusiasts. Always. And if you don’t want to play, nobody’s making you. I haven’t ordered my Athlon 64 yet.

It’s never made sense for me to be the first one on my block with the hottest new CPU. The same is true for most people I know. A lot of people would do well with a $150 used computer from one of these guys–click one of the links and scroll to the bottom and find a link that says “systems” or “desktop PCs”–and a really good keyboard, mouse, and monitor. Or if you want new, buy the cheapest PC available from a first-tier vendor you trust, then spend the money you would have spent on a 3 GHz Pentium 4 Extreme Edition CPU on something that’s actually useful, like that thing you spend all that time staring at. Get a flat-panel LCD monitor that runs at a comfortable resolution. Ditch the $3 keyboard and mouse that comes with the system and buy nice(r) ones. (The best keyboards on the market bring sticker shock–I have trouble justifying a $150 computer keyboard too, I know.)

Chances are you’ll have money left over. Good. In two years the budget CPU will be faster than that P4 Extreme Edition that Intel is touting today. Start saving for 2005’s budget PC now. The monitor, keyboard, and mouse you just shelled out the big bucks for will still work with it, and you’ll be a lot happier.

More on tiny but potentially modern Linux distributions

I found a couple of interesting things on Freshmeat today.
First, there’s a Linux-bootfloppy-from-scratch hint, in the spirit of Linux From Scratch, but using uClibc and Busybox in place of the full-sized standard GNU userspace. This is great for low-memory, low-horsepower machines like 386s and 486s.

I would think it would provide a basis for building small Linux distributions using other tools as well.
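I haven’t tried it, but the Busybox half of that recipe is short enough to sketch; the tarball version and the toolchain prefix below are stand-ins for whatever you actually have:

tar xjf busybox-1.00.tar.bz2 && cd busybox-1.00
make defconfig                    # start from the default applet selection
make menuconfig                   # turn on "Build BusyBox as a static binary"
make CROSS_COMPILE=i386-uclibc-   # hypothetical uClibc toolchain prefix
./busybox --install -s /mnt/floppy/bin   # one symlink per applet on the target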

What other tools? Well, there’s skarnet.org, which provides bunches of small tools. The memory usage on skarnet’s web server, not counting the kernel, is 2.8 megs.

Skarnet’s work builds on that of Fefe, who provides dietlibc (yet another tiny libc) and a large number of small userspace tools. (These tools provide most of the basis for DietLinux, which I haven’t been able to figure out how to install, sadly. Some weekend I’ll sign up for the mailing list and give it another go.)

And then there’s always asmutils, which is a set of tools written in pure x86 assembly language and doesn’t use a libc at all, and the e3 text editor, a 12K beauty that can use the keybindings for almost every popular editor, including two editors that incite people into religious wars.

These toolkits largely duplicate one another but not completely, so they could be complementary.

If you want to get really sick, you can try matching this kind of stuff up with Linux-Lite v1.00, which is a set of patches to the Linux 1.09 kernel dating back to 1998 or so to make it recognize things like ELF binaries. And there was another update in 2002 that lists fixes for the GCC 2.7.2 compiler in its changelog. I don’t know how these two projects were related, if at all, besides their common ancestry.

Or you could try using a 1.2 kernel. Of course compiling those kernels with a modern compiler could also be an issue. I’m intrigued by the possibility of a kernel that could itself use less than a meg, but I don’t know if I want to experiment that much.

And I’m trying to figure out my fascination with this stuff. Maybe it’s because I don’t like to see old equipment go to waste.

DietLinux — a Linux that boots in under 10 seconds

The tinkerer in me just couldn’t stay away. I saw a reference on Linux Weekly News to DietLinux and had to look at it.
DietLinux is an example of a Linux distribution that can’t properly be called GNU/Linux, because the majority of its userspace didn’t come from the GNU project. GNU’s libc–the main API for Unixish systems, and I’ll call Linux a Unix just to hack off SCO–is replaced with an alternative, trimmed-down libc called dietlibc. It’s not feature-complete but it’s tiny. Those of you who programmed casually in the 1980s and 1990s probably remember a day when you could write a fairly sophisticated program in a few kilobytes. Under modern operating systems, a program that simply emits “Hello, world!” can take up 32K or more. Using dietlibc instead of GNU’s libc shrinks that program back down to a couple of kilobytes.
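dietlibc ships a wrapper command called diet that swaps its headers and libraries in for glibc’s, so seeing the difference takes about a minute (a sketch; exact sizes will vary with your compiler and versions):

$ cat hello.c
#include <stdio.h>
int main(void) { printf("Hello, world!\n"); return 0; }

$ gcc -static -o hello.glibc hello.c   # statically linked against glibc
$ diet gcc -o hello.diet hello.c       # dietlibc links statically by default
$ ls -l hello.glibc hello.diet         # compare the two binaries

strip(1) shrinks both further, but the ratio stays lopsided in dietlibc’s favor.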

The majority of DietLinux’s userspace comes from Felix von Leitner, the author of dietlibc. Von Leitner reimplemented init–the program that bootstraps a Unix system once the kernel is loaded–and getty, which is the program that handles text-based logins. These unglamorous programs can eat up a fair chunk of memory, and since Unix systems typically go for long periods of time without being rebooted, it’s a bit of a waste unless you need certain features provided by the more traditional init and getty programs. He also wrote replacements for several standard utilities.

Obviously, not every program in the world designed for glibc will compile and run under dietlibc, so DietLinux won’t ever be a complete general-purpose distribution. But for network infrastructure glue-type servers providing services like firewalling, DNS and DHCP (all of which already function), it would be perfect.

I don’t know what the future plans for DietLinux are. The asmutils provide an impressive number of userspace and server utilities, written in assembly language with very low overhead, and would appear to be a nice complement to DietLinux’s infrastructure. Their use would limit DietLinux to x86, however. And the text editor e3 is tiny, full-featured, and emulates keybindings for vi, emacs, WordStar, and Pico, so it’s friendly to pretty much any command-line jockey regardless of heritage and takes little space.

It’s also not a newbie distribution. Installation requires a fair bit of skill and pretty much requires an existing Linux system to bootstrap it.

But it’s definitely something I want to keep an eye on. I’m highly tempted to put it on one of my 486s. I just wish I had more time to mess around with it.