So there is a benefit to running Windows Server 2003 and XP

One of the reasons Windows Server 2003 and XP haven’t caught on in corporate network environments is that Microsoft has yet to demonstrate any real benefit to either one of them over Windows 2000.

Believe it or not, there actually is one benefit. It may or may not be worth the cost of upgrading, but if you’re buying licenses now and installing 2000, this information might convince you it’s worth it to install the current versions instead.

The benefit: NTFS compression.

Hang on there, Dave, I hear you saying. NTFS compression has been around since 1994, and hard drives are bigger and cheaper now than ever before. So why do I want to mess around with risky data compression?

Well, data compression isn’t fundamentally risky–this site uses data compression, and I’ve got the server logs that prove it works just fine–it just got a bad rap in the early 90s when Microsoft released the disastrous DoubleSpace with MS-DOS 6.0. And when your I/O bus is slow and your CPU is really fast, data compression actually speeds things up, as people who installed DR DOS on their 386DX-40s with a pokey 8 MHz ISA bus found out in 1991.

So, here’s the trick with NTFS compression when it’s used on Windows Server 2003 with XP clients: the data is transferred from the server to the clients in compressed form.

If budget cuts still have you saddled with a 100-megabit or, worse yet, a 10-megabit network, that data compression will speed things up mightily. It won’t help you move JPEGs around your network any faster, but Word and Excel documents will zoom around a lot quicker, because those types of documents compress extremely well.

The faster the computers are on both ends, the better this works. As long as the server has one or more multi-GHz CPUs, compression won’t slow down disk writes much. And you can use this strategically: don’t compress the shares belonging to your graphic artists and web developers, for instance. Their stuff tends not to compress well, and if any of them are using Macintoshes, the server will have to decompress it to send it to the Macs anyway.

But for shares that are primarily made up of files created by MS Office, compress away and enjoy your newfound network speed.
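
If you’d rather not click through folder properties on every share, compact.exe will do it from the command line. A minimal sketch, assuming a share at D:\Shares\Office (a made-up path; substitute your own):

compact /c /s:D:\Shares\Office /i /q

/c compresses, /s recurses into subdirectories, /i keeps going past errors, and /q keeps the output quiet. Run compact /s:D:\Shares\Office with no other switches afterward to see the compression ratios it achieved.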

Resolving an issue with slow Windows XP network printing

There is a little-known issue with Windows XP and network printing that does not seem to have been completely resolved. It’s a bit elusive and hard to track down. Here are my notes and suggestions, after chasing the problem for a couple of weeks.

The symptoms are that printing occurs very slowly, if at all. Bringing up the properties for the printer likewise happens very slowly, if at all. An otherwise identical Windows 2000 system will not exhibit the same behavior.

The first idea that came into my head was disabling QoS in the network properties, just because that’s solved other odd problems for me. It didn’t help me but it might help you.

Hard-coding the speed of the NIC rather than using autonegotiate sometimes helps odd networking issues. Try 10 Mbps/half duplex first, since it’s the least common denominator.

Some people have claimed using PCL instead of PostScript, or vice versa, cleared up the issue. It didn’t help us. PCL is usually faster than PostScript since it’s a more compact language. Changing printer languages may or may not be an option for you anyway.

Some people say installing SP2 helps. Others say it makes the problem worse.

The only reliable answer I have found, which makes no sense to me whatsoever, is network equipment. People who are plugged in to switches don’t have this problem. People who are plugged into hubs often have this problem, but not always.

The first thing to try is plugging the user into a different hub port, if possible. Sometimes ports go bad, and XP seems to be more sensitive to a deteriorating port than previous versions of Windows.

In the environment where I have observed this problem, the XP users who are plugged into relatively new (less than 5 years old) Cisco 10/100 switches do not have this problem at all.

This observation makes me believe that Windows XP may also be less happy with aging consumer-grade switches, like D-Link, Belkin, Linksys, and the like, than with newer and/or professional-grade, uber-expensive switches from companies like Cisco. I have never tried Windows XP with old, inexpensive switches; I say this only because I have observed Veritas Backup Exec, which is very network-intensive, break on a six-year-old D-Link switch but work fine on a Cisco.

I do not have the resources to conduct a truly scientific experiment, but these are my observations based on the behavior of about a dozen machines using two different 3Com 10-megabit hubs and about three different Cisco 10/100 switches.

Undocumented Backup Exec error

I got an odd Backup Exec error message on Thursday night that I wasn’t able to find in Veritas’ knowledge base.

The error code is 0x3a18 (14872). Since it seems otherwise undocumented, I might as well document what I know about it.

In my case at least, the cause of the error seems to have been insufficient disk space. The drive where Backup Exec was storing its catalogs was filling up, and this cryptic error message was the result. When I reran the job that failed, I got an "Insufficient disk space to write catalogs" error in the popup, but not in the system log. That doesn’t help you if you happen to not be logged in at the time of the error. Seeing as this error happened at 12:30 AM, I wasn’t.

This error was especially nasty because it caused Backup Exec to not recognize that the tape was allocated, so it overwrote the three good jobs it had completed that night with two bad jobs. If there’s anything more enraging than a failed backup, it’s a failed backup that took a bunch of others down with it.

Many other Backup Exec errors are caused by low disk space. This is so simple that it ought to be the first thing I check, but more often than not I forget. I need to remind myself.

How frequently you run out of disk space on your system drive, of course, increases exponentially with each person who has admin rights on the server.

Backup Exec misadventures

(Subtitle: My coworkers’ favorite new Dave Farquhar quote)

“If your product isn’t suitable for use on production servers, then why didn’t you tell us that up front and save us all a lot of wasted time?”

(To a Veritas Backup Exec support engineer when he insisted that I reboot four production web servers to see if that cleared up a backup problem.)

When I refused to reboot my production web servers, he actually gave me a bit of useful information. Since Veritas doesn’t tell you this anywhere on their Web site, I don’t feel bad at all about giving that information here.

When backing up through a firewall, you have to tell Backup Exec what ports to use. It defaults to ports in the 10,000 range. That’s changeable, but changing it through the user interface (Tools, Options, Network) doesn’t do it. It takes an act of Congress to get that information out of Veritas.

What Veritas doesn’t tell you is that the media server (the server with the tape drive) should talk on a different range of ports than the remote servers you’re backing up. While it can still work if you don’t, chances are you’ll get a conflict.

The other thing Veritas doesn’t tell you is that you need a minimum of two, and ideally four, ports per resource being backed up. So if the server has four drives and a system registry, which isn’t unusual, that’s five resources: a minimum of 10 TCP ports to back it up, and 20 is safer.

Oh, and one other thing: If anyone is using any other product to back up Windows servers, I would love to hear about it.

So, do you still think having Internet Explorer on your server is a good idea?

Microsoft is making its updates to IE only available for Windows XP.

To which I say, what about all of those servers out there?

Surely they include Server 2003 in this. But that’s a problem. Upgrading to Server 2003 isn’t always an option. Some applications only run on Windows NT 4.0, or on Windows 2000.

Unfortunately, sometimes you have to have a web browser installed on a server to get updates, either from your vendor or from MS. Windows Update, of course, only works with Internet Explorer.

One option is to uninstall Internet Explorer using the tools from litepc.com. A potentially more conservative option is to keep IE installed, use it exclusively for Windows Update, and install another lightweight browser for searching knowledge bases and downloading patches from vendors. Offbyone is a good choice. It has no Java or Javascript, so in theory it should be very secure. It’s standalone, so it won’t add more muck to your system. To install it, copy the executable somewhere. To uninstall it, delete the executable.

An even better option is just to run as few servers on Windows as possible, since Microsoft insists on installing unnecessary and potentially exploitable software on servers–Windows Media Player and DirectX are other glaring examples of this–but I seem to hold the minority opinion on that. Maybe now that they willfully and deliberately install security holes on servers and refuse to patch them unless you run the very newest versions, that will change.

But I’m not holding my breath.

VMWare is in Microsoft’s sights

Microsoft has released its Virtual Server product, aimed at VMWare. Price is an aggressive $499.

I have mixed feelings about it.

VMWare is expensive, with a list price of about 8 times as much. But I’m still not terribly impressed.

For one, with VMWare ESX Server, you get everything you need, including a host OS. With Microsoft Virtual Server, you have to provide Windows Server 2003. By the time you do that, Virtual Server is about half the price of VMWare.

I think you can make up the rest of that difference very quickly on TCO. VMWare’s professional server products run on a Linux base that requires about 256 MB of overhead. Ever seen Windows Server 2003 on 256 megs of RAM? The CPU overhead of the VMWare host is also very low. When you size a VMWare server, you can pretty much go on a 1:1 basis. Add up the CPU speed and memory of the servers you’re consolidating, buy a server that size, put VMWare on it, and then move your servers to it. They’ll perform as well, if not a little bit better since at peak times they can steal some resources from an idle server.
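
To put hypothetical numbers on it: consolidating three servers running at 500 MHz/256 MB, 700 MHz/512 MB, and 800 MHz/256 MB adds up to 2 GHz and 1 GB, so a single 2 GHz box with 1 GB of RAM for the guests, plus VMWare’s 256 MB of host overhead, should carry all three comfortably.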

Knowing Microsoft, I’d want to give myself at least a half gig of RAM and at least half a gigahertz of CPU time for system overhead, minimum. Twice that is probably more realistic.

Like it or not, Linux is a reality these days. Linux is an outstanding choice for a lot of infrastructure-type servers like DHCP, DNS, Web services, mail services, spam filtering, and others, even if you want to maintain a mixed Linux/Windows environment. While Linux will run on MS Virtual Server’s virtual hardware, and it’s only a matter of time before adjustments are made to Linux to make it run even better, there’s no official support for it. So PHBs will be more comfortable running their Linux-based VMs under VMWare than under Virtual Server. (There’s always User-Mode Linux for Linux virtual hosts, but that will certainly be an under-the-radar installation in a lot of shops.)

While there have been a number of vulnerabilities in VMWare’s Linux host this year, the number is still lower than for Windows Server 2003. I’d rather take my virtual host server down once a quarter for patching than once a month.

I wouldn’t put either host OS on a public Internet address though. Either one needs to be protected behind a firewall, with its host IP address on a private network, to protect the host as much as possible. Remember, if the host is compromised, you stand to lose all of the servers on it.

The biggest place where Microsoft has a price advantage is the migration of existing servers. Microsoft’s migration tool is still in beta, but it’s free–at least for now. VMWare’s P2V Assistant costs a fortune. I was quoted $2,000 for the software and $8,000 for mandatory training, and that was to migrate 25 servers.

If your goal is to get those NT4 servers whose hardware is rapidly approaching the teenage years onto newer hardware with minimal disruption–every organization has those–then Virtual Server is a no-brainer. Buy a copy of Virtual Server and new, reliable server hardware, migrate those aging machines, and save a fortune on your maintenance contract.

I’m glad to see VMWare get some competition. I’ve found it to be a stable product once it’s set up, but the user interface leaves something to be desired. When I build a new virtual server or change an existing one, I find myself scratching my head over whether certain options are under “Hardware” or under “Memory and Processors”. So it probably takes me twice as long to set up a virtual server as it ought to, but that’s still less time than it takes to spec and order a server, or, for that matter, to unbox a new physical server when it arrives.

On the other hand, I’ve seen what happens to Microsoft products once they feel like they have no real competition. Notice how quickly new, improved versions of Internet Explorer come out? And while Windows XP mostly works, when it fails, it usually fails spectacularly. And don’t even get me started on Office.

The pricing won’t stay the same either. While the price of hardware has come down, the price of Microsoft software hasn’t come down nearly as quickly, and in some cases has increased. That’s not because Microsoft is inherently ruthless or even evil (that’s another discussion); it’s because that’s what monopolies have to do to keep earnings at the level necessary to keep stockholders and the SEC happy. When you can’t grow your revenues by increasing your market share, you have to grow your revenues by raising prices. Watch Wal-Mart: their behavior over the next couple of decades will closely mirror Microsoft’s. Since their industry is bigger, they’ll move more slowly. But that’s another discussion too.

The industry can’t afford to hand Microsoft another monopoly.

Some people will buy this product just because it’s from Microsoft. Others will buy it just because it’s cheaper. Since VMWare’s been around a good long while and is mature and stable and established as an industry standard, I hope that means it’ll stick around a while too, and come down in price.

But if you had told me 10 years ago that Novell Netware would have single-digit marketshare now, I wouldn’t have believed you. Then again, the market’s different in 2004 than it was in 1994.

I hope it’s different enough.

Two useful tools for upgrading Windows servers

Old servers never die. That’s part of the reason I share responsibility with two other people for administering 125 of the wretched things. And “wretched” is a nice way of describing the state of an NT server that’s almost old enough to drive.

How about two free tools for moving at least the shares and print queues off old servers and onto a new one? Moving the applications is still up to you, but moving file shares, either manually or via a batch file, is tedious work, so these are still welcome tools for the weary sysadmin.

The first is Windows Print Migrator 3.1. This is a tool you should be running anyway, even if you can’t get rid of that particular print server, because it backs up everything about your printer configuration. Should that print server die by means of something other than a BOFH-style unfortunate (wink wink nudge nudge) incident, this tool’s backup copy lets you re-establish print queues in no time flat.
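
Print Migrator also works from the command line, which is handy if you want to schedule regular backups of the print configuration. A rough sketch from memory (run printmig -? to confirm the switches on your version; the cab file path and server names here are made up):

printmig -b c:\backups\printers.cab \\oldserver
printmig -r c:\backups\printers.cab \\newserver

The first command backs up \\oldserver’s print configuration to a cab file; the second restores it to \\newserver.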

The second is Microsoft’s File Server Migration Toolkit, which allows you to move the shares from one server to another, and, if you have DFS set up, you can even preserve the UNCs so that the migration is completely transparent to the end users.

Both tools are a little bit tricky, so you want to play around with them in the test room with a couple of old machines on a hub or switch that’s not connected to the main network before you try them in production.

But once you master them, they can take work that would have taken you days to finish and reduce it to an hour or so.

Optimizing dynamic Linux webservers

Linux + Apache + MySQL + PHP (LAMP) provides an outstanding foundation for building a web server, for, essentially, the value of your time. And the advantages over static pages are fairly obvious: Just look at this web site. Users can log in and post comments without me doing anything, and content on any page can change programmatically. In my site’s case, links to my most popular pages appear on the front page, and as their popularity changes, the links change.

The downside? Remember the days when people bragged about how their 66 MHz 486 was a perfectly good web server? Kiss those goodbye. For that matter, your old Pentium-120 or even your Pentium II-450 may not be good enough either. Unless you know these secrets…

First, the simple stuff. A year and a half ago, I talked about programs that optimize HTML by removing some extraneous tags and even give you a leg up on translating to cascading style sheets (CSS). That’s a starting point.

Graphics are another problem. People want lots of them, and digital cameras tend to add some extraneous bloat to them. Edit them in Photoshop or another popular image editor–which you undoubtedly will–and you’ll likely add another layer of bloat to them. I talked about Optimizing web graphics back in May 2002.

But what can you do on the server itself?

First, regardless of what you’re using, you should be running mod_gzip in order to compress your web server’s output. It works with virtually all modern web browsers, and those browsers that don’t work with it negotiate with the server to get non-compressed output. My 45K front page becomes 6K when compressed, which is better than a seven-fold reduction. Suddenly my 128K uplink acts like more than half of a T1.

I’ve read several places that it takes less CPU time to compress content and send it than it does to send uncompressed content. On my P2-450, that seems to definitely be the case.

Unfortunately, mod_gzip is one of the most poorly documented Unix programs I’ve ever seen. I complained about this nearly three years ago, and the situation seems little improved.

A simple apt-get install libapache-mod-gzip in Debian doesn’t do the trick. You have to search /etc/apache/httpd.conf for the line that begins LoadModule gzip_module and uncomment it, then you have to add a few more lines. The lines to enable mod_gzip on TurboLinux didn’t save me this time–for one thing, it didn’t handle PHP output. For another, it didn’t seem to do anything at all on my Debian box.
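
For reference, once it’s uncommented, the line should look something like this. The module path is my assumption based on where Debian’s Apache 1.3 packages put modules, so verify it with dpkg -L libapache-mod-gzip:

LoadModule gzip_module /usr/lib/apache/1.3/mod_gzip.so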

Charlie Sebold to the rescue. He provided the following lines that worked for him on his Debian box, and they also worked for me:

# mod_gzip settings

# Master switch; browsers that can't handle gzip negotiate plain output
mod_gzip_on Yes
mod_gzip_can_negotiate Yes
mod_gzip_add_header_count Yes

# Skip files under 400 bytes; 0 means no upper size limit
mod_gzip_minimum_file_size 400
mod_gzip_maximum_file_size 0

# Workfile handling: compress up to ~100K in memory, larger files via /tmp
mod_gzip_temp_dir /tmp
mod_gzip_keep_workfiles No
mod_gzip_maximum_inmem_size 100000

# Reassemble chunked output (PHP and CGI) before compressing
mod_gzip_dechunk Yes

# Compress dynamic output
mod_gzip_item_include handler proxy-server
mod_gzip_item_include handler cgi-script

# Compress text and office-type MIME types; leave Javascript and images alone
mod_gzip_item_include mime ^text/.*
mod_gzip_item_include mime ^application/postscript$
mod_gzip_item_include mime ^application/ms.*$
mod_gzip_item_include mime ^application/vnd.*$
mod_gzip_item_exclude mime ^application/x-javascript$
mod_gzip_item_exclude mime ^image/.*$
mod_gzip_item_include mime httpd/unix-directory

# Compress by extension; exclude CSS (see below)
mod_gzip_item_include file .htm$
mod_gzip_item_include file .html$
mod_gzip_item_include file .php$
mod_gzip_item_include file .phtml$
mod_gzip_item_exclude file .css$

Gzipping anything below 400 bytes is pointless because of overhead, and gzipping CSS and Javascript files breaks Netscape 4 part of the time.

Most of the examples I found online didn’t work for me. Charlie said he had to fiddle a long time to come up with those. They may or may not work for you. I hope they do. Of course, there may be room for tweaking, depending on the nature of your site, but if they work, they’re a good starting point.
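
One quick way to tell whether mod_gzip is actually doing anything is to request a page while claiming to accept gzip and watch the response headers. Assuming wget is installed (substitute any URL on your server for localhost):

wget -S --header="Accept-Encoding: gzip" -O /dev/null http://localhost/

If a Content-Encoding: gzip header comes back, compression is working.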

Second, you can use a PHP accelerator. PHP is an interpreted language, which means that every time you run a PHP script, your server first has to compile the source code into bytecode and then execute it. That compilation step can take longer than running the script itself. PHP accelerators serve as a just-in-time compiler: they compile the script once and hold a copy in memory, so the next time someone accesses the page, the pre-compiled script runs. The result can sometimes be a tenfold increase in speed.

There are lots of them out there, but I settled on Ion Cube PHP Accelerator (phpa) because installation is a matter of downloading the appropriate pre-compiled binary, dumping it somewhere (I chose /usr/local/lib but you can put it anywhere you want), and adding a line to php.ini (in /etc/php4/apache on my Debian box):

zend_extension="/usr/local/lib/php_accelerator_1.3.3r2.so"

Restart Apache, and suddenly PHP scripts execute up to 10 times faster.

PHPA isn’t open source and it isn’t Free Software. Turck MMCache is, so if you prefer GPL, you can use it.

With mod_gzip and phpa in place and working, my web server’s CPU usage rarely goes above 25 percent. Without them, three simultaneous requests from the outside world could saturate my CPU.

With them, my site still isn’t quite as fast as it was in 2000 when it was just serving up static HTML, but it’s awfully close. And it’s doing a lot more work.


Confessions of a SQL 7 junkie

My name is Dave, and I’m a Microsoft junkie. So are the people I hang out with every day at work. We’re all junkies. We’re addicted to the glamor drug of Microsoft SQL Server 7.

I’m still trying to recover from the nightmare that is Microsoft SQL Server.

You see, I have a problem. My employer and most of its clients rely heavily on SQL Server. SQL Server is a touchy beast. We have some servers running completely unpatched SQL Server 7, for fear of breaking a client’s application. No, I absolutely will not tell you who my employer is or who those clients are.

That makes us, in Microsoft’s eyes, socialism-loving pinko Commies, since we won’t migrate to SQL 2000. Unfortunately, SQL 2000 isn’t completely compatible with SQL 7. So we’re forced into being pinko Commies.

Part of the reason SQL Slammer hit was the touchiness of the service packs and hotfixes, and part of it was the difficulty of installing them. The hotfix that would prevent SQL Slammer requires you to manually copy more than 20 files, mercifully spread out over only two directories. But it takes time and it’s easy to make a mistake. So Microsoft released a SQL 2000 patch with a nice, graphical installer. But the pinko Commies like me who still use SQL 7 have to manually copy files.

Now, SQL 7 isn’t vulnerable to SQL Slammer, but it has plenty of security flaws of its own. And there’s one thing that history has taught us about viruses. Every time a new virus hits, a game of one-upmanship ensues. Similar viruses incorporating new twists appear quickly. And eventually a virus combining a multitude of techniques using known exploits appears. A SQL Slammer derivative that hits SQL 7 in one way or another is only a question of time.

Someone asked me why we can’t just leave everything unpatched and beef up security. The problem is that while our firewall is fine and it protects us from the outside, it doesn’t do anything for us on the inside. So the instant some vendor or contractor comes in and plugs an infected laptop into our network–and it’s a question of when, not if–we’re sunk.

Can we take measures to keep anyone from plugging outside machines into our network? Yes. We can maintain a list of MAC addresses for inside equipment and configure our servers not to give IP addresses to anything else (there’s a sketch of the idea below). But that’s obstructive. The accounting department is already supremely annoyed with us because we have a firewall at all. Getting more oppressive when there’s even just one other option isn’t a good move.

People in the United States love freedom, and they get annoyed when it’s taken away, even in cases that are completely justifiable, like an employer blocking access to porn sites. But in a society where sysadmins have to explain that an employer’s property rights trump any given individual’s right to use work equipment for the purpose of seeing Pamela Anderson naked, one must be picky about what battles one chooses to fight.
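
For what it’s worth, here is roughly what that measure looks like in ISC dhcpd terms. This is just a sketch to illustrate the idea, not our actual setup, and the MAC address and subnet are made up:

# Declared hosts are "known"; everyone else gets no lease
host daves-workstation {
  hardware ethernet 00:02:b3:aa:bb:cc;
}

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  deny unknown-clients;
}

Maintaining that host list for a building full of PCs is exactly the kind of obstructive busywork I’m talking about.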

In a moment of frustration, after unsuccessfully patching one server and breaking it to the point where SQL wouldn’t run at all anymore, I pointed out how one can apply any and every security patch available for Debian Linux at any instant it comes out with two commands and the total downtime could be measured in seconds, if not fractions of a second. And the likelihood of breaking something is very slight because the Debian security people are anal-retentive about backward compatibility. The person listening didn’t like that statement. There’s a lot more software available for Windows, he said. I wondered aloud, later, what the benefit of building an enterprise on something so fragile would be. Jesus’ parable of building a house on rock rather than on sand came to mind. I didn’t bring it up. I wasn’t sure it would be welcome.
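
For the record, the two commands in question:

apt-get update
apt-get upgrade

The first refreshes the package lists, and the second installs whatever updated packages Debian has published, security fixes included.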

But I think I’ll keep on fighting that battle. Keeping up on Microsoft security patches is becoming a full-time job. I don’t know if we can afford a full-time employee who does nothing but read Microsoft security bulletins and regression-test patches to make sure they can be safely deployed. I also don’t know who would want that job. But we are quickly reaching the point where we are powerless and our lives are becoming unmanageable.

Such is the life of the sysadmin. It’s a little bit of a rush to come into crisis situations, and a lot of my clients know that when they see me, there’s something major going on because they only see me a couple of times a year. In the relatively glamor-less life of a sysadmin, those times are about as glamorous as it gets. And for a time, it can be fun. But when the hours get long and not everyone’s eager to cooperate, it gets pretty draining.