Things to look for in a wireless router

It’s the time of year that a lot of people buy computer equipment, and wireless networking is one of the things people look for. But what things should be on the shopping list?

I was hoping you’d ask that question.

Compatibility with what you already have, if possible. Routers are available that speak 802.11a, 802.11b, or 802.11g, or all three. If you already have some wireless equipment, look for something that can speak its language.

Cordless phone interference. 2.4 GHz cordless phones will interfere with 802.11b and 802.11g. 802.11a works at a different frequency (5 GHz), so it sidesteps the problem, but it might be cheaper to replace your 2.4 GHz phone with a 900 MHz phone.

Speed. 802.11a and 802.11g operate at 54 Mbps, which is considerably nicer than 802.11b’s 11 Mbps, although either is much faster than current U.S. broadband connections, which tend to top out around 3 Mbps. If you move a lot of files around, you’ll appreciate the 54 Mbps speed. If your primary use of wireless is sharing an Internet connection and a printer or two, 802.11b is probably fast enough, and it’s usually cheaper, with the downside that the older standard’s life expectancy is shorter.

802.11g is currently the most popular standard, because it gives 54 Mbps speed and offers compatibility with existing 802.11b equipment. Use this information as you will. If you’re of the security by obscurity mindset, 802.11a is a better choice, as a wardriver is more likely to be driving around with an 802.11b or 802.11g card. If you want to make sure your buddies can hook up when they come over, or you can hook up at your buddies’ places, 802.11g is the better choice.

Brand. Match the brands of router and cards, if at all possible. This makes configuration and security much simpler.

WPA. The encryption used by older standards is relatively weak. You want to enable 128-bit WEP (256-bit WEP is better but still not as good as WPA), change the SSID and disable SSID broadcast, and hard-code your MAC addresses so that only your cards can use your router. This protects you from someone driving around your neighborhood with a laptop and using your Internet connection to send out spam or transfer illicit material that can be traced back to you. Do you want the RIAA suing you because someone used your Internet connection to download 400 gigs’ worth of boy-band MP3s off Kazaa? Worse yet, if that happens, word might get out that you like that stuff.

WPA adds another layer of protection on top of these (which are standard issue by now). Rather than the security key staying fixed, WPA generates keys dynamically from trillions of possibilities. Sufficient CPU power to crack WPA and either monitor your transmissions or use your access point might someday exist, but for now it gives the best protection available, so you should get it and use it. This USRobotics whitepaper on security ought to be a must-read.

Built-in firewall with port forwarding. This is a standard feature on all brand-name units and ought to be on the off brands as well, but it doesn’t hurt to double check. Hardware firewalls are far superior to software firewalls–they don’t annoy you with popups and they can’t be disabled by a malicious process. Port forwarding is necessary for a lot of games, and also if you want to run your own mail or web server.

Hackability. By this I don’t mean the ability of an outsider to get in, I mean your ability to add capability to it. The Linksys WRT54G is based on Linux, so it has a big following with an underground community adding capabilities to it all the time. If you want to take advantage of this, look for a WRT54G or another device with a similar following.

Troubleshooting a Compaq Proliant 1600

I still work on a lot of Compaq Proliant 1600s. In their day, they were very versatile servers, packing lots of drive bays and open expansion slots into a 5U package. They were also very reliable.

Now that they are five years old or even older, they are less so. But I’ve collected some good suggestions from Compaq and HP technicians about working on them.

The biggest problem with the 1600 is that so many parts are socketed. Over time, socketed components tend to work themselves loose. So, when a 1600 crashes a lot but will pass its built-in diagnostics with flying colors, the best thing to do is to completely disassemble it and put it back together.

If it seems to be having memory problems, don’t just reseat the processor board and/or replace the memory. I had one 1600 exhibit memory failures that would not go away until I replaced the PCI board, of all things. Why? Beats me. The HP technician was as stumped as I was. So reseat that board too.

It never hurts to clean the connectors when you have the system apart. Get some zero-residue contact cleaner from a hardware or auto parts store. Be sure it’s zero-residue. A lot of contact cleaners contain oil, which isn’t going to help intermittent electrical connections at all. If in doubt, skip the contact cleaner entirely and clean the contacts with a cotton swab and rubbing alcohol instead. Need I also mention you need to stay grounded at all times while doing these procedures?

When replacing the PCI and CPU modules, you have to use a lot of force. Don’t rely on the plastic releases on the back to put them in. Whenever I’ve seen a veteran Compaq technician reinstall one of these modules, he’s slammed the module into the back of the computer with so much force that it moved the system. If you don’t think you’re going to break it, you probably aren’t doing it hard enough.

Newer Proliant servers have many fewer socketed components, so their long-term reliability prospects are higher. They also usually have LEDs that indicate failed components, making diagnostics virtually irrelevant and system repair much more straightforward. But when replacement isn’t an option just yet, it’s nice to know there are things to do to return a 1600 to life.

Wake up your Backup Exec remote agent

Usually when a Backup Exec remote agent refuses to respond, and stopping and starting the service does no good, the solution is to reboot. (You can verify the agent is unresponsive by creating a new job and attempting to connect to the remote server, only to find the drive selection boxes greyed out.)

There’s a less drastic method, more appropriate for production servers: telnet to the remote server on port 10000. As in:

telnet 192.168.1.2 10000

When I did it, I got a bunch of garbage characters. I closed the window, then tried to connect again. This time, the agent was awake.

I have no idea if Veritas sanctions this or not, but it worked for me, and I like the answer a lot better than rebooting.

So there is a benefit to running Windows Server 2003 and XP

One of the reasons Windows Server 2003 and XP haven’t caught on in corporate network environments is that Microsoft has yet to demonstrate any real benefit to either one of them over Windows 2000.

Believe it or not, there actually is one benefit. It may or may not be worth the cost of upgrading, but if you’re buying licenses now and installing 2000, this information might convince you it’s worth it to install the current versions instead.

The benefit: NTFS compression.

Hang on there Dave, I hear you saying. NTFS compression has been around since 1994, and hard drives are bigger and cheaper now than ever before. So why do I want to mess around with risky data compression?

Well, data compression isn’t fundamentally risky–this site uses data compression, and I’ve got the server logs that prove it works just fine–it just got a bad rap in the early 90s when Microsoft released the disastrous DoubleSpace with DOS 6.0. And when your I/O bus is slow and your CPU is really fast, data compression actually speeds things up, as people who installed DR DOS on their 386DX-40s with a pokey 8 MHz ISA bus found out in 1991.

So, here’s the payoff with NTFS compression when it’s used on Windows Server 2003 with XP clients: the data travels from the server to the clients in compressed form.

If budget cuts still have you saddled with a 100 Mb or, worse yet, a 10 Mb network, that compression will speed things up mightily. It won’t help you move JPEGs around your network any faster, but Word and Excel documents will zoom around a lot quicker, because those types of documents pack down considerably.

The faster the computers are on both ends, the better this works. And if the server has one or more multi-GHz CPUs, the compression overhead won’t slow down disk writes much. You can also use this strategically. Don’t compress the shares belonging to your graphic artists and web developers, for instance. Their stuff tends not to compress well, and if any of them are using Macintoshes, the server will have to decompress it to send it to the Macs anyway.

But for shares that are primarily made up of files created by MS Office, compress away and enjoy your newfound network speed.
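If you’d rather not flip the compression bit folder by folder in Explorer, Windows’ built-in compact utility can do it from the command line. Here’s a minimal sketch; the path is hypothetical, so substitute the folder behind your share:

compact /c /s:D:\Shares\Office /i

The /c switch compresses, /s recurses into subdirectories, and /i tells it to keep going past any errors. Run compact /s:D:\Shares\Office afterward, with no /c, to see the compression ratios it achieved.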

Resolving an issue with slow Windows XP network printing

There is a little-known issue with Windows XP and network printing that does not seem to have been completely resolved. It’s a bit elusive and hard to track down. Here are my notes and suggestions, after chasing the problem for a couple of weeks.

The symptoms are that printing occurs very slowly, if at all. Bringing up the properties for the printer likewise happens very slowly, if at all. An otherwise identical Windows 2000 system will not exhibit the same behavior.

The first idea that came into my head was disabling QoS in the network properties, just because that’s solved other odd problems for me. It didn’t help me but it might help you.

Hard-coding the speed of the NIC rather than using autonegotiate sometimes helps odd networking issues. Try 10 Mbps/half duplex first, since it’s the least common denominator.

Some people have claimed using PCL instead of PostScript, or vice versa, cleared up the issue. It didn’t help us. PCL is usually faster than PostScript since it’s a more compact language. Changing printer languages may or may not be an option for you anyway.

Some people say installing SP2 helps. Others say it makes the problem worse.

The only reliable answer I have found, which makes no sense to me whatsoever, is network equipment. People who are plugged in to switches don’t have this problem. People who are plugged into hubs often have this problem, but not always.

The first thing to try is plugging the user into a different hub port, if possible. Sometimes ports go bad, and XP seems to be more sensitive to a deteriorating port than previous versions of Windows.
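A cheap way to check a suspect port is to let a continuous ping run against the print server and watch for timeouts or wildly varying response times; the hostname here is hypothetical:

ping -t printserver

Press Ctrl+C to stop it. A port that drops even a few packets under that trivial load is a good candidate for the problem.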

In the environment where I have observed this problem, the XP users who are plugged into relatively new (less than 5 years old) Cisco 10/100 switches do not have this problem at all.

This observation makes me believe that Windows XP may also like aging consumer-grade switches from the likes of D-Link, Belkin, and Linksys a lot less than newer, uber-expensive professional-grade switches from companies like Cisco. I have never tried Windows XP with old, inexpensive switches. I say this only because I have observed Veritas Backup Exec, which is very network intensive, break on a six-year-old D-Link switch but work fine on a Cisco.

I do not have the resources to conduct a truly scientific experiment, but these are my observations based on the behavior of about a dozen machines using two different 3Com 10-megabit hubs and about three different Cisco 10/100 switches.

Undocumented Backup Exec error

I got an odd Backup Exec error message on Thursday night that I wasn’t able to find in Veritas’ knowledge base.

The error code is 0x3a18 – 0x3a18 (14872). Since it seems otherwise undocumented, I might as well document what I know about it.

In my case at least, the cause of the error seems to have been insufficient disk space. The drive where Backup Exec was storing its catalogs was filling up, and this cryptic error message was the result. When I reran the job that failed, I got an "Insufficient disk space to write catalogs" error in a popup, but not in the system log. That doesn’t help you if you happen not to be logged in at the time of the error. Seeing as this error happened at 12:30 AM, I wasn’t.

This error was especially nasty because it caused Backup Exec to not recognize that the tape was allocated, so it overwrote the three good jobs it had completed that night with two bad jobs. If there’s anything more enraging than a failed backup, it’s a failed backup that took a bunch of others down with it.

Many other Backup Exec errors are caused by low disk space. This is so simple that it ought to be the first thing I check, but more often than not I forget. I need to remind myself.
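The check itself takes seconds. On Windows XP or Server 2003, fsutil will report free space from a command prompt; substitute whatever drive holds your Backup Exec catalogs:

fsutil volume diskfree c:

On older versions of Windows, a plain dir c:\ shows the free byte count on its last line.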

How frequently you run out of disk space on your system drive, of course, increases exponentially with each person who has admin rights on the server.

Backup Exec misadventures

(Subtitle: My coworkers’ favorite new Dave Farquhar quote)

“If your product isn’t suitable for use on production servers, then why didn’t you tell us that up front and save us all a lot of wasted time?”

(To a Veritas Backup Exec support engineer when he insisted that I reboot four production web servers to see if that cleared up a backup problem.)

When I refused to reboot my production web servers, he actually gave me a bit of useful information. Since Veritas doesn’t tell you this anywhere on their Web site, I don’t feel bad at all about giving that information here.

When backing up through a firewall, you have to tell Backup Exec what ports to use. It defaults to ports in the 10,000 range. That’s changeable, but changing it through the user interface (Tools, Options, Network) doesn’t do it. It takes an act of Congress to get that information out of Veritas.

What Veritas doesn’t tell you is that the media server (the server with the tape drive) should talk on a different range of ports than the remote servers you’re backing up. While it can still work if you don’t, chances are you’ll get a conflict.

The other thing Veritas doesn’t tell you is that you need a minimum of two, and an ideal of four, ports per resource being backed up. So if the server has four drives and a system registry, which isn’t unusual, it takes a minimum of 10 TCP ports to back it up, and 20 is safer.
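If you want to verify what the remote agent is actually listening on before you open anything on the firewall, netstat on the remote server will show it. This assumes the default 10,000 range mentioned above:

netstat -an | find "10000"

If nothing comes back, the agent either isn’t running or has been moved to another port range.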

Oh, and one other thing: If anyone is using any other product to back up Windows servers, I would love to hear about it.

So, do you still think having Internet Explorer on your server is a good idea?

Microsoft is making its updates to IE only available for Windows XP.

To which I say, what about all of those servers out there?

Surely they include Server 2003 in this. But that’s a problem. Upgrading to Server 2003 isn’t always an option. Some applications only run on Windows NT 4.0, or on Windows 2000.

Unfortunately, sometimes you have to have a web browser installed on a server to get updates, either from your vendor or from MS. Windows Update, of course, only works with Internet Explorer.

One option is to uninstall Internet Explorer using the tools from litepc.com. A potentially more conservative option is to keep IE installed, use it exclusively for Windows Update, and install another lightweight browser for searching knowledge bases and downloading patches from vendors. Offbyone is a good choice. It has no Java or Javascript, so in theory it should be very secure. It’s standalone, so it won’t add more muck to your system. To install it, copy the executable somewhere. To uninstall it, delete the executable.

An even better option is just to run as few servers on Windows as possible, since Microsoft insists on installing unnecessary and potentially exploitable software on servers–Windows Media Player and DirectX are other glaring examples of this–but I seem to hold the minority opinion on that. Maybe now that they willfully and deliberately install security holes on servers and refuse to patch them unless you run the very newest versions, that will change.

But I’m not holding my breath.

VMWare is in Microsoft’s sights

Microsoft has released its Virtual Server product, aimed at VMWare. Price is an aggressive $499.

I have mixed feelings about it.

VMWare is expensive, with a list price about eight times Virtual Server’s. But I’m still not terribly impressed.

For one, with VMWare ESX Server, you get everything you need, including a host OS. With Microsoft Virtual Server, you have to provide Windows Server 2003. By the time you do that, Virtual Server is only about half the price of VMWare.

I think you can make up the rest of that difference very quickly on TCO. VMWare’s professional server products run on a Linux base that requires about 256 MB of overhead. Ever seen Windows Server 2003 on 256 megs of RAM? The CPU overhead of the VMWare host is also very low. When you size a VMWare server, you can pretty much go on a 1:1 basis. Add up the CPU speed and memory of the servers you’re consolidating, buy a server that size, put VMWare on it, and then move your servers to it. Consolidating four 500 MHz servers with 512 MB each, for example, calls for roughly a 2 GHz box with 2 GB of RAM, plus the host’s modest overhead. They’ll perform as well, if not a little bit better, since at peak times they can steal some resources from an idle server.

Knowing Microsoft, I’d want to give myself at least half a gig of RAM and at least half a gigahertz of CPU time for system overhead, minimum. Twice that is probably more realistic.

Like it or not, Linux is a reality these days. It’s an outstanding choice for a lot of infrastructure-type servers like DHCP, DNS, Web services, mail services, spam filtering, and others, even if you want to maintain a mixed Linux/Windows environment. While Linux will run on MS Virtual Server’s virtual hardware, and it’s only a matter of time before adjustments are made to Linux to make it run even better there, there’s no official support for it. So PHBs will be more comfortable running their Linux-based VMs under VMWare than under Virtual Server. (There’s always User-Mode Linux for Linux virtual hosts, but that will certainly be an under-the-radar installation in a lot of shops.)

While there have been a number of vulnerabilities in VMWare’s Linux host this year, the number is still lower than in Windows Server 2003. I’d rather take my virtual host server down once a quarter for patching than once a month.

I wouldn’t put either host OS on a public Internet address though. Either one needs to be protected behind a firewall, with its host IP address on a private network, to protect the host as much as possible. Remember, if the host is compromised, you stand to lose all of the servers on it.

The biggest place where Microsoft has a price advantage is the migration of existing servers. Microsoft’s migration tool is still in beta, but it’s free–at least for now. VMWare’s P2V Assistant costs a fortune. I was quoted $2,000 for the software and $8,000 for mandatory training, and that was to migrate 25 servers.

If your goal is to get those NT4 servers whose hardware is rapidly approaching the teenage years onto newer hardware with minimal disruption–every organization has those–then Virtual Server is a no-brainer. Buy a copy of Virtual Server and new, reliable server hardware, migrate those aging machines, and save a fortune on your maintenance contract.

I’m glad to see VMWare get some competition. I’ve found it to be a stable product once it’s set up, but the user interface leaves something to be desired. When I build a new virtual server or change an existing one, I find myself scratching my head over whether certain options are under “Hardware” or under “Memory and Processors”. So it probably takes me twice as long to set up a virtual server as it ought to, but that’s still less time than it takes to spec and order a server, or, for that matter, to unbox a new physical server when it arrives.

On the other hand, I’ve seen what happens to Microsoft products once they feel like they have no real competition. Notice how quickly new, improved versions of Internet Explorer come out? And while Windows XP mostly works, when it fails, it usually fails spectacularly. And don’t even get me started on Office.

The pricing won’t stay the same either. While the price of hardware has come down, the price of Microsoft software hasn’t come down nearly as quickly, and in some cases has increased. That’s not because Microsoft is inherently ruthless or even evil (that’s another discussion), it’s because that’s what monopolies have to do to keep earnings at the level necessary to keep stockholders and the SEC happy. When you can’t grow your revenues by increasing your market share, you have to grow your revenues by raising prices. Watch Wal-Mart. Their behavior over the next couple of decades will closely mirror Microsoft’s. Since they’re in a bigger industry, they move more slowly. But that’s another discussion too.

The industry can’t afford to hand Microsoft another monopoly.

Some people will buy this product just because it’s from Microsoft. Others will buy it just because it’s cheaper. Since VMWare’s been around a good long while and is mature and stable and established as an industry standard, I hope that means it’ll stick around a while too, and come down in price.

But if you had told me 10 years ago that Novell Netware would have single-digit market share now, I wouldn’t have believed you. Then again, the market’s different in 2004 than it was in 1994.

I hope it’s different enough.

Two useful tools for upgrading Windows servers

Old servers never die. That’s part of the reason I share responsibility with two other people for administering 125 of the wretched things. And “wretched” is a nice way of describing the state of an NT server that’s almost old enough to drive.

How about two free tools for moving at least the shares and print queues off old servers and onto a new one? Moving the applications is still up to you, but moving file shares, either manually or via a batch file (more on that below), is tedious work, so these are still welcome tools for the weary sysadmin.

The first is Windows Print Migrator 3.1. This is a tool you should be running anyway, even if you can’t get rid of that particular print server, because it backs up everything about your printer configuration. Should that print server die by means of something other than a BOFH-style unfortunate (wink wink nudge nudge) incident, this tool’s backup copy lets you re-establish print queues in no time flat.

The second is Microsoft’s File Server Migration Toolkit, which allows you to move the shares from one server to another, and, if you have DFS set up, you can even preserve the UNCs so that the migration is completely transparent to the end users.
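If the toolkit doesn’t fit your situation and you end up scripting shares by hand after all, the core of such a batch file is one net share line per share. A minimal sketch, with hypothetical share names and paths:

net share Public=D:\Shares\Public /remark:"Public files"
net share Finance=D:\Shares\Finance /remark:"Finance department"

That recreates the shares but not the NTFS permissions on the files, which travel with the data itself as long as you copy it with a permissions-aware tool.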

Both tools are a little bit tricky, so you want to play around with them in the test room with a couple of old machines on a hub or switch that’s not connected to the main network before you try them in production.

But once you master them, they can take work that would have taken you days to finish and reduce it to an hour or so.