Will ZDNet ever get a clue about Linux?

The next time ZDNet runs a story about Linux and you start feeling the urge to click on the link and read it, I’ve got a piece of advice for you.
Lie down until it goes away.

If you have a clue about Linux, the story will just make you mad. If you’re trying to learn about Linux, ZDNet will fill you up with enough misinformation to confuse you for weeks.
Read more

Getting out of a sticky BIND

Setting up DNS on Linux isn’t supposed to be the easiest thing in the world. But it wasn’t supposed to be this hard either.
I installed Debian (since it’s nice and lean and mean) and BIND 9.2.1, and dutifully entered the named.conf file and the zone files. I checked their syntax with the included tools (named-checkconf and named-checkzone), and everything checked out fine. But my Windows PCs wouldn’t resolve against it.
Read more

A useful Linux app for your CD-R

Quit wasting space on your CD rack with CDs that are only 3/4 full!
C’mon. You know you’ve done it. You’ve got 1.9 gigs worth of stuff to burn to CD. You know it should fit on three CDs. Half an hour later, you’re tired of trying to figure out how to make it fit and you just burn 500 megs’ worth on four CDs.
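Making files fit is the classic bin-packing problem, and a simple greedy pass gets you most of the way there. Here’s a hedged sketch (the file names and sizes are invented, and I’m assuming a 650-meg capacity for a standard 74-minute disc) of the first-fit-decreasing approach:

```python
def pack_discs(files, capacity=650):
    """First-fit decreasing: sort files largest-first, then drop each one
    into the first disc that still has room, opening a new disc if none does."""
    discs = []  # each entry: [megs_used, [file names]]
    for name, size in sorted(files, key=lambda f: -f[1]):
        for disc in discs:
            if disc[0] + size <= capacity:
                disc[0] += size
                disc[1].append(name)
                break
        else:  # no existing disc had room
            discs.append([size, [name]])
    return discs

# 1.9 gigs of hypothetical files that naive burning spreads across four discs
files = [("video", 600), ("backup", 500), ("mp3s", 400),
         ("photos", 200), ("docs", 100), ("misc1", 50), ("misc2", 50)]
```

Run on that list, the greedy pass fits all 1,900 megs onto three discs instead of four.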
Read more

Optimizing Web graphics

Gatermann told me about a piece of freeware he found on one of my favorite sites, tinyapps.org, called JPG Cleaner. It strips out the thumbnails and other metadata that editing programs and digital cameras put in your graphics, none of which your Web browser needs in order to render them. Sometimes it saves you 20K, and sometimes it saves you 16 bytes. Still, it’s worth doing, because more often than not it saves you something halfway significant.
That’s great, but I don’t want to be tied to Windows, so I went looking for a similar Linux program. There isn’t much out there. All I was able to find was a command-line program, written in 1996, called jpegoptim. I downloaded the source, but didn’t have the headers to compile it. I went digging and found that someone built an RPM for it back in 1997, but Red Hat never officially adopted it. I guess it’s just too special-purpose. The RPM is still floating around; I found it on a Japanese site. If that ever goes away, just do a Google search for jpegoptim-1.1-0.i386.rpm.

I used the Debian utility alien to convert the RPM to a Debian package. It’s just a 12K binary, so there’s nothing to installing it. If you run an RPM-based distribution like SuSE, TurboLinux, Mandrake, or Caldera, the RPM will install just fine for you, and Debian users can convert it with alien, no problem.

Jpegoptim actually goes a step further than JPG Cleaner. Aside from discarding all that metadata in the header, its main claim is that it optimizes the Huffman tables that make up the image data itself, reducing the image in size without affecting its quality at all. The difference varies; I ran it on several megabytes’ worth of graphics, and found that on images that still had all those headers, it frequently shaved 20-35K from their size. On images that didn’t have all the extra baggage (including some that I’d optimized with JPG Cleaner), it reduced the file size by another 1.5-3 percent. That’s not a huge amount, but on a 3K image, that’s 40-50 bytes. On a Web page that has lots of small images, those bytes add up. Your modem-based users will notice it.
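To show what the header-stripping half amounts to, here’s my own hedged sketch in Python; it’s not jpegoptim’s actual code, and it skips the Huffman-table work entirely. A JPEG file is a chain of marker segments, and the EXIF data, thumbnails, and comments live in segments you can simply drop:

```python
def strip_jpeg_metadata(data: bytes) -> bytes:
    """Drop APP1-APP15 and COM segments from a JPEG byte string.
    Each segment is 0xFF, a marker byte, then a big-endian length
    that counts itself; entropy-coded data after SOS is copied as-is."""
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")  # keep the SOI marker
    i = 2
    while i < len(data):
        marker = data[i + 1]
        if marker in (0xDA, 0xD9):  # SOS or EOI: copy the rest verbatim
            out += data[i:]
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i:i + 2 + length]
        # APP1..APP15 (0xE1-0xEF) hold EXIF data and thumbnails; COM (0xFE)
        # holds comments. Keep APP0 (JFIF) and all structural segments.
        if not (0xE1 <= marker <= 0xEF or marker == 0xFE):
            out += segment
        i += 2 + length
    return bytes(out)
```

Real JPEGs have wrinkles (padding bytes, for one) that jpegoptim handles and this sketch doesn’t, but the savings come from exactly this kind of pruning.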

And jpegoptim will also let you do standard lossy JPEG optimization, where you set the quality to a numeric value between 1 and 100, with higher values staying truest to the original. Some image editors don’t let you adjust the quality in a very fine-grained manner. I’ve found that a quality level of 70 is almost always perfectly acceptable.

So, to try to get something for nothing, change into an image directory and type this:

jpegoptim -t *

And the program will see what it can save you. Don’t worry if you get a negative number; if the “optimized” file ends up actually being bigger, it’ll discard the results.

To lower the quality and potentially save even more, do this:

jpegoptim -m70 -t *

And once again, it’ll tell you what it saves you. (The program always optimizes the Huffman tables, so there’s no need to do multiple steps.) Be sure to eyeball the results if you play with quality, and back up the originals.

Commercial programs that claim to do what these programs do cost anywhere from $50 to $100. This program may be obscure, but its obscurity is criminal. Go get it and take advantage of it.

Also, don’t forget the general rule of file formats. GIF is the most backward-compatible, but it’s encumbered by patents and limited to 256-color images. It’s good for line drawings and cartoons, because it’s a lossless format (it only compresses the data; it doesn’t change it).

PNG is the successor to GIF, sporting better compression and support for 24-bit color images. Like GIF, it’s lossless, so it’s good for line drawings, cartoons, and photographs that require every detail to be preserved. Unfortunately, not all browsers support PNG.

JPEG has the best compression, because it’s lossy. That means it looks for details that it can discard to make the image compress better. The problem with this is that when you edit JPEGs, especially if you convert them between formats, you’ll run into generation loss. Since JPEG is lossy, line drawings and cartoons generally look really bad in JPEG format. Photographs, which usually have a lot of subtle detail, survive JPEG’s onslaught much better. The advantage of JPEG is that the file sizes are much smaller. But you should always examine a JPEG before putting it on the Web; blindly compressing your pictures with high compression settings can lead to hideous results. There’s not much point in squeezing an image down to 1.5K when the result is something no one wants to look at.

A stupid BIND trick

My head’s still swimming from my crash course in BIND. I knew enough BIND to be dangerous–I’ve known how to set up a caching nameserver for years, and even stumbling through creating a master server for someone with a fixed IP address who wanted to host a domain wasn’t beyond me. Creating BIND servers for an enterprise isn’t too big of a deal, but creating one right can be.
After reading a lot, I set to the task.

Here’s a hint: If you’re migrating your servers from another OS to some Unixish OS and BIND, you can avoid re-keying all those zone files. (We’ve got more than 60 of the blasted things; our external server alone is 404K worth of configuration files. I didn’t bother to check the internal files.) Set your server to be a slave server to your current server. Be sure to comment out your allow-update line; BIND 9 will complain if you mention slave servers and updates in the same breath. Now restart BIND (/etc/init.d/bind9 restart in Debian 3.0; the command may be /etc/init.d/named restart or /etc/init.d/bind restart in other distros) and wait. In my case, the files started appearing within seconds, and within a couple of minutes, my server had downloaded all of them. Reset your server to master status, then find a few people to change their TCP/IP configuration to use it. Give it a day or two, and when you’re convinced that all is well, turn off DNS on the old server and put the new server in production.
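For reference, a slave zone stanza in named.conf looks something like this. This is a hypothetical example; the zone name, file name, and the 192.168.1.10 master address are all placeholders you’d swap for your own:

```
// Hypothetical example; substitute your own zone and the old server's IP.
zone "example.com" {
        type slave;
        file "db.example.com";
        masters { 192.168.1.10; };  // the old server being migrated away from
        // allow-update { ... };    // leave this commented out on a slave
};
```

Promoting the zone later is just a matter of changing type slave to type master and removing the masters line.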

Yes, my Linux box was perfectly capable of pulling DNS records from an NT-based DNS server. This is good. If you’re running DNS on NT currently, I wholeheartedly recommend you migrate away from it. Don’t waste clock cycles and network bandwidth on an expensive NT server. Grab a server-grade machine that’s too old to be a useful NT server and load Linux or some BSD variant on it. I know a company that ran BIND on some old 25 MHz DEC VAX workstations for years. That’s a bit too low-end to be comfortable, but if you’ve got server-grade 486-66s kicking around in a dusty corner somewhere, they’ll be adequate. A Pentium-133 will treat you a little bit better. A good rule of thumb: If the machine ever ran NT Server with any competence at all (even if it was in 1996), it’s got enough oomph to run BIND.

The nice thing about machines like that is that you usually have more than one of them and it doesn’t cost you anything to keep a hot spare. If one fails, unplug it and boot up the spare. Yes, DNS is mission-critical, but by definition it’s also redundant.

I’m shocked that there isn’t a single-floppy Linux distro that’s basically just Linux and BIND. Here’s a challenge for some sicko: Make a mini-distro incorporating BIND and Linux 1.09 so the minimum requirements will be a 386sx/16 with 2 megs of RAM and an NE2000 NIC.

I believe there are other slick BIND tricks, but I think I’ll wait and see if they work before I go touting a bunch of stuff that might not work.

Analysis of the Apple Xserve

Given my positive reaction to the Compaq Proliant DL320, Svenson e-mailed and asked me what I thought of Apple’s Xserve.
In truest Slashdot fashion, I’m going to present strong opinions about something I’ve never seen. Well, not necessarily opinions as strong as some of what you’re used to seeing from my direction. But still…

Short answer: I like the idea. The PPC is a fine chip, and I’ve got a couple of old Macs at work (a 7300 and a 7500) running Debian. One of them keeps an eye on the DHCP servers and mails out daily reports (DHCP on Windows NT is really awful; I didn’t think it was possible to mess it up but Microsoft found a way) and acts as a backup listserver (we make changes on it and see if it breaks before we break the production server). The other one is currently acting as an IMAP/Webmail server that served as an outstanding proof of concept for our next big project. I don’t know that the machines are really any faster than a comparable Pentium-class CPU would be, but they’re robust and solid machines. I wouldn’t hesitate to press them into mission-critical duty if the need arose. For example, if the door opened, I’d be falling all over myself to make those two machines handle DHCP, WINS, and caching DNS for our two remote sites.

So… Apples running Linux are a fine thing. A 1U rack-mount unit with a pair of fast PPC chips in it and capable of running Linux is certainly a fine thing. It’ll suck down less power than an equivalent Intel-based system would, which is an important consideration for densely packed data centers. I wouldn’t run Mac OS X Server on it because I’d want all of its CPU power to go towards real work, rather than putting pretty pictures on a non-existent screen. Real servers are administered via telnet or dumb terminal.

What I don’t like about the Xserve is the price. As usual, you get more bang for the buck from an x86-based product. The entry-level Xserve has a single 1 GHz PowerPC, 256 megs of RAM, and a 60-gig IDE disk. It’ll set you back a cool 3 grand. We paid just over $1300 for a Proliant DL320 with a 1.13 GHz P3 CPU, 128 megs of RAM, and a 40-gig IDE disk. Adding 256 megs of RAM is a hundred bucks, and the price difference between a 40- and a 60-gig drive is trivial. Now, granted, Apple’s price includes a server license, and I’m assuming you’ll run Linux or FreeBSD or OpenBSD on the Intel-based system. But Linux and BSD are hardly unproven; you can easily expect them to give you the same reliability as OS X Server, and possibly better performance.

But the other thing that makes me uncomfortable is Apple’s experience making and selling and supporting servers, or rather its lack thereof. Compaq is used to making servers that sit in the datacenter and run 24/7. Big businesses have been running their businesses on Compaq servers for more than a decade. Compaq knows how to give businesses what they need. (So does HP, which is a good thing considering HP now owns Compaq.) If anything ever goes wrong with an Apple product, don’t bother calling Apple customer service. If you want to hear a more pleasant, helpful, and unsuspicious voice on the other end, call the IRS. You might even get better advice on how to fix your Mac from the IRS. (Apple will just tell you to remove the third-party memory in the machine. You’ll respond that you have no third-party memory, and they’ll repeat the demand. There. I just saved you a phone call. You don’t have to thank me.)

I know Apple makes good iron that’s capable of running a long time, assuming it has a quality OS on it. I’ve also been around long enough to know that hardware failures happen, regardless of how good the iron is, so you want someone to stand behind it. Compaq knows that IBM and Dell are constantly sitting on the fence like vultures, wanting to grab its business if it messes up, and it acts accordingly. That’s the beauty of competition.

So, what of the Xserve? It’ll be very interesting to see how much less electricity it uses than a comparable Intel-based system. It’ll be very interesting to see whether Apple’s experiment with IDE disks in the enterprise works out. It’ll be even more interesting to see how Apple adjusts to meeting the demands of the enterprise.

It sounds like a great job for Somebody Else.

I’ll be watching that guy’s experience closely.

First look: The Proliant DL320

I’ve had the opportunity the past two days to work with Compaq’s Proliant DL320, an impossibly thin 1U rack-mount server. All I can say is I’m impressed.
When I was in college, a couple of the nearby pizza joints sold oversized 20″ pizzas. The DL320 reminded me of the boxes these pizzas came in. The resemblance isn’t lost on IBM: In its early ads for a competing product, I remember IBM using an impossibly thin young female model holding a 1U server on a pizza-joint set.

HP announced last week that Compaq’s Proliant series will remain basically unchanged; it will just be re-branded with the HP name. HP had no product comparable to the DL320.

I evaluated the entry-level model. It’s a P3 1.13 GHz with 128 MB RAM, dual Intel 100-megabit NICs, and a single 40-gigabyte 7200-rpm Maxtor/Quantum IDE drive. It’s not a heavy-duty server, but it’s not designed to be. It’s designed for businesses that need to get a lot of CPU power into the smallest possible amount of rack space. And in that regard, the DL320 delivers.

Popping the hood reveals a well-designed layout. The P3 is near the front, with three small fans blowing right over it. Two more fans in the rear of the unit pull air out, and two fans in the power supply keep it cool. The unit has four DIMM sockets (one occupied). There’s room for one additional 3.5″ hard drive, and a single 64-bit PCI slot. Obvious applications for that slot include a gigabit Ethernet adapter or a high-end SCSI host adapter. The machine uses a ServerWorks chipset, augmented by a CMD 649 for UDMA-133 support. Compaq utilizes laptop-style floppy and CD-ROM drives to cram all of this into a 1U space.

The fit and finish is very good. The machine looks and feels solid, not flimsy, which is a bit surprising for a server in this price range. Looks-wise, it brings back memories of the old DEC Prioris line.

The rear of the machine has a fairly spartan set of ports: PS/2 keyboard and mouse, two RJ-45 jacks, VGA, one serial port, and two USB ports. There’s no room for luxuries, and such things as a parallel port are of questionable value in this type of server anyway.

Upon initial powerup, the DL320 asks a number of questions, including what OS you want to run. Directly supported are Windows NT 4.0, Windows 2000, Novell NetWare, and Linux.

Linux installs quickly, and the 2.4.18 kernel directly supports the machine’s EtherExpress Pro/100 NICs, CMD 649 IDE, and ServerWorks chipset. A minimal installation of Debian 3.0 booted in 23 seconds, once the machine finished POST. After compiling and installing a kernel stripped of support for hardware the DL320 doesn’t have, that boot time dropped to 15 seconds. That’s less time than it takes for the machine to POST.

Incidentally, that custom kernel was a scant 681K in size, befitting a server with this kind of footprint.

As configured, the DL320 is more than up to the tasks asked of low-end servers, such as user authentication, DNS and DHCP, and mail, file and print services for small workgroups. It would also make a nice applications server, since the applications only need to load once. It would also be outstanding for clustering. For Web server duty or heavier-duty mail, file and print serving, it would be a good idea to upgrade to one of the higher-end DL320s that includes SCSI.

It’s hard to find fault with the DL320. At $1300 for an IDE configuration, it’s a steal. A SCSI-equipped version will run closer to $1900.

Recovery time.

Taxes. I think I’ve actually filed my taxes on time twice in my adult life. This year isn’t one of them. I filed Form 4868, so Tax Day for me is actually Aug. 15, 2002.
In theory Uncle Sam owes me money this year, so I shouldn’t owe any interest. I’ll have a professional accountant test that theory soon. Make that fairly soon, because it’d be nice to have that money, seeing as I expect to make the biggest purchase of my still-fairly-short life this year.

Some people believe filing a 4868 is advantageous. The thinking is this: Let the IRS meet its quota for audits, then file. That way, the only way you’re going to get audited is if you truly raise red flags, which I shouldn’t because I’m having a professional (and an awfully conservative one at that) figure the forms. That’s good. I’d rather not have to send a big care package off to the IRS to prove I’m not stealing from them.

Adventure. Steve DeLassus and I dove headlong into an adventure on Sunday, an adventure consisting of barbecue and Linux. I think at one point both of us were about ready to put a computer on that barbie.

We’ll talk about the barbecue first. Here’s a trick I learned from Steve: Pound your boneless chicken flat, then throw it in a bag containing 1 quart of water and 1 cup each of sugar and salt. Stick the whole shebang in the fridge while the fire’s getting ready. When the fire’s ready, take the chicken out of the bag and dry thoroughly. Since Steve’s not a Kansas Citian, he doesn’t believe in dousing the chicken in BBQ sauce before throwing it on the grill. But it was good anyway. Really good in fact.

Oh, I forgot. He did spray some olive oil on the chicken first. Whether that helps it brown or locks in moisture or both, I’m not quite sure. But olive oil contains good fats, so it’s not a health concern.

Now, Linux on cantankerous 486s may be a health concern. I replaced the motherboard in Steve’s router Sunday night, because it was a cranky 486SX/20. I was tired of dealing with the lack of a math coprocessor, and the system was just plain slow. I replaced it with a very late model 486DX2/66 board. I know a DX2/66 doesn’t have three times the performance of an SX/20, but the system sure seemed three times faster. Its math coprocessor, L2 cache, faster chipset, and much better BIOS helped. It took the new board slightly longer to boot Linux than it took the old one to finish counting and testing 8 MB of RAM.

But Debian wasn’t too impressed with Steve’s Creative 2X CD-ROM and its proprietary Panasonic interface. So we kludged in Steve’s DVD-ROM drive for the installation, and laughed at the irony. Debian installed, but the lack of memory (I scraped up 8 megs; Steve’s old memory wouldn’t work) slowed down the install considerably. But once Debian was up and running, it was fine, and in text mode, it was surprisingly peppy. We didn’t install XFree86.

It was fine until we tried to get it to act as a dialup router, that is. We never really did figure out how to get it to work reliably. It worked once or twice, then quit entirely.

This machine was once a broadband router based on Red Hat 6.1, but Red Hat installed way too much bloat, so it was slow whenever we did have to log into it. And Steve moved into the boonies, where broadband isn’t available yet, so it was back to 56K dialup for him. Now we know that dialup routers are much trickier to set up than dual-NIC routers.

After fighting it for nearly 8 hours, we gave up and booted it back into Freesco, which works reliably. It has the occasional glitch, but it’s certainly livable. Of course we want (or at least Steve wants) more features than Freesco can give you easily. But it looks like he’ll be living with Freesco for a while, since neither of us is looking forward to another marathon Debian session.

Nostalgia. A couple of articles on Slashdot got me thinking about the good old days, so I downloaded VICE, a program that can emulate almost every computer Commodore ever built. Then I played around with a few Commodore disk images. It’s odd what I do and don’t remember. I kind of remember the keyboard layout. I remembered LOAD “*”,8,1 loads most games (and I know why that works too, and why the harder-to-type LOAD “0:*”,8,1 is safer), but I couldn’t remember where the Commodore keyboard layout put the *.

I sure wish I could remember people’s names half as well as I remember this mesozoic computer information.

It stands on shaky legal ground, but you can go to c64.com and grab images for just about any Commodore game ever created. The stuff there is still covered by copyright law, but in many cases the copyright holder has gone out of business and/or been bought out several times over, so there’s a good possibility the true copyright holder doesn’t even realize it anymore. Some copyright holders may care. Others don’t. Others have probably placed the work in the public domain. Of course, if you own the original disks for any of the software there, there’s no problem in downloading it. There’s a good possibility you can’t read your originals anyway.

I downloaded M.U.L.E., one of the greatest games of all time. I have friends who swear I was once an ace M.U.L.E. player, something of an addict. I have absolutely no recollection of that. I started figuring out the controls after I loaded it, but nothing seemed familiar, that’s for sure. I took to it pretty quickly. The strategy is simple to learn, but difficult to master. The user interface isn’t intuitive, but in those days they rarely were. And in those days, not many people cared.

Planes, trains, and computers

Planes. I’m not as big of an airplane fanatic as my dad was, but then, no one is. It’s too bad he didn’t live to see the Web come of age, because I found some sites that would have made him want to use a computer. Aviationarchaeology documents military crash sites in the United States. It’s not complete (I know of an F-86 Sabre crash site at a remote spot in the Southwest that it doesn’t document), but it’s cool. I found another similar page.
And then there’s Urban’s military aviation weblog, a links collection that just has to be seen to be believed.

Trains. Gatermann sent this link to a streetcar, built in 1910, for sale on eBay. I asked him if he thought Metrolink would mind if we used it on their tracks. He said he didn’t think so.

Having a restored streetcar would be almost as cool as having a private Tu-144… And a whole lot safer.

Automobiles, er, computers. The P2 shell I ordered last week arrived yesterday. It was surprisingly well constructed. The motherboard and floppy drive were installed, and all cables were present, making it really easy to construct a complete system from it. I plugged in a 128-meg stick, attached a CPU fan to a Celeron-366 in a slotket and plugged it in, then I raided an old 486 for a video card, NIC, hard drive, and CD-ROM drive. The HD in the 486 had Debian 2.2 installed, so no further work was necessary. I plugged it in, turned it on, Debian booted, and it was fast.

This is the first time I’ve ever seen a P2-class machine with ISA video and network cards, but this thing’s going to be a low-volume Intranet server. Why waste a decent video card on it when the only thing it’s ever going to display is a logon prompt? Back before Microsoft brainwashed the world into putting GUIs on their servers, it was common practice to put ISA video cards in servers to conserve PCI slots for important things, like network cards and SCSI cards.

A Pentium-75 would do this job nicely, but I had a slotket, a CPU, and a 128-meg stick, and the barebones system cost $40 delivered. I’d have needed 32 megs of 72-pin memory to bring up a Pentium-75 to do this job, and it would have cost more than that.

At any rate, if you want to build your own dirt-cheap P2, you can get the case/ps/mobo/floppy combo for $20 and a P2-233 for $17 at Compgeeks.com. As for hard drives, a 2.5-gig job will run you $26 and it goes on up from there. They don’t have any dirt-cheap video cards there, unfortunately. You can go to Computer Surplus Outlet for that. I wouldn’t trust either place’s memory, so go to Crucial for that. If you have some parts laying around from upgrades past, you can have a complete system cheap. If you don’t have parts, you’re better off just buying a complete P2. You can get a Dell P2-233/32MB for $79, including CD-ROM and NIC.

I’m really curious how a lab full of P2-233s running Linux as one big OpenMosix cluster would perform…

And baseball. Can’t leave that out. I just read that Cookie Rojas is coaching for the Toronto Blue Jays. So when are the Royals going to get rid of Tony Loser and put Cookie at the helm?

As for Stinky the Frenchman’s comments the other day comparing rooting for the Royals to rooting for the cars at a monster truck rally, does anyone else find it ironic that a supposed French nobleman would talk with an air of superiority about “American Cricket,” then go compare my favorite team to a monster truck rally? How does he know about monster truck rallies?

Technobabble

Grisoft AVG works as advertised. If you don’t want to pay for virus protection, do yourself and your friends a favor and head over to Grisoft and download the free edition of AVG. I used it Monday night to disinfect a friend’s PC that had become infected by the infamous KAK virus.
Free-for-personal-use anti-virus tools have a nasty habit of becoming un-free within a year or two of their release, but look at it this way: AVG at least saves you a year or two of paying for virus update subscriptions.

It’s not as whiz-bang as the tools from Norton or McAfee, but it works. The scheduling options aren’t as fine-grained, but that doesn’t matter much; you can still schedule scans and updates, it finds and isolates the viruses, and you can’t beat the price. Go get it.

Linux on vintage P2s. I helped Gatermann get Debian up and running on his vintage HP Kayak workstation last night. This is an early P2-266 workstation. Gatermann marveled at how it was put together and at the caliber of the components in it. It had a high-end (for its time) Matrox AGP card in it, plus onboard Adaptec Wide SCSI, 128 MB of ECC SDRAM, and a 10,000-RPM IBM Wide SCSI hard drive. It arrived stripped of its original network card; Gatermann installed an Intel EtherExpress Pro.

In its day, this was the best Intel-based workstation money could buy, and you needed a lot of it. Of course, back in that day I was working on the copydesk of a weekly magazine in Columbia, Mo. and chasing a girl named Rachel (who I would catch, then lose, about a year later). And I probably hadn’t turned 22 yet either. Needless to say, that was a while ago. It seems like 100 years ago now.

Today, the most impressive thing about the system is its original price tag, but it remains a solidly built system that’s very useful and very upgradable. He can add another CPU, and depending on what variation his particular model is, he can possibly upgrade to as much as a P2-450. A pair of 450s is nothing to turn your nose up at. And of course he can add a variety of SCSI hard drives to it.

Debian runs fine on the system; its refusal to boot the install CD doesn’t bother me too much. I occasionally run across systems that just won’t boot a Linux CD, but once I manage to get them running (either by putting the drive in another PC for the installation process or by using a pair of boot floppies to get started) they run fine.

The system didn’t want to boot Debian on CD, or any other Linux for that matter. So we made a set of boot floppies, then all was well.

The batch that this computer came from is long gone, but I expect more to continue to appear on the used market as they trickle out of the firms that bought them. They are, after all, long since obsolete for their original purpose. But they’re a bargain. These systems will remain useful for several years, and are built well enough that they probably will be totally obsolete before they break.