How to get started in the IT industry

Someone asked me today what I do.
“You know those computers the size of a dorm fridge? I work on those,” I said. And yes, sometimes that means crawling around and sticking my head inside one, I added.

“Aren’t you afraid you’ll break something?”

“Oh, I break stuff all the time,” I said. “Then I fix it.”

And that reminded me of a story.

One afternoon when I was about 17, my Dad came home and found me in the basement, with our computer disassembled–completely–and sprawled out across his table. His eyes got really big. “You gonna be able to put that thing back together?” he asked, without much hope in his voice. “Sure!” I said. He watched me pry a ROM chip out with a screwdriver, pop in a new one, then reinstall all the drives, the power supply, and the expansion cards and replace the cover. I didn’t have any pieces left over, which I think he took as a good sign, and then he watched me hook the keyboard and monitor back up. I fired it up, and it worked perfectly.

I still had an awful lot to learn though. At work, you’re supposed to have pieces left over. What, you think employers actually buy their techs those great PCs on their desks? Ha! And a tip for you beginners: After the new system performs like a swamp sump because you just swapped out all the good parts for parts from the old clunker on your desk, just blame the disappointing performance on Microsoft bloatware. Works every time.

(And when your boss starts asking how all these parts with Micron stickers on them ended up inside a Compaq server, just say, “Dan must have done it.” Who’s Dan? Who cares! He’s not you!)

Nobody knew that I knew how to do that. Every once in a while when I was the only one home, I’d take the computer apart. That afternoon, I just happened to get caught.

Upgrading an eMachine

One of the most common search engine hits on this site involves the words “emachine” and “upgrade” or “upgrades.”
There are a number of things to keep in mind. Some of this advice holds for low-end units from Compaq and Gateway and the like as well.

First things first: eMachines don’t have the best reputation. The majority of their problems are due to the power supply though. Aftermarket replacements are readily available, and I recommend them. Don’t buy a factory replacement; it’ll just fail again like the original. A quality replacement from Sparkle or PC Power & Cooling will run you less than $50. I’ve seen 180-watt Sparkles go for $35. The stock 145-watt unit is barely adequate and isn’t of the highest quality. If I bought an eMachine, I’d buy an aftermarket power supply and install it as soon as I could. I wouldn’t wait for the factory unit to fail.

If I had an eMachine I wanted to upgrade, I’d track down a PCI video card. The problem with integrated video on a lot of motherboards is that the CPU and video chip have to share memory bandwidth. What’s that mean? Part of the time, your nice 64-bit memory bus is reduced to 32 bits, that’s what. Steve DeLassus told me a couple of years ago about putting a cheap PCI ATI video card in his wife’s Compaq, which had integrated video, and everything about the system sped up, dramatically. I made fun of him. But it wasn’t his imagination. I was wrong, and the explanation is simple: After he disabled the onboard video, he finally got the computing power they paid for.

Besides that, any add-on card is going to be faster than the integrated video in anything but an nVidia chipset anyway. Last I checked, eMachines weren’t using nVidia nForce chipsets for anything. If you’re into 3D gaming, you shouldn’t have bought an eMachine in the first place, but look for a PCI card with an nVidia chipset. If you’re just into word processing and e-mail, something like an ATI Xpert98 will do nicely. Yeah, it’s an old card, but it’s still more than adequate for 2D applications, and it’s cheap.

If you’re wondering if your system’s integrated video is holding you back, the best tell-tale sign to look for is called “shared memory.” Enter your PC’s setup program and look for an adjustable amount of shared memory. If you find that setting, you’ll almost certainly benefit from disabling it and plugging in a video card.

The next thing I’d look to do is replace the hard drive. Hard drive speed is significant, and sub-$500 PCs don’t come with blazing drives. Pick up a 7200-rpm drive of adequate capacity. They’re not expensive–you can be in business for under a hundred bucks. The performance difference is dramatic. Most retail-boxed drives even come with all the software you need to move all your data to the new drive. CompUSA frequently has something on sale. I prefer Maxtor drives over Western Digital because they’re faster and more reliable; CompUSA’s house-brand drives are just repackaged Maxtors, so those are fine as long as you can find a 7200-rpm model.

The modems that came in eMachines are worthless. If you don’t have broadband yet, replace yours with a USRobotics 2977 modem immediately. That factory modem is costing you 35% of your CPU power. The USR will give that back, give you better throughput on top of it, and cost you $40 at newegg.com. Good deal. But don’t settle for anything less than that–any modem that costs less than $40 is going to have the same problems as the factory modem.

Most eMachines can take more memory, but a lot of eMachines already shipped with adequate memory. There’s rarely any reason to put more than 256 MB in a PC. If your machine doesn’t have 256 megs, you can pick up a 256-meg stick pretty cheaply.

Most eMachines can take a faster processor, but I rarely bother. Unless you can increase your clock speed by 50%, you’re not likely to really notice the difference. Doubling is better. You’ll get better results from adding a video card and a faster hard drive.

Likewise, a high-end sound card from the likes of Creative or Turtle Beach can reduce the amount of work your CPU has to do and give you much better-sounding audio than what your eMachine has on the motherboard, but is it worth putting a $100 sound card in a computer you paid $399 for?

It’s easy to see how you can very quickly spend $300 on upgrades for a computer that originally cost $399. That makes it hard to justify when you could just buy a new $399 computer. So should you do it? It depends. Don’t spend more than half the price of a new computer to upgrade an old one. But also keep in mind that a new computer won’t come with first-rate components, and the aftermarket parts you’re buying are first rate, or very close to it. If that PC you’re looking to upgrade has a 600 MHz processor or faster, it’s likely that when it’s upgraded, it’ll hold its own with a new computer. In that case, you should think about it.

But if you’ve got a four-year-old eMachine with a 300 MHz processor in it, you’re better off buying something new. When you can buy a 900-MHz PC without an operating system from walmart.com for $299, it’s just not worth wasting your time. Load your eMachine’s copy of Windows on the new computer and stick the eMachine in a closet somewhere as a spare. Or pony up a couple hundred bucks more to pick up a brand-name PC with Windows and a monitor, then get a couple of network cards and network your computers together. Your family will appreciate being able to share a printer and an Internet connection. If you pay a little extra to get wireless cards, the computers don’t even have to be close to each other.

One last thing: A lot of people sniff at eMachines. Yes, they are cheaply made. But aside from the skimpy power supply, they’re not all that bad. Replace it, and you’ve got a lot of computer for the money. Packard Bell did a lot to ruin the reputation of cheap computers in the 1990s, but the problems they had were mostly due to skimpy power supplies that came in odd sizes, so there weren’t many aftermarket replacements, and to junky integrated modems and/or combo modem/sound cards that did both jobs poorly, killing system performance and causing software incompatibilities. Today’s highly integrated motherboards have eliminated that combo sound/modem problem.

I know I malign the company all the time, but in all honesty, once you put real modems and sound cards into Packard Bells, they did OK as long as the power supply held up. I’ve got an old Packard Bell P120 with Debian Linux loaded on it. I ripped out the sound card/modem combo. I left the power supply alone because it looked decent. The machine’s run several years for me without any problems. Of course I covered up the Packard Bell logos on it.

Today, the same holds true of an eMachine–it’s just the power supply and video card you have to worry about now.

Dude! I’m getting a… Packard Bell!

Oh wait. No, I’m thinking of Steve. Although he and I did just get identical Dell Optiplex GX1 P2-450 workstations to use as Web servers. We learned a little bit about them too.

Analysis of the Apple Xserve

Given my positive reaction to the Compaq Proliant DL320, Svenson e-mailed and asked me what I thought of Apple’s Xserve.
In truest Slashdot fashion, I’m going to present strong opinions about something I’ve never seen. Well, maybe not opinions as strong as some of what you’re used to seeing from my direction. But still…

Short answer: I like the idea. The PPC is a fine chip, and I’ve got a couple of old Macs at work (a 7300 and a 7500) running Debian. One of them keeps an eye on the DHCP servers and mails out daily reports (DHCP on Windows NT is really awful; I didn’t think it was possible to mess it up but Microsoft found a way) and acts as a backup listserver (we make changes on it and see if it breaks before we break the production server). The other one is currently acting as an IMAP/Webmail server that served as an outstanding proof of concept for our next big project. I don’t know that the machines are really any faster than a comparable Pentium-class CPU would be, but they’re robust and solid machines. I wouldn’t hesitate to press them into mission-critical duty if the need arose. For example, if the door opened, I’d be falling all over myself to make those two machines handle DHCP, WINS, and caching DNS for our two remote sites.

So… Apples running Linux are a fine thing. A 1U rack-mount unit with a pair of fast PPC chips in it and capable of running Linux is certainly a fine thing. It’ll suck down less power than an equivalent Intel-based system would, which is an important consideration for densely-packed data centers. I wouldn’t run Mac OS X Server on it because I’d want all of its CPU power to go towards real work, rather than putting pretty pictures on a non-existent screen. Real servers are administered via telnet or dumb terminal.

What I don’t like about the Xserve is the price. As usual, you get more bang for the buck from an x86-based product. The entry-level Xserve has a single 1 GHz PowerPC, 256 megs of RAM, and a 60-gig IDE disk. It’ll set you back a cool 3 grand. We paid just over $1300 for a Proliant DL320 with a 1.13 GHz P3 CPU, 128 megs of RAM, and a 40-gig IDE disk. Adding 256 megs of RAM is a hundred bucks, and the price difference between a 40- and a 60-gig drive is trivial. Now, granted, Apple’s price includes a server license, and I’m assuming you’ll run Linux or FreeBSD or OpenBSD on the Intel-based system. But Linux and BSD are hardly unproven; you can easily expect them to give you the same reliability as OS X Server and possibly better performance.

But the other thing that makes me uncomfortable is Apple’s experience making and selling and supporting servers, or rather its lack thereof. Compaq is used to making servers that sit in the datacenter and run 24/7. Big businesses have been running their businesses on Compaq servers for more than a decade. Compaq knows how to give businesses what they need. (So does HP, which is a good thing considering HP now owns Compaq.) If anything ever goes wrong with an Apple product, don’t bother calling Apple customer service. If you want to hear a more pleasant, helpful, and unsuspicious voice on the other end, call the IRS. You might even get better advice on how to fix your Mac from the IRS. (Apple will just tell you to remove the third-party memory in the machine. You’ll respond that you have no third-party memory, and they’ll repeat the demand. There. I just saved you a phone call. You don’t have to thank me.)

I know Apple makes good iron that’s capable of running a long time, assuming it has a quality OS on it. I’ve also been around long enough to know that hardware failures happen, regardless of how good the iron is, so you want someone to stand behind it. Compaq knows that IBM and Dell are constantly sitting on the fence like vultures, wanting to grab its business if it messes up, and it acts accordingly. That’s the beauty of competition.

So, what of the Xserve? It’ll be very interesting to see how much less electricity it uses than a comparable Intel-based system. It’ll be very interesting to see whether Apple’s experiment with IDE disks in the enterprise works out. It’ll be even more interesting to see how Apple adjusts to meeting the demands of the enterprise.

It sounds like a great job for Somebody Else.

I’ll be watching that guy’s experience closely.

First look: The Proliant DL320

I’ve had the opportunity the past two days to work with Compaq’s Proliant DL320, an impossibly thin 1U rack-mount server. All I can say is I’m impressed.
When I was in college, a couple of the nearby pizza joints sold oversized 20″ pizzas. The DL320 reminded me of the boxes these pizzas came in. The resemblance isn’t lost on IBM: In its early ads for a competing product, I remember IBM using an impossibly thin young female model holding a 1U server on a pizza-joint set.

HP announced last week that Compaq’s Proliant series will remain basically unchanged; it will just be re-branded with the HP name. HP had no product comparable to the DL320.

I evaluated the entry-level model. It’s a P3 1.13 GHz with 128 MB RAM, dual Intel 100-megabit NICs, and a single 40-gigabyte 7200-rpm Maxtor/Quantum IDE drive. It’s not a heavy-duty server, but it’s not designed to be. It’s designed for businesses that need to get a lot of CPU power into the smallest possible amount of rack space. And in that regard, the DL320 delivers.

Popping the hood reveals a well-designed layout. The P3 is near the front, with three small fans blowing right over it. Two more fans in the rear of the unit pull air out, and two fans in the power supply keep it cool. The unit has four DIMM sockets (one occupied). There’s room for one additional 3.5″ hard drive, and a single 64-bit PCI slot. Obvious applications for that slot include a gigabit Ethernet adapter or a high-end SCSI host adapter. The machine uses a ServerWorks chipset, augmented by a CMD 649 for UDMA-133 support. Compaq utilizes laptop-style floppy and CD-ROM drives to cram all of this into a 1U space.

The fit and finish is very good. The machine looks and feels solid, not flimsy, which is a bit surprising for a server in this price range. Looks-wise, it brings back memories of the old DEC Prioris line.

The rear of the machine has a fairly spartan set of ports: PS/2 keyboard and mouse, two RJ-45 jacks, VGA, one serial port, and two USB ports. There’s no room for luxuries, and things like a parallel port are of questionable value in this type of server anyway.

Upon initial powerup, the DL320 asks a number of questions, including what OS you want to run. Directly supported are Windows NT 4.0, Windows 2000, Novell NetWare, and Linux.

Linux installs quickly and the 2.4.18 kernel directly supports the machine’s EtherExpress Pro/100 NICs, CMD 649 IDE, and ServerWorks chipset. A minimal installation of Debian 3.0 booted in 23 seconds, once the machine finished POST. After compiling and installing a kernel stripped of support for hardware the DL320 doesn’t have, that boot time dropped to 15 seconds. That’s less time than it takes for the machine to POST.

Incidentally, that custom kernel was a scant 681K in size, fitting for a server with this kind of footprint.
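For the curious, trimming a 2.4 kernel doesn’t take much. Here’s the generic recipe, assuming the source tree lives in /usr/src/linux and the machine boots with LILO (Debian’s make-kpkg route works too, but this is the lowest common denominator):

cd /usr/src/linux                # wherever the kernel source is unpacked
make menuconfig                  # switch off drivers for hardware the box doesn't have
make dep && make clean           # 2.4 kernels still need the dep pass
make bzImage && make modules && make modules_install
cp arch/i386/boot/bzImage /boot/vmlinuz-custom   # the name is arbitrary
# add an image entry for /boot/vmlinuz-custom to /etc/lilo.conf, then:
lilo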

As configured, the DL320 is more than up to the tasks asked of low-end servers, such as user authentication, DNS and DHCP, and mail, file and print services for small workgroups. It would also make a nice applications server, since the applications only need to load once. It would also be outstanding for clustering. For Web server duty or heavier-duty mail, file and print serving, it would be a good idea to upgrade to one of the higher-end DL320s that includes SCSI.

It’s hard to find fault with the DL320. At $1300 for an IDE configuration, it’s a steal. A SCSI-equipped version will run closer to $1900.

Full disclosure and integrity

I feel like I owe it to my readers to disclose a few things, due to the events of recent weeks raising a few questions in some people’s minds.


The penguins are coming!

The penguins are coming! Word came down from the corner office (the really big corner office) that he wants us to get really serious about Linux. He sees Linux as a cheap and reliable solution to some of the problems some outside clients are having. This is good. Really good.
My boss asked if it would be a capable answer to our needs, namely, for ISP-style e-mail and for Web caching. But of course. Then he asked if I was interested in pursuing it. Now that’s a silly question.

Now it could be that FreeBSD would be even better, but I know Linux. I don’t know FreeBSD all that well. I’ve installed it once and I was able to find my way around it, but I can fix Linux much more quickly. The two of us who are likely to be asked to administer this stuff both have much more Linux experience than we have BSD experience. Plus you can buy Linux support; I don’t know if you can buy FreeBSD support. I doubt we will, but in my experience, clients want to know (or at least think) that some big company is standing behind us. They’re more comfortable if we can buy support from IBM.

So maybe my days of Linux being a skunkworks project are over. The skunkworks Linux boxes were really cleverly disguised too–they were Macintoshes. They’re still useful for something I’m sure. I expect I’ll draft one of them for proof-of-concept duty, which will save us from having to pull a Compaq server from other duty.

I spent a good portion of the day installing Debian 3.0 on an old Micron Trek 2 laptop. It’s a Pentium II-300 with 64 megs of RAM. It boots fast, but current pigware apps tend to chew up the available memory pretty fast. I recompiled the kernel for the hardware actually in the machine and it helped some. It’s definitely useful for learning Linux, which is its intended use.

I’ve noticed a lot of people interested in Linux lately. One of our NT admins has been browsing my bookshelf, asking about books, and he borrowed one the other day. Our other NT admin wants to borrow it when he’s done with it. The Trek 2 I installed today is for our senior VMS admin, who wants a machine to learn with. My boss, who’s been experimenting with Linux for a couple of years, has been pushing it aggressively of late.

I don’t know if this situation is unique, but it means something.

I spent a good part of the evening at the batting cages. I messed my timing up something fierce. I hit the first few pitches to the opposite field, some of them weakly, but soon I was hitting everything–and I mean everything–to the third-base side. So my bat speed came back pretty fast, and I was getting way out in front of a lot of the pitches. So I started waiting on the ball longer, hoping to start hitting the ball where it’s pitched. The end result was missing about a quarter of the time, slashing it foul to the third-base side a quarter of the time, hitting it weakly where it was pitched a quarter of the time, and hitting it solidly where it was pitched a quarter of the time. Good thing the season doesn’t start until June–I’ve got some work to do.

Afterward, I drove to my old high school, hoping to be able to run a lap or two around the track. I was hoping for two; realistically I knew I’d probably be doing well to manage one. There was something going on there, and I couldn’t tell if the track was in use or not, so I kept driving. Eventually I ended up at a park near my apartment. I parked my car, found a bit of straightaway, and ran back and forth until I was winded. It didn’t take long.

I can still run about as fast as I could when I was a teenager, but my endurance is gone. I’m hoping I can pick that back up a little bit. I was a catcher last season, filling in occasionally at first base and in left field. In the league I play in, we usually play girls at second and third base, and we’ve got a couple of guys who can really play shortstop, so I’ll probably never play short. When I was young I played mostly left field and second. I’d like to roam left field again. Not that I mind catching, but there’s a certain nostalgia about going back to my old position.

Optimizing a Linux box in-place

Here’s the Linux bit I promised yesterday. I wrote it much earlier, so I might as well throw it out there.
Our test firewall at work is an old Pentium-200 running Red Hat Linux and a commercial firewall app. (No, I won’t disclose which one. Security, you know.) It’s a bit slow. A P200 is severe overkill for the firewall built into the Linux kernel (Steve DeLassus and I made a firewall out of the first PC he ever bought, a 486SX/20 of 1992 vintage, which, save the loss of the original power supply in an electrical storm, has never required any service), but this commercial package does a lot more than the simple firewalls built into Unixish kernels do.
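For reference, the firewall built into a 2.4 kernel really is that simple. A minimal sketch, assuming the outside world is on eth0 and the LAN is on eth1 (both assumptions on my part, so adjust to match the box), looks something like this:

# eth0 = outside interface, eth1 = LAN; change to suit
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward   # turn on forwarding last

The commercial package layers application-level smarts on top of that, which is why it needs a real CPU and real memory.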

It had 72 megs of RAM in it and swapped mercilessly. Its speed seemed to be OK once it was booted, but seeing as this is a testbed, it tends to get rebooted an awful lot. I needed to do something for it.

So I trekked into the PC graveyard to see what I could dig up. I found a Compaq 386DX/20. I left that alone. That’ll be useful if I ever need to pillage a pair of Compaq drive rails, which has happened before. Unfortunately those rails are worth more than the rest of the computer. I also spotted a Mac SE. That’ll be handy if I ever need a doorstop. Then I found a Pentium-75 and another Pentium of unknown speed. I opened them up. The 75 had a pair of 16-meg sticks. I opened up the unknown Pentium and looked inside. Ugh. Socket 4. That meant it was a Pentium-60, or, at best, a Pentium-66. It had a pair of 8-meg sticks.

I pulled the memory sticks out of the 75. The 60 didn’t have anything usable in it, save a pair of hard drives, both 540 megs, one a Quantum and the other a Seagate. I took the Seagate because it was easier to unbolt. I don’t have any way of knowing at this late date which of those drives was the better performer, and it probably doesn’t make much difference anymore.

The idea was to add some memory, and put in a second hard drive dedicated to virtual memory. Since the likelihood of the machine needing to read data from a drive and simultaneously hit virtual memory was fairly high, I wanted the virtual memory on its own drive. Furthermore, Linux’s partition-read mechanism isn’t terribly efficient. This doesn’t matter for SCSI drives, which re-order I/O events, but for IDE drives it matters a lot. So getting the swap partition onto a dedicated drive was likely to improve performance a fair bit. (If this were a production system, it would probably have a SCSI drive in it.)

So I swapped in the 16s for the 4s and found an empty bay to hold the 540, which I put on the second IDE channel as master (another performance trick), and booted Linux. The next trick is to use your favorite disk partitioning tool (I like cfdisk, but I can navigate plain old fdisk) to blow away whatever partition is on the new drive (this one was /dev/hdc) and create a single partition. I just made it the size of the drive, since 2.4 can deal with large swap partitions and Linux is smart enough to use whatever virtual memory it needs, not just automatically use all it has available. Then I set it to type 82. Linux can do swapfiles, but a filesystemless dedicated swap partition gives better performance.
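If you’d rather not step through a menu, sfdisk will do the same thing non-interactively. This is just a sketch, and it assumes the new drive really is /dev/hdc, so double-check before you let any partitioning tool loose on a disk:

echo ",,82" | sfdisk /dev/hdc    # one partition spanning the whole drive, type 82 (Linux swap)
sfdisk -l /dev/hdc               # sanity-check the result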

Next, I edited /etc/fstab. I found an entry for the swap partition pointing at /dev/hda2. I changed that to /dev/hdc1. That means I now have a small swap partition just sitting on the first drive unused, but that’s not a big deal to me. The system’s not using the disk space it has. While I was there, I noticed the CD-ROM drive was pointing at /dev/cdrom. I asked Charlie, our Unix/Linux guru, if Red Hat had some intelligence I didn’t know about. He said /dev/cdrom was just a symlink. I changed the entry to read /dev/hdd, which is where the CD-ROM drive ended up after my shuffle. Better to just code things directly than try to track symlinks, in my estimation.
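The two lines in question ended up looking something like this (your device names and mount point may differ):

/dev/hdc1   swap         swap     defaults           0 0
/dev/hdd    /mnt/cdrom   iso9660  noauto,owner,ro    0 0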

Next, I issued the command mkswap /dev/hdc1 to initialize the swap partition. Then I rebooted and listened.
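Strictly speaking the reboot is optional; swapon will press the new partition into service immediately, and /proc/swaps will tell you it took:

swapon /dev/hdc1
cat /proc/swaps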

Indeed, during boot, the second drive was getting activity. I logged in and ran top, then hit shift-M to have a look at memory usage. The firewalling software was eating up a lot. But swap usage was down.

I decided to try cutting memory usage down a little more. I loaded /etc/inittab into vi. Red Hat by default gives you six virtual consoles. This machine has little need for more than two. Pulling the extras saves you a couple of megs. Near the end of the file you’ll see several lines that look something like this:

1:2345:respawn:/sbin/mingetty tty1

I commented out the last four of those. Scroll down to each of those lines, hit the i key to put vi in insert mode, type a # at the beginning of the line, then hit ESC. Once all four are done, hit ZZ (shift-Z twice) to save the file and quit, no questions asked. (I know, vi ain’t friendly, but it’s there.)
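When I was done, the tail end of the file looked something like this:

1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
#3:2345:respawn:/sbin/mingetty tty3
#4:2345:respawn:/sbin/mingetty tty4
#5:2345:respawn:/sbin/mingetty tty5
#6:2345:respawn:/sbin/mingetty tty6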

Then I had a look at /etc/rc3.d to see what daemons were running. I found apmd, sendmail, and gpm running. That was a waste of a couple megs, not to mention a possible security risk. I vaguely remember all three of them having had security issues in the past, and sendmail is one of those programs that should never be running unless you need it. Yes, this machine’s just practice, but Hall of Fame catcher Johnny Bench found that if he got sloppy and just let wild pitches go while he was warming up pitchers, he wasn’t as sharp at blocking potential wild pitches during the game when it counted. So he worked just as hard during practice as he did during the game. Now he’s considered the greatest catcher of all time.

So I applied the Johnny Bench principle and disabled them with the following command sequence:

mv /etc/rc3.d/S26apmd /etc/rc3.d/K26apmd
mv /etc/rc3.d/S80sendmail /etc/rc3.d/K80sendmail
mv /etc/rc3.d/S85gpm /etc/rc3.d/K85gpm
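Red Hat’s chkconfig tool does the same bookkeeping if you’d rather not rename the links by hand (it flips the links in every standard runlevel, not just runlevel 3, which is usually what you want anyway):

chkconfig apmd off
chkconfig sendmail off
chkconfig gpm off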

I rebooted to find memory usage down by about 4 megs and the system booted a little faster. It was also more secure.

Total downtime: About 45 minutes.

That was time well spent. I may end up having to just bite the bullet and get some memory, but the system will perform better with these changes no matter how much memory is in it. And, more importantly, performing this exercise made me notice something I hadn’t noticed before. It let me tighten up security.

Had I blindly just ordered some memory to put in the system, or a new PC, like some people unfortunately advocate, I wouldn’t necessarily have noticed that as quickly.


Speaking of Linux, I did finally get Apache, PHP, and MySQL all talking together on my church’s 486. I used phpWeblog, which is an awfully nice package. Pages load in an acceptable two seconds. I notice the machine is paging, so a little more memory will probably help that. It’s amazing that people are throwing away Pentium-class machines when even a 486 has enough power to be a decent intranet server.
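If you’re trying the same thing and want a quick sanity check that the pieces are talking to each other, something like this works. The database and user names below are just placeholders for whatever phpWeblog’s setup had you create, and the document root varies by distribution:

mysql -u phpweblog -p -e "show tables" phpweblog      # placeholder names; proves MySQL is up and the account works
echo '<?php phpinfo(); ?>' > /var/www/test.php        # adjust the path to match your Apache document root
lynx -dump http://localhost/test.php | grep -i mysql  # the MySQL section only appears if PHP has MySQL support compiled in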

Not everyone’s so fortunate as you and me. Give ’em to someone who can use them if you don’t want them.

How Linux could own the education market

How Linux could own the education market. I spent some time yesterday evening working on computers. The two were a study in contrasts: One, a brand-spankin’ new 1 GHz AMD Duron system with 512MB of RAM and 80 GB of 7200-rpm storage (IDE, unfortunately–but for $800, what do you want?). The other was an elderly AST 486SX/25 running Windows 3.1 belonging to a local teacher who goes to my church.
She teaches kindergarten, and the AST used to be her home computer. When she bought a Compaq Presario a couple of years ago, she took the AST to school. It’s more useful there than in her basement, and there’d be no computer in her classroom if it weren’t for that.

I don’t understand why that is. As much as my sister jokes about it, we don’t exactly live in the ghetto. The school district has money, but it isn’t spending it on computers. Whether that’s a good or bad thing depends on your point of view. The majority of people living in Oakville probably own home computers, so this probably isn’t contributing to the technology gap. But I wonder sometimes how things might have been if I’d been exposed to computers a few years earlier.

I was shocked at how much I remembered about Windows 3.1. And I was able to figure out how to get her CD-ROM drive to play music CDs. Don’t ask me how; this was the first I’d messed with Windows 3.1 since 1994 and I’d prefer it stay that way–I was so impressed by Windows 3.1 that I’m one of the 12 people who actually went out and paid money for OS/2. I own actual, retail-box copies of OS/2 2.1, 3.0, and 4.0. And I remember distinctly thinking that her computer had enough memory to run OS/2 at least as well as it ran Windows 3.1…

I also remember distinctly thinking that my employer pays someone $15 a pound to haul better computers than hers away several times a year. We regard 486s as junk; low-end Pentiums may also go out, depending on whether the right person finds out about them beforehand. Usually they work just fine–the problem isn’t the computers, it’s people trying to run Internet Exploiter 6 and Office 2000 on them. They’d run Windows 95 and Office 95 perfectly fine.

But a lot of times we can’t give these old computers away because the licenses for the software that originally came with them are long gone. Old computers are useless without software, so no one would want them anyway.

Now, let me tell you something about kids. Kids don’t care much about the computers they use. As long as there’s software on them, they’ll use them. When I was a kid 20 years ago, I used Radio Shack TRS-80 computers at school. The next year, my family moved, and my new school had Commodore 64s. I couldn’t tell much difference. My next-door neighbor had a Radio Shack Color Computer. They were computers. The Commodores had better graphics, but from a usability standpoint, the biggest difference was where the cartridge slot was so you could change programs. Later on I took a summer class at the local junior college, learning about Apple IIs and IBM PCs. I adjusted smoothly. So did all the other kids in the class. Software was software.

Kids don’t care if the computer they’re using runs Windows or Mac OS or Linux. All they care about is whether there are cool programs to run.

So, businesses throw useless computers away, or they give useless computers to schools so they don’t have to pay someone to haul them away. And schools don’t generally know what to do with obsolete computers that lack software.

Linux won’t run fabulously on old 486s, but Debian with a lightweight window manager like IceWM will run OK. (Let’s face it, Windows 3.1 doesn’t run fabulously on them either–it crashes if you breathe wrong.) I know of a project to clone Oregon Trail on Linux. Great start. How about Sea Route to India? I remember playing that on C-64s at school. It may have been a type-in out of a magazine–I don’t remember where exactly it came from. In these violent times, Artillery might be too controversial, but it taught us early on about angles and forces. Artillery was an ancestor to games like Scorched Earth, but without the heavy-duty nukes. Close wasn’t good enough to win in Artillery. You had to be exact. And no blowing up the mountains between you and your opponents either. You had to figure out how to get over them.

But what about doing homework? By the time I was in the sixth grade, they were teaching us how to use word processors and databases and spreadsheets. AbiWord is a fabulous lightweight word processor. It gives you fonts and spell-checking and good page formatting. (I learned word processing on Bank Street Writer. AbiWord is a far, far cry from that. Frankly, I’d rather write a paper with vi than with Bank Street Writer.) Besides being feature-rich, AbiWord’s been lightning fast on every computer I’ve tried it on. Gnumeric is a nice, fast, capable spreadsheet. I don’t know of a free-form database, but I haven’t looked for one lately either. (I don’t think we need to be trying to teach our 6th graders SQL.)

But what about for younger kids? I remember a program called The Factory. The object was to combine chemicals to make monsters. Different chemicals made different monsters. I seem to remember you played around to see what chemicals would make which heads and torsos and arms. Then the computer started showing you monsters and you had to figure out what chemicals to give it to match them. I also remember a program called Snooper Troops. I don’t remember much else about it, other than it was a mystery and you went around looking for clues, and one of my classmates accidentally formatted the disk one day before any of us had managed to solve it. We couldn’t get the disk replaced because it was out of print.

And Spinnaker had all sorts of simple titles for younger kids that let them tell stories and other stuff. It seemed cool at the time. But that was almost 20 years ago, so about all I remember was that sailboat logo and some corny theme music.

The other thing about those old days was that the majority of these programs were written in Basic. An ambitious teacher could modify them, to make them easier or harder, or improve the graphics a little. As we got older and learned to program, some of us would try our hand at making changes. You can’t do that anymore with Windows or Macintosh educational titles. Open source can bring all that back too, provided the programs are written in languages like Perl or Python. And it can give cash-strapped schools a way to get computers where kids can use them.

Now I’m wondering what it would take to write something like The Factory in Python…

What on earth is going on?

AOL-Time Warner in talks to buy Red Hat? I found this this morning. It’s intriguing, but I can’t decide if a buyout would be a good thing or a bad thing. After all, Netscape was in decline when AOL bought it. It nosedived afterward. Obviously, the problem was twofold. When AOL acquired Netscape, it didn’t acquire all of Netscape’s mindshare. Some of the most talented people got fed up and left. You can take Jim Barksdale or you can leave him. The loss of Marc Andreessen and Jamie Zawinski, though, was substantial.
The second problem was that AOL wasn’t serious about competing. They bought a browser technology and basically sat on it. Netscape 4.x was fundamentally flawed, as even Zawinski acknowledges, although I would argue it was no more fundamentally flawed than IE 4.x. The Gecko engine, on which Netscape 6.x is based, is solid technology, even though it took longer to get to market than anyone had hoped. Although Netscape 6.x won’t bowl anyone over, other browsers based on the technology, such as Galeon, are absolutely fantastic. But AOL chose to release a half-hearted browser with the Netscape name on it and continued to use the IE engine in its flagship product even after the favorable agreement with Microsoft that prompted AOL to do so in the first place expired.

That raises the question of what AOL would do with Red Hat if it owned it. Red Hat is still the big-name player in the Linux field, but Red Hat is concentrating on the server market. You can still buy Red Hat at retail, but on the desktop, Red Hat is arguably #3 in popularity now behind France’s Mandrake and Germany’s SuSE. Red Hat is the only Linux company that’s making money, but that’s largely by selling consulting. That’s not AOL’s core business. At this point, AOL is more of a media company than a technology company. Software just gives AOL more outlets to sell its media content. Consulting doesn’t do that.

The best possible scenario for a Red Hat buyout would be for AOL to, as Microsoft puts it, “eat its own dog food,” that is, rip out the infrastructure it bought from other companies and replace it with the technology it just developed or acquired. Since AOL is largely powered by Sun servers, it wouldn’t be terribly difficult to migrate the infrastructure to Red Hat running on Intel. Then AOL could give a big boost to its newly-acquired services division by saying, “We did it and we can help you do it too.” AOL could also cite Amazon’s recent successes in moving its infrastructure to Red Hat Linux. There is precedent for that; after AOL bought Time Warner, the entire company started using AOL for e-mail, a move widely questioned by anyone who’s used anything other than AOL for mail.

Of course, it would be expected that AOL would port its online service to Linux, which would create the truly odd couple of the computing field. AOL, meet sed and awk. Red Hat would certainly lose its purity and much of its credibility among the Linux die-hards. AOL would bank on making up the loss by gaining users closer to the mainstream. AOL could potentially put some Linux on its corporate desktops, but being a media company, an all-out migration to Linux everywhere within is very far-fetched.

To really make this work, AOL would either have to enter the hardware business and sell PCs at retail using its newly acquired Red Hat distribution and newly ported AOL for Linux and possibly an AOL-branded office suite based on OpenOffice, or it would have to partner with a hardware company. Partnering with a big name seems unlikely–a Compaq or an HP or an IBM wouldn’t do it for fear of retaliation from Microsoft. Sun has never expressed any interest in entering the retail computer business, and even though Sun loves to take opportunities to harm Microsoft, Sun probably wouldn’t cooperate with AOL if AOL replaced its Sun infrastructure with Red Hat Linux. Struggling eMachines might be the best bet, since it’s strictly a consumer brand, has a large presence, but hasn’t consistently turned a profit. But AOL could just as easily follow eMachines’ example, buying and re-branding low-end Far East clones and selling them at retail as loss-leaders, taking advantage of its lack of need for Windows (which accounts for roughly $75 of the cost of a retail PC) and making its profit off new dialup and broadband subscribers. A $349 PC sold at retail with a flashy GUI, decent productivity software and AOL is all the computer many consumers need.

The advantage to this scenario for everyone else is that AOL would probably dump more development into either the KDE or GNOME projects in order to give itself more and higher-quality software to offer. The official trees can either take these changes or leave them. Undoubtedly, some of the changes would be awful, and the official trees would opt to leave them. But AOL has 18 years’ worth of experience developing GUIs, so some of the changes would likely be a good thing as well.

The more likely scenario: AOL will buy out Red Hat, not have a clue what to do with it, and Red Hat Linux will languish just like Netscape.

The even more likely scenario: AOL will come to its senses, realize that Red Hat Linux has nothing to do with its core business, and the two companies will go their separate ways.