
Time for a core dump

I’ve been keeping a low profile lately. That’s for a lot of reasons. I’ve been doing mostly routine sysadmin work, which is mind-numbingly boring to write about, and possibly just a little bit less mind-numbingly boring to read about. While a numb mind might not necessarily be a bad thing, there are other reasons not to write about it.
During my college career, I felt like I had less of a private life than most of my classmates because of my weekly newspaper column. I wrote some pretty intensely personal stuff in there, and frankly, it seemed like a lot of the people I hung out with learned more about me from those columns than they did from hanging out with me. Plus, with my picture being attached, I’d get recognized when I went places. I remember many a Friday night, going to Rally’s for a hamburger and having people roll down their windows at stoplights and talk to me. That was pretty cool. But it also made me self-conscious. College towns have some seedy places, you know, and I worried sometimes about whether I’d be seen in the vicinity of some of those places and what people might think.

Looking back now, I should have wondered what they would be doing in the vicinity of those places and why it was OK for them to be nearby and not me. But that’s the difference between how I think now and how I thought when I was 20.

Plus, I know now a lot fewer people read that newspaper than its circulation and advertising departments wanted anyone to think. So I could have had a lot more fun in college and no one would have known.

I’m kidding, of course. And I’m going off on tangent after tangent here.

In the fall of 1999, I willingly gave up having a private life. The upside to that is that writing about things helps me to understand them a lot better. And sometimes I get stunningly brilliant advice. The downside? Well, not everyone knows how to handle being involved in a relationship with a writer. Things are going to come up in writing that you wish wouldn’t have. I know now that’s something you have to talk about, fairly early. Writing about past girlfriends didn’t in and of itself cost me those relationships, but I can think of one case where it certainly didn’t help anything. The advice I got might have been able to save that relationship; now it’s going to improve some as-yet-to-be-determined relationship.

There’s another downside too. When you meet a girl and then she punches your name into a search engine, if you’re a guy like me who has four years’ worth of introspective revelations out on the Web, it kind of puts you at a disadvantage in the relationship. She knows a whole lot more about you than you do about her. It kind of throws off the getting-to-know-you process. I’d really rather not say how many times that’s happened in the past year. Maybe those relationships/prospective relationships were doomed anyway. I don’t have any way of knowing. One of them really hurt a lot and I really don’t want to go through it again.

So I’ve been trying to figure out for the past few weeks what to do about all this. Closing up shop isn’t an option. Writing strictly about the newest Linux trick I’ve discovered and nothing else isn’t an option. Writing blather about the same things everyone else is blathering about is a waste of time and worthless. Yes, I’ve been saying since March that much, if not all, of the SCO Unix code duplicated in Linux is probably BSD code that both of them ripped off at different points in time. And now it’s pretty much been proven that I was right. So what? How many hundreds of other people speculated the same thing? How could some of us be more right than others?

I’m going to write what I want, but I’m having a hard time deciding what I want to write. I know I have to learn how to hold something back. Dave Farquhar needs a private life again.

For a while, this may just turn into a log of Wikipedia entries I made that day. Yes, I’m back over there again, toiling in obscurity this time. For a while I was specializing in entries about 1980s home computing. For some reason when I get to thinking about that stuff I remember a lot, and I still have a pile of old books and magazines so I can check my facts. Plus a lot of those old texts are showing up online now. So now the Wikipedia has entries on things like the Coleco Adam and the Texas Instruments TI-99/4A. Hey, I find it interesting to go back and look at why these products were failures, OK? TI should have owned the market. It didn’t. Coleco should have owned the market, and they didn’t. Atari really should have owned the market and they crashed almost as hard as Worldcom. So how did a Canadian typewriter company end up owning the home computer market? And why is it that probably four people reading this know who on earth I’m talking about now, in 2003? Call me weird, but I think that’s interesting.

And baseball, well, Darrell Porter and Dick Howser didn’t have entries. They were good men who died way too young, long before they’d given everything they had to offer to this world. Roger Maris didn’t have an entry. There was more to Roger Maris than his 61 home runs.

The entries are chronicled here, if you’re interested in what I’ve been writing lately while I’ve been ignoring this place.

First impressions of VMWare

I’ve been setting up VMWare ESX Server at work, and it’s quirky, but I like it. I shut it down improperly once (logging into the console on its Linux-based host OS and doing a shutdown -h now resulted in a system that wouldn’t boot anymore), so I’m afraid of what may happen. The upside is since every virtual machine is just a collection of files, disaster recovery is dirt simple: Build a VMWare box, restore those files from backup, point the VMs at them, and you’re back in business. No more need to worry about locating identical or close-enough-to-identical hardware. For that reason alone, I’d advocate running all of my Windows servers in production environments on VMWare, since Windows isn’t like a real OS that will allow you to use a disk or image on dissimilar hardware with minor adjustments. We get some other benefits too, like allowing us to put all the toy servers in one box with RAID to protect us in the event of a disk crash. We’ve lost far too much to disk failures on desktop PCs recast as someone’s pet-project server.
It also appears to allocate to each virtual machine only the memory it’s actually using, so theoretically, if you were doing server consolidation and had, say, four servers with 256 MB of RAM each, you could potentially get away with putting them on a VMWare host with less than 1 GB of memory.

I also like VMWare for tasks you don’t want to dedicate a whole machine to. For instance, DNS on NT is totally brain-dead. It’s slow to propagate. It works about 99.9% of the time, but the 0.1% of the time it feeds wrong answers will infuriate somebody, who will holler at you, and the struggle to fix the problem will infuriate you worse.

If you want DNS that works, your best bet is to load Linux or BSD with BIND on something and use it. But if you don’t already have a production Linux server somewhere and you don’t have a machine you trust to give the job, carve out a server on a VMWare box. Allocate 16 megs of RAM and a couple hundred megs of disk space to it, and give it a thin slice of processor time. DNS lookups don’t take a lot of power, so it won’t detract noticeably from the other hosted servers.
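For what it’s worth, on a Debian guest the whole job amounts to a couple of commands and, if you want forwarding, a few lines of configuration. This is just a sketch, assuming Debian 3.0 and made-up forwarder addresses:

    # Install BIND 9 on the Debian guest (Debian 3.0 assumed; package names vary elsewhere)
    apt-get install bind9

    # Out of the box it already acts as a caching-only nameserver. Optionally,
    # forward queries to your ISP's resolvers by adding a forwarders block to
    # the options section of your named.conf (on Debian it lives under /etc/bind);
    # the addresses here are made up:
    #
    #   options {
    #       forwarders { 192.0.2.1; 192.0.2.2; };
    #   };

    /etc/init.d/bind9 restart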

It ain’t cheap (the price isn’t listed on the web site for a reason), but software’s cheaper than hardware.

If I had my own Linux distribution

I found an interesting editorial called If I had my own Linux Distro. He’s got some good ideas, but on some of the others I wish he’d known what he was talking about.
He says it should be based on FreeBSD because it boots faster than Linux. I thought everyone knew that Unix boot time has very little to do with the kernel. A kernel will boot more slowly if it’s trying to detect too much hardware, but the big factor in boot time is init, not the kernel. BSD’s init is much faster than SysV-style init. Linux distros that use a BSD-style init (Slackware, optionally Debian, and, as far as I understand, Gentoo) boot much faster than systems that use a traditional System V-style init. I recently converted a Debian box to use runit, and the decrease in boot time and the increase in available memory at boot were noticeable. Unfortunately the system no longer shuts down properly, but it proves the concept.
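If you’re curious what runit looks like, a service is just a directory containing a run script that the supervisor executes and watches. Here’s a minimal sketch with a made-up service name, binary, and flag; the service directory location varies by setup:

    #!/bin/sh
    # /etc/sv/mydaemon/run -- hypothetical runit service directory
    # runit's supervisor (runsv) executes this script and restarts the
    # process if it dies; exec keeps the daemon in the foreground so it
    # stays under the supervisor's control.
    exec /usr/sbin/mydaemon --no-daemon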

He talks about installing every possible library to eliminate dependency problems. Better idea: scrap RPM and use apt (like Debian and its derivatives) or a ports-style system like Gentoo’s. The only time I’ve seen dependency issues crop up in Debian was on a system that had an out-of-date glibc installed, in which case you solve the issue either by keeping the distribution up to date or by updating glibc before installing the package that fails. These problems are exceedingly rare, by the way. In systems like Gentoo, they don’t happen because the installation script downloads and compiles everything necessary.
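To illustrate what I mean, here’s the entire process of installing something on a Debian box; apt works out and fetches the dependencies itself (the package name is just an example):

    # refresh the package lists, then install; apt pulls in whatever libraries the package needs
    apt-get update
    apt-get install mutt

    # keeping the whole system current is just as painless
    apt-get upgrade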

Debian’s and Gentoo’s solution is far more elegant than his proposal: Installing everything possible isn’t going to solve your issue when glibc is the problem. Blindly replacing glibc was a problem in the past. The problems that caused that are hopefully solved now, but they’re beyond the control of any single distribution, and given the choice between having a new install stomp on glibc and break something old or an error message, I’ll take the error message. Especially since I can clear the issue with an apt-get install glibc. (Then when an old application breaks, it’s my fault, not the operating system’s.)

In all fairness, dependency issues crop up in Windows all the time: When people talk about DLL Hell, they’re talking about dependency problems. It’s a different name for the same problem. On Macintoshes, the equivalent problem was extension conflicts. For some reason, people don’t hold Linux to the same standard they hold Windows and Macs to. People complain, but when was the last time you heard someone say Windows or Mac OS wasn’t ready for the desktop, or the server room, or the enterprise, or your widowed great aunt?

He also talks about not worrying about bloat. I take issue with that. When it’s possible to make a graphical Linux distribution that fits on a handful of floppies, there’s no reason not to make a system smooth and fast. That means you do a lot of things. Compile for an advanced architecture and use the -O3 optimization level. Use an advanced compiler like GCC 3.2 or Intel’s ICC 7.0 while you’re at it. Prelink the binaries. Use a fast-booting init and a high-performance system logger. Mount filesystems with the highest-performing options by default. Partition off /var and /tmp so those directories don’t fragment the rest of your filesystem. Linux can outperform other operating systems on like hardware, so it should.
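To make the last couple of those concrete, here’s roughly what I mean by mount options and separate partitions; this is only a sketch of an /etc/fstab, with made-up device names and ext3 assumed:

    # /etc/fstab -- hypothetical layout with /var and /tmp on their own partitions,
    # mounted noatime so the kernel skips updating access times on every read
    # <device>   <mount point>  <type>  <options>           <dump>  <pass>
    /dev/hda1    /              ext3    defaults,noatime    0       1
    /dev/hda2    /var           ext3    defaults,noatime    0       2
    /dev/hda3    /tmp           ext3    defaults,noatime    0       2
    /dev/hda4    none           swap    sw                  0       0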

But when you do those things, then it necessarily follows that people are going to want to run your distribution on marginal hardware, and you can’t count on marginal hardware having a 20-gig hard drive. It’s possible to give people the basic utilities, XFree86, a reasonably slick window manager or environment, and the apps everyone wants (word processing, e-mail, personal finance, a web browser, instant messaging, a media player, a graphics viewer, a few card games, and–I’ll say it–file sharing) in a few hundred megabytes. So why not give it to them?

I guess all of this brings up the nicest thing about Linux. All the source code to anything desirable and all the tools are out there, so a person with vision can take them and build the ultimate distribution.

Yes, the idea is tempting.

A stupid BIND trick

My head’s still swimming from my crash course in BIND. I knew enough BIND to be dangerous–I’ve known how to set up a caching nameserver for years, and even stumbling through creating a master server for someone with a fixed IP address who wanted to host a domain wasn’t beyond me. Creating BIND servers for an enterprise isn’t too big of a deal, but creating one right can be.
After reading a lot, I set to the task.

Here’s a hint: If you’re migrating your servers from another OS to some Unixish OS and BIND, you can avoid re-keying all those zone files. (We’ve got more than 60 of the blasted things; our external server alone is 404K worth of configuration files. I didn’t bother to check the internal files.) Set your new server up as a slave to your current server. Be sure to comment out your allow-update line; BIND 9 will complain if you mention slave zones and updates in the same breath. Now restart BIND (/etc/init.d/bind9 restart in Debian 3.0; the command may be /etc/init.d/named restart or /etc/init.d/bind restart in other distros) and wait. In my case, the files started appearing within seconds, and within a couple of minutes, my server had downloaded all of them. Reset your server to master status, then find a few people to change their TCP/IP configuration to use it. Give it a day or two, and when you’re convinced that all is well, turn off DNS on the old server and put the new server into production.
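If it helps, the slave declarations I’m talking about look roughly like this in named.conf; the zone name and addresses below are made up, with the old NT box as the master:

    // named.conf on the new BIND 9 server -- hypothetical zone and addresses
    zone "example.com" {
        type slave;
        file "db.example.com";
        masters { 192.0.2.10; };    // the old NT DNS server
        // allow-update { ... };    // leave this out (or commented) on a slave zone
    };

Once the transfers have come down, flipping type slave back to type master turns those downloaded files into your real zone files.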

Yes, my Linux box was perfectly capable of pulling DNS records from an NT-based DNS. This is good. If you’re running DNS on NT currently, I wholeheartedly recommend you migrate away from it. Don’t waste clock cycles and network bandwidth on an expensive NT server. Grab a server-grade machine that’s too old to be a useful NT server and load Linux or some BSD variant on it. I know a company that ran BIND on some old 25 MHz DEC VAX workstations for years. That’s a bit too low-end to be comfortable, but if you’ve got server-grade 486-66s kicking around in a dusty corner somewhere, they’ll be adequate. A Pentium-133 will treat you a little bit better. A good rule of thumb: If the machine ever ran NT Server with any competence at all (even if it was in 1996), it’s got enough oomph to run BIND.

The nice thing about machines like that is that you usually have more than one of them and it doesn’t cost you anything to keep a hot spare. If one fails, unplug it and boot up the spare. Yes, DNS is mission-critical, but by definition it’s also redundant.

I’m shocked that there isn’t a single-floppy Linux distro that’s basically just Linux and BIND. Here’s a challenge for some sicko: Make a mini-distro incorporating BIND and Linux 1.0.9 so the minimum requirements will be a 386sx/16 with 2 megs of RAM and an NE2000 NIC.

I believe there are other slick BIND tricks, but I think I’ll wait and see if they work before I go touting a bunch of stuff that might not work.

Analysis of the Apple Xserve

Given my positive reaction to the Compaq ProLiant DL320, Svenson e-mailed and asked me what I thought of Apple’s Xserve.
In truest Slashdot fashion, I’m going to present strong opinions about something I’ve never seen. Well, not necessarily strong opinions compared to some of what you’re used to seeing from my direction. But still…

Short answer: I like the idea. The PPC is a fine chip, and I’ve got a couple of old Macs at work (a 7300 and a 7500) running Debian. One of them keeps an eye on the DHCP servers and mails out daily reports (DHCP on Windows NT is really awful; I didn’t think it was possible to mess it up but Microsoft found a way) and acts as a backup listserver (we make changes on it and see if it breaks before we break the production server). The other one is currently acting as an IMAP/Webmail server that served as an outstanding proof of concept for our next big project. I don’t know that the machines are really any faster than a comparable Pentium-class CPU would be, but they’re robust and solid machines. I wouldn’t hesitate to press them into mission-critical duty if the need arose. For example, if the door opened, I’d be falling all over myself to make those two machines handle DHCP, WINS, and caching DNS for our two remote sites.

So… Apples running Linux are a fine thing. A 1U rack-mount unit with a pair of fast PPC chips in it and capable of running Linux is certainly a fine thing. It’ll suck down less power than an equivalent Intel-based system would, which is an important consideration for densely packed data centers. I wouldn’t run Mac OS X Server on it because I’d want all of its CPU power to go towards real work, rather than putting pretty pictures on a non-existent screen. Real servers are administered via telnet or dumb terminal.

What I don’t like about the Xserve is the price. As usual, you get more bang for the buck from an x86-based product. The entry-level Xserve has a single 1 GHz PowerPC, 256 megs of RAM, and a 60-gig IDE disk. It’ll set you back a cool 3 grand. We just paid a little over $1300 for a ProLiant DL320 with a 1.13 GHz P3 CPU, 128 megs of RAM, and a 40-gig IDE disk. Adding 256 megs of RAM is a hundred bucks, and the price difference between a 40- and a 60-gig drive is trivial. Now, granted, Apple’s price includes a server license, and I’m assuming you’ll run Linux or FreeBSD or OpenBSD on the Intel-based system. But Linux and BSD are hardly unproven; you can easily expect them to give you the same reliability as OS X Server and possibly better performance.

But the other thing that makes me uncomfortable is Apple’s experience making and selling and supporting servers, or rather its lack thereof. Compaq is used to making servers that sit in the datacenter and run 24/7. Big businesses have been running their businesses on Compaq servers for more than a decade. Compaq knows how to give businesses what they need. (So does HP, which is a good thing considering HP now owns Compaq.) If anything ever goes wrong with an Apple product, don’t bother calling Apple customer service. If you want to hear a more pleasant, helpful, and unsuspicious voice on the other end, call the IRS. You might even get better advice on how to fix your Mac from the IRS. (Apple will just tell you to remove the third-party memory in the machine. You’ll respond that you have no third-party memory, and they’ll repeat the demand. There. I just saved you a phone call. You don’t have to thank me.)

I know Apple makes good iron that’s capable of running a long time, assuming it has a quality OS on it. I’ve also been around long enough to know that hardware failures happen, regardless of how good the iron is, so you want someone to stand behind it. Compaq knows that IBM and Dell are constantly sitting on the fence like vultures, wanting to grab its business if it messes up, and it acts accordingly. That’s the beauty of competition.

So, what of the Xserve? It’ll be very interesting to see how much less electricity it uses than a comparable Intel-based system. It’ll be very interesting to see whether Apple’s experiment with IDE disks in the enterprise works out. It’ll be even more interesting to see how Apple adjusts to meeting the demands of the enterprise.

It sounds like a great job for Somebody Else.

I’ll be watching that guy’s experience closely.

Another ordinary Monday…

Seen on a sign. God calls us to play the game, not to keep the score.
I like that.

Seen at a book sale. The Coming War with Japan. The book was written in 1992 and asserted that the conditions that pre-dated World War II exist today and that war is inevitable. Then I spotted another book: The Japanese Conspiracy. I didn’t bother picking that one up. I could have bought them for entertainment value, but I picked up a couple of books by Dave Barry and P.J. O’Rourke for that.

The idea seems ridiculous to me.

I was glad I went over to the section on war though. In addition to those, I also found A Practical Guide to the Unix System, Third Edition, by Mark G. Sobell. Had it been in the computer section where it belonged, it would have been snapped up long before I got there. It comes from a BSD perspective, but I have to work with a BSD derivative at work sometimes, so it’s good to have. At the very least, it can serve as a status book (books you keep on your shelf in your office to make it look like you know something, even if you never read them).

Speaking of humor value… I picked up a book on typography, written in 1980. Some of my classmates had a knack for making type look really good–they could literally turn a headline into art. I never got that knack. This book tries to teach it. It also talks about computerized typography. Needless to say, the couple of pages that illustrate that are just a wee bit out of date.

But I’m not worried about the key points of the book being out of date. The basic elements of good design were old news when Gutenberg built his first printing press.

Retro computing. I was inventorying my old stuff and I ended up building a computer. I have an original IBM PC/AT case, but the last of the AT motherboards don’t fit in it well. The screws line up, but I’m in trouble if I need any memory, because the drive cage blocks the memory slots on a lot of boards, including my supercheap closeout Soyo Socket 370 boards I picked up a year or so ago. I used the motherboard that had been in that case for something else long ago, and the case has been sitting empty ever since.

In my stash, I found a Socket 7 board that fits and lets me put the memory in. It even has 2 DIMM and 4 SIMM sockets. Unfortunately it has the Intel 430VX chipset, which didn’t cache any memory above 64 MB, limited the density of SDRAM it would recognize, and had SDRAM performance so lousy you didn’t really see much difference between SDRAM and EDO. But if I run across a 32-meg DIMM or two it’ll fit, and a relatively slow CPU with adequate memory still makes a good Linux server, especially if you give it a decent SCSI card.

I did some investigation using the tools at www.motherboards.org, and found out the board was a Spacewalker Shuttle. So I went to www.spacewalker.com, where I found out there were only three Shuttle boards ever made with the 430VX chipset. There were pictures of each board, so I quickly figured out which one I had–a HOT-557/2 v1.32. It tops out at a Pentium 200 or a Pentium MMX 166, so I’ve got some options if I decide the AMD K5-100 in there isn’t enough horsepower. And, most importantly to me at least, it looks like a computer. A machine from a time when computers were computers, not boomboxes and fax machines and toaster ovens and television sets. A machine that looks rugged enough to survive a tumble down a flight of stairs. A hot-rodded classic. A man’s machine, ar ar ar!

Back to the grind. The weekend’s over, and it’s time to think about work. Have a wonderful week, check the news sources I cited Saturday if you want, and check back in here a few times while you’re at it, won’t you?

The penguins are coming!

The penguins are coming! Word came down from the corner office (the really big corner office) that he wants us to get really serious about Linux. He sees Linux as a cheap and reliable solution to some of the problems some outside clients are having. This is good. Really good.
My boss asked if it would be a capable answer to our needs, namely, for ISP-style e-mail and for Web caching. But of course. Then he asked if I was interested in pursuing it. Now that’s a silly question.

Now it could be that FreeBSD would be even better, but I know Linux. I don’t know FreeBSD all that well. I’ve installed it once and I was able to find my way around it, but I can fix Linux much more quickly. The two of us who are likely to be asked to administer this stuff both have much more Linux experience than we have BSD experience. Plus you can buy Linux support; I don’t know if you can buy FreeBSD support. I doubt we will, but in my experience, clients want to know (or at least think) that some big company is standing behind us. They’re more comfortable if we can buy support from IBM.

So maybe my days of Linux being a skunkworks project are over. The skunkworks Linux boxes were really cleverly disguised too–they were Macintoshes. They’re still useful for something I’m sure. I expect I’ll draft one of them for proof-of-concept duty, which will save us from having to pull a Compaq server from other duty.

I spent a good portion of the day installing Debian 3.0 on an old Micron Trek 2 laptop. It’s a Pentium II-300 with 64 megs of RAM. It boots fast, but current pigware apps tend to chew up the available memory in a hurry. I recompiled the kernel for the hardware that’s actually in the machine, and that helped some. It’s definitely useful for learning Linux, which is its intended use.
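For anyone who wants to do the same thing, the Debian way to roll a custom kernel is the kernel-package tool. Roughly, and the revision string is just whatever label you want:

    # build a kernel .deb the Debian way
    apt-get install kernel-package libncurses5-dev
    cd /usr/src/linux
    make menuconfig                        # switch off everything the laptop doesn't have
    make-kpkg clean
    make-kpkg --revision=custom.1.0 kernel_image
    dpkg -i ../kernel-image-*.deb          # install the resulting package, then reboot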

I’ve noticed a lot of people interested in Linux lately. One of our NT admins has been browsing my bookshelf, asking about books, and he borrowed one the other day. Our other NT admin wants to borrow it when he’s done with it. The Trek 2 I installed today is for our senior VMS admin, who wants a machine to learn with. My boss, who’s been experimenting with Linux for a couple of years, has been pushing it aggressively of late.

I don’t know if this situation is unique, but it means something.

I spent a good part of the evening at the batting cages. I messed my timing up something fierce. I hit the first few pitches to the opposite field, some of them weakly, but soon I was hitting everything–and I mean everything–to the third-base side. So my bat speed came back pretty fast, and I was getting way out in front of a lot of the pitches. So I started waiting on the ball longer, hoping to start hitting the ball where it’s pitched. The end result was missing about a quarter of the time, slashing it foul to the third-base side a quarter of the time, hitting it weakly where it was pitched a quarter of the time, and hitting it solidly where it was pitched a quarter of the time. Good thing the season doesn’t start until June–I’ve got some work to do.

Afterward, I drove to my old high school, hoping to be able to run a lap or two around the track. I was hoping for two; realistically I knew I’d probably be doing well to manage one. There was something going on there, and I couldn’t tell if the track was in use or not, so I kept driving. Eventually I ended up at a park near my apartment. I parked my car, found a bit of straightaway, and ran back and forth until I was winded. It didn’t take long.

I can still run about as fast as I could when I was a teenager, but my endurance is gone. I’m hoping I can pick that back up a little bit. I was a catcher last season, filling in occasionally at first base and in left field. In the league I play in, we usually play girls at second and third base, and we’ve got a couple of guys who can really play shortstop, so I’ll probably never play short. When I was young I played mostly left field and second. I’d like to roam left field again. Not that I mind catching, but there’s a certain nostalgia about going back to my old position.

It’s the best of times, it’s the worst of times…

I hate arguing with women. When guys fight, they fight hard, and they don’t always fight fair, but when the fight’s over, it’s pretty much over. You settle it. Maybe you seethe for a little bit. But eventually, assuming you both still can walk, you can go to hockey games together almost like it never happened.
I’ve found myself in an argument. It’s not like an argument with a guy. Every time I think it’s over, it flares back up. It’s like fighting the hydra. (I don’t know if this is characteristic of arguments with women in general; I generally don’t seek out that experience.)

I found one solution though: Don’t open my inbox.

That worked for me once. After 8 months, she finally quit e-mailing me.

Found on a mailing list. I’m assuming this guy mistyped this:

“I need hell with my installation.”

Some smart aleck responded before I did. “Usually you get that with installation whether you want it or not. Now someone’s demanding it. Newbies, these days.”

I was going to say that if you ran Windows, you’d get that free of charge. (That’s the only thing Microsoft gives you for free!)

A cool phone call. My phone rings at work. Outside call. Don’t tell me she somehow got my number at work… I pick up. “This is Dave.”

“Dave, it’s Todd.”

Ah, my boss. Good thing I picked up, eh?

“You busy?”

When it’s your boss, there is absolutely no right answer to that question. One of my classmates in college told me something worth remembering, though: The truth’s always a lot easier to remember than a lie.

“We can’t come to the phone right now. Please leave a message at the beep.”

Nope. Too late for that.

“Not really,” I say, hoping I won’t regret it. Either he’s gathering data for my personal review, or he’s about to ask me to install Mac OS X on a Blue Dalmatian iMac with 32 megs of RAM (speaking of wanting hell with installation…)

Actually he asks me for something pretty cool. He asks if I’m up to learning some firewalling software. (No, I won’t tell you which one. And no, I won’t tell you who I work for. That’s like saying, “Hey, l337 h4xx0r5! You can’t get me!”)

But I will tell you the IP address. It’s 127.0.0.1. If you can crack that address, you deserve whatever you can get. (No comments from the Peanut Gallery.)

So I hit the books. Thanks to this duty, I get another Linux box. I’ve got a Power Mac running Debian already, which runs scripts that are impossible on NT. It monitors the LAN and reformats some reports and e-mails them to my boss and co-workers at 6 every morning. But the management software runs under NT 4, Red Hat Linux, or Solaris. None of that’ll run on a PowerPC-based machine. So I lay claim to an old system that I happen to know has an Asus motherboard in it, along with 72 megs of RAM. I’ll have fun tweaking that system out. An Asus mobo, a Pentium-class CPU, and a Tulip network card. That’s not the makings of a rockin’ good weekend, but it’ll make for a reliable light-use workstation.
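(The morning report job, by the way, is nothing exotic: a cron entry driving a little shell script that reformats the raw data and pipes it to mail. Something in this spirit, with made-up paths and addresses:)

    #!/bin/sh
    # /usr/local/bin/lanreport.sh -- hypothetical report script
    # reformat the overnight LAN report and mail it out
    awk -f /usr/local/lib/reformat.awk /var/log/lan/raw-report.txt \
        | mail -s "Daily LAN report" boss@example.com

    # crontab entry to fire it at 6:00 every morning:
    #   0 6 * * * /usr/local/bin/lanreport.sh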

While the management software runs under Red Hat, some of the infrastructure is BSD-based. So I get to learn some BSD while I’m at it. As long as BSD is sane about /proc and /var/log, I’ll be in good shape. But I heard LSD was invented at Berkeley, so I may have a little learning to do… Maybe listening to some Beatles records while administering those systems would help.

Desktop Linux and the truth about forking

Desktop Linux! I wanna talk a little more about how Linux runs on a Micron Transport LT. I chose Debian 2.2r3, the “Potato” release, because Debian installs almost no extras. I like that. What you need to know to run Linux on a Micron LT: the 3Com miniPCI NIC uses the 3c59x kernel module. The video chipset uses the ATI Mach64 X server (in XFree86 3.3.6; if you upgrade to 4.1 you’ll use the plain old ati driver). Older Debian releases gave this laptop trouble, but 2.2r3 runs fine.
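In practical terms that boils down to one kernel module and the right X server. A sketch of the relevant bits, assuming the stock Debian file locations:

    # load the NIC driver now, and list it in /etc/modules so it loads at boot
    modprobe 3c59x
    echo 3c59x >> /etc/modules

    # X: under XFree86 3.3.6 the XF86_Mach64 server binary drives the video chipset;
    # under 4.x you'd use Driver "ati" in the Device section of XF86Config-4 instead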
I immediately updated parts of it to Debian Unstable, because I wanted to run Galeon and Nautilus and Evolution. I haven’t played with any GNOME apps in a long time. A couple of years ago when I did it, I wasn’t impressed. KDE was much more polished. I didn’t see any point in GNOME; I wished they’d just pour their efforts into making KDE better. I still wish that, and today KDE is still more polished as a whole, but GNOME has lots of cool apps. Nautilus has the most polish of any non-Mac app I’ve ever seen, and if other Linux apps rip off some of its code, Microsoft’s going to have problems. It’s not gaudy and overboard like Mac OS X is; it’s just plain elegant.

Galeon is the best Web browser I’ve ever seen. Use its tabs feature (go to File, New Tab) and see for yourself. It’s small and fast like Opera, compatible like Netscape, and has features I haven’t seen anywhere else. It also puts features like freezing GIF animation and disabling Java/JavaScript out where they belong: In a menu, easily accessible. And you can turn them off permanently, not just at that moment.

Evolution is a lot like Outlook. Its icons look a little nicer–not as nice as Nautilus, but nice–and its equivalent of Outlook Today displays news headlines and weather. Nice touch. And you can tell it what cities interest you and what publications’ headlines you want. As a mail reader, it’s very much like Outlook. I can’t tell you much about its PIM features, because I don’t use those heavily in Outlook either.

The first time I showed it to an Outlook user at work, her reaction was, “And when are we switching to that?”

If you need a newsreader, Pan does virtually everything Forte Agent or Microplanet Gravity will do, plus a few tricks they won’t. It’s slick, small, and free too.

In short, if I wanted to build–as those hip young whippersnappers say–a pimp-ass Internet computer, this would be it. Those apps, plus the Pan newsreader, give you better functionality than you’ll get for free on Windows or a Mac. For that matter, you could buy $400 worth of software on another platform and not get as much functionality.

Linux development explained. There seems to be some confusion over Linux, and the kernel forking, and all this other stuff. Here’s the real dope.

First off, the kernel has always had forks. Linus Torvalds has his branch, which at certain points in history is the official one. When Torvalds has a branch, Alan Cox almost always has his own branch. Even when Cox’s branch isn’t the official one, many Linux distributions derive their kernels from Cox’s branch. (They generally don’t use the official one either.) Now, Cox and Torvalds had a widely publicized spat over the virtual memory subsystem recently. For a while, the official branch and the -ac branch had different VMs. Words were exchanged, and misinterpreted. Both agreed the original 2.4 VM was broken. Cox tried to fix it. Torvalds replaced it with something else. Cox called Torvalds’ approach the unofficial kernel 2.5. But Torvalds won out in the end–the new VM worked well.

Now you can expect to see some other sub-branches. Noted kernel hackers like Andrea Arcangeli occasionally do a release. Now that Marcelo Tosatti is maintaining the official 2.4 tree, you might even see a -ac release again occasionally. More likely, Cox and Torvalds will pour their efforts into 2.5, which should be considered alpha-quality code. Some people believe there will be no Linux 2.6; that 2.5 will eventually become Linux 3.0. It’s hard to know. But 2.5 is where the new and wonderful and experimental bits will go.

There’s more forking than just that going on though. The 2.0 and 2.2 kernels are still being maintained, largely for security reasons. But not long ago, someone even released a bugfix for an ancient 0.something kernel. That way you can still keep your copy of Red Hat 5.2 secure and not risk breaking any low-level kernel module device drivers you might be loading (to support proprietary, closed hardware, for example). Kernels are generally upward compatible, but you don’t want to risk anything on a production server, and the kernel maintainers recognize and respect that.

As far as the end user is concerned, the kernel doesn’t do much. What 2.4 gave end users was better firewalling code and more filesystems and hopefully slightly better performance. As far as compatibility goes, the difference between an official kernel and an -ac kernel and an -aa kernel is minor. There’s more difference between Windows NT 4.0 SP2 and SP3 than there is between anyone’s Linux 2.4 kernel, and, for that matter, between 2.4 and any (as of Nov. 2001) 2.5 kernel. No one worries about Windows fragmenting, and when something Microsoft does breaks some application, no one notices.

So recent events are much ado about nothing. The kernel will fragment, refragment, and reunite, just as it has always done, and eventually the best code will win. Maybe at some point a permanent fracture will happen, as happened in the BSD world. That won’t be an armageddon, even though Jesse Berst wants you to think it will be (he doesn’t have anything else to write about, after all, and he can’t be bothered with researching something non-Microsoft). OpenBSD and NetBSD are specialized distributions, and they know it. OpenBSD tries to be the most secure OS on the planet, period. Everything else is secondary. NetBSD tries to be the most portable OS on the planet, and everything else is secondary. If for some reason you need a Unix to run on an old router that’s no longer useful as a router and you’d like to turn it into a more general-purpose computer, NetBSD will probably run on it.

Linux will fragment if and when there is a need for a specialized fragment. And we’ll all be the better for it. Until someone comes up with a compelling reason to do so, history will just continue to repeat itself.