The pundits are wrong about Apple’s defection

Remember the days when knowing something about computers was a prerequisite for writing about them?
ZDNet’s David Coursey continues to astound me. Yesterday he wondered aloud what Apple could do to keep OS X from running on standard PCs if Apple were to ditch the PowerPC line for an x86-based CPU, or to keep Windows from running on Apple Macs if they became x86-based.

I’d link to the editorial but it’s really not worth the minimal effort it would take.

First, there’s the question of whether it’s even necessary for Apple to migrate. Charlie pointed out that Apple remains profitable. It has 5% of the market, but that’s beside the point. They’re making money. People use Apple Macs for a variety of reasons, and those reasons seem to vary, but speed rarely seems to be the clinching factor. A decade ago, the fastest Mac money could buy was an Amiga with Mac emulation hardware–an Amiga clocked at the same speed would run Mac OS and related software about 10% faster than the real thing. And in 1993, Intel pulled ahead of Motorola in the speed race. Intel had 486s running as fast as 66 MHz, while Motorola’s 68040 topped out at 40 MHz. Apple jumped to the PowerPC line, whose clock rate pretty much kept up with the Pentium line until the last couple of years. While the PowerPCs would occasionally beat an x86 at some benchmark or another, the speed was more a point of advocacy than anything else. When a Mac user quoted one benchmark only to be countered by another benchmark that made the PowerPC look bad, the Mac user just shrugged and moved on to some other advocacy point.

Now that the megahertz gap has become the gigahertz gap, the Mac doesn’t look especially good on paper next to an equivalently priced PC. Apple could close the gigahertz gap and shave a hundred bucks or two off the price of the Mac by leaving Motorola at the altar and shacking up with Intel or AMD. And that’s why every pundit seems to expect the change to happen.

But Steve Jobs won’t do anything unless he thinks it’ll get him something. And Apple offers a highly styled, high-priced, anti-establishment machine. Hippie computers, yuppie price. Well, that was especially true of the now-defunct Flower Power and Blue Dalmatian iMacs.

But if Apple puts Intel Inside, some of that anti-establishment lustre goes away. That’s not enough to make or break the deal.

But breaking compatibility with the few million G3- and G4-based Macs already out there might be. The software vendors aren’t going to appreciate the change. Now Apple’s been jerking the software vendors around for years, but a computer is worthless without software. Foisting an instruction set change on them isn’t something Apple can do lightly. And Steve Jobs knows that.

I’m not saying a change won’t happen. But it’s not the sure deal most pundits seem to think it is. More likely, Apple is just pulling a Dell. You know the Dell maneuver. Dell is the only PC vendor that uses Intel CPUs exclusively. But Dell holds routine talks with AMD and shows the guest book signatures to Intel occasionally. Being the last dance partner gives Dell leverage in negotiating with Intel.

I think Apple’s doing the same thing. Apple’s in a stronger negotiating position with Motorola if Steve Jobs can casually mention he’s been playing around with Pentium 4s and Athlon XPs in the labs and really likes what he sees.

But eventually Motorola might decide the CPU business isn’t profitable enough to be worth messing with, or it might decide that it’s a lot easier and more profitable to market the PowerPC as a set of brains for things like printers and routers. Or Apple might decide the gigahertz gap is getting too wide and defect. I’d put the odds of a divorce somewhere below 50 percent. I think I’ll see an AMD CPU in a Mac before I’ll see it in a Dell, but I don’t think either event will happen next year.

But what if it does? Will Apple have to go to AMD and have them design a custom, slightly incompatible CPU as David Coursey hypothesizes?

Worm sweat. Remember the early 1980s, when there were dozens of machines that had Intel CPUs and even ran MS-DOS, yet were, at best, only slightly IBM compatible? OK, David Coursey doesn’t, so I can’t hold it against you if you don’t. But trust me. They existed, and they infuriated a lot of people. There were subtle differences that kept IBM-compatible software from running unmodified. Sometimes the end user could work around those differences, but more often than not, they couldn’t.

All Apple has to do is continue designing their motherboards the way they always have. The Mac ROM bears very little resemblance to the standard PC BIOS. The Mac’s boot block and partition table are all different. If Mac OS X continues to look for those things, it’ll never boot on a standard PC, even if the CPU is the same.

The same differences that keep Mac OS X off of Dells will also keep Windows off Macs. Windows could be modified to compensate for those differences, and there’s a precedent for that–Windows NT 4.0 originally ran on Intel, MIPS, PowerPC, and Alpha CPUs. I used to know someone who swore he ran the PowerPC versions of Windows NT 3.51 and even Windows NT 4.0 natively on a PowerPC-based Mac. NT 3.51 would install on a Mac of comparable vintage, he said. And while NT 4.0 wouldn’t, he said you could upgrade from 3.51 to 4.0 and it would work.

I’m not sure I believe either claim, but you can search Usenet on Google and find plenty of people who ran the PowerPC version of NT on IBM and Motorola workstations. And guess what? Even though those workstations had PowerPC CPUs, they didn’t have a prayer of running Mac OS, for lack of a Mac ROM.

Windows 2000 and XP were exclusively x86-based (although there were beta versions of 2000 for the Alpha), but adjusting to accommodate an x86-based Mac would be much easier than adjusting to another CPU architecture. Would Microsoft go to the trouble just to get at the remaining 5% of the market? Probably. But it’s not guaranteed. And Apple could turn it into a game of leapfrog by modifying its ROM with every machine release. It already does that anyway.

The problem’s a whole lot easier than Coursey thinks.

Analysis of the Apple Xserve

Given my positive reaction to the Compaq Proliant DL320, Svenson e-mailed and asked me what I thought of Apple’s Xserve.
In truest Slashdot fashion, I’m going to present strong opinions about something I’ve never seen. Well, maybe not opinions as strong as some of what you’re used to seeing from my direction. But still…

Short answer: I like the idea. The PPC is a fine chip, and I’ve got a couple of old Macs at work (a 7300 and a 7500) running Debian. One of them keeps an eye on the DHCP servers and mails out daily reports (DHCP on Windows NT is really awful; I didn’t think it was possible to mess it up but Microsoft found a way) and acts as a backup listserver (we make changes on it and see if it breaks before we break the production server). The other one is currently acting as an IMAP/Webmail server that served as an outstanding proof of concept for our next big project. I don’t know that the machines are really any faster than a comparable Pentium-class CPU would be, but they’re robust and solid machines. I wouldn’t hesitate to press them into mission-critical duty if the need arose. For example, if the door opened, I’d be falling all over myself to make those two machines handle DHCP, WINS, and caching DNS for our two remote sites.

So… Apples running Linux are a fine thing. A 1U rack-mount unit with a pair of fast PPC chips in it and capable of running Linux is certainly a fine thing. It’ll suck down less power than an equivalent Intel-based system would, which is an important consideration for densely-packed data centers. I wouldn’t run Mac OS X Server on it because I’d want all of its CPU power to go towards real work, rather than putting pretty pictures on a non-existent screen. Real servers are administered via telnet or dumb terminal.

What I don’t like about the Xserve is the price. As usual, you get more bang for the buck from an x86-based product. The entry-level Xserve has a single 1 GHz PowerPC, 256 megs of RAM, and a 60-gig IDE disk. It’ll set you back a cool 3 grand. We paid just over $1300 for a Proliant DL320 with a 1.13 GHz P3 CPU, 128 megs of RAM, and a 40-gig IDE disk. Adding 256 megs of RAM is a hundred bucks, and the price difference between a 40- and a 60-gig drive is trivial. Now, granted, Apple’s price includes a server license, and I’m assuming you’ll run Linux or FreeBSD or OpenBSD on the Intel-based system. But Linux and BSD are hardly unproven; you can reasonably expect them to give you the same reliability as OS X Server and possibly better performance.

But the other thing that makes me uncomfortable is Apple’s experience making and selling and supporting servers, or rather its lack thereof. Compaq is used to making servers that sit in the datacenter and run 24/7. Big businesses have been running their businesses on Compaq servers for more than a decade. Compaq knows how to give businesses what they need. (So does HP, which is a good thing considering HP now owns Compaq.) If anything ever goes wrong with an Apple product, don’t bother calling Apple customer service. If you want to hear a more pleasant, helpful, and unsuspicious voice on the other end, call the IRS. You might even get better advice on how to fix your Mac from the IRS. (Apple will just tell you to remove the third-party memory in the machine. You’ll respond that you have no third-party memory, and they’ll repeat the demand. There. I just saved you a phone call. You don’t have to thank me.)

I know Apple makes good iron that’s capable of running a long time, assuming it has a quality OS on it. I’ve also been around long enough to know that hardware failures happen, regardless of how good the iron is, so you want someone to stand behind it. Compaq knows that IBM and Dell are constantly sitting on the fence like vultures, wanting to grab its business if it messes up, and it acts accordingly. That’s the beauty of competition.

So, what of the Xserve? It’ll be very interesting to see how much less electricity it uses than a comparable Intel-based system. It’ll be very interesting to see whether Apple’s experiment with IDE disks in the enterprise works out. It’ll be even more interesting to see how Apple adjusts to meeting the demands of the enterprise.

It sounds like a great job for Somebody Else.

I’ll be watching that guy’s experience closely.

Linux Performance Tuning

I found a very superficial Linux Journal article on performance tuning linked from LinuxToday this week. I read the article because I’m a performance junkie and I hoped to maybe find something I hadn’t heard before.
The article recommended a kernel recompile, which many people no longer consider critical. It’s still something I do, especially on laptops, since a kernel tuned to a machine’s particular hardware boots up faster–often much faster. The memory you save by compiling your own kernel isn’t huge–it mattered much more back when a typical computer had 8 MB of RAM–but Linux’s memory management is good, so I like to give it as much to work with as possible. Plus, I’m of the belief that a simple system is a more secure system. The probability of a remote root exploit through the parallel port driver is so low as to be laughable, but when my boss’ boss’ boss walks into my cube and asks me if I’ve closed all possible doors that are practical to close, I want to be able to look him in the eye and say yes.

The same goes for virtual consoles. If a system runs X most of the time, it doesn’t need more than about three consoles. A server needs at most three consoles, since the only time the sysadmin will be sitting at the keyboard is likely to be during setup. The memory savings isn’t always substantial, depending on what version of getty the system is running. But since Linux manages available memory well, why not give it everything you can to work with?
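On a sysvinit-era Debian system, trimming virtual consoles is a one-file change. A sketch of the relevant /etc/inittab lines (the getty paths and speeds are illustrative; match whatever your file already uses):

```
# /etc/inittab: keep three consoles, comment out the rest
1:2345:respawn:/sbin/getty 38400 tty1
2:23:respawn:/sbin/getty 38400 tty2
3:23:respawn:/sbin/getty 38400 tty3
#4:23:respawn:/sbin/getty 38400 tty4
#5:23:respawn:/sbin/getty 38400 tty5
#6:23:respawn:/sbin/getty 38400 tty6
```

Run `telinit q` afterward so init rereads the file; the spare gettys die off and their memory comes back.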

The best advice the article gave was to look at alternative window managers besides the ubiquitous KDE and Gnome. I’ve found that the best thing I’ve ever done from a performance standpoint was to switch to IceWM. KDE and Gnome binaries will still run as long as the libraries are present. But since KDE and Gnome seem to suffer from the same feature bloat that has turned Windows XP and Mac OS X into slow pigs, using another window manager speeds things along nicely, even on high-powered machines.

I take issue with one piece of advice in the article. Partitioning, when done well, reduces fragmentation, improves reliability, and allows you to tune each filesystem for its specific needs. For example, if you have a separate partition for /usr or /bin, which hold executable files, large block sizes (the equivalent of cluster sizes in Windows) will improve performance. But for /home, you’ll want small block sizes for efficiency.

The problem is that kernel I/O is done sequentially. If a task requires reading from /usr, then /home, then back to /usr, the disk will move around a lot. A SCSI disk will reorder the requests and execute them in optimal order, but an IDE disk will not. So partitioning IDE disks can actually slow things down. So generally with an IDE disk, I’ll make the first partition a small /boot partition so I’m guaranteed not to have BIOS issues with booting. This partition can be as small as 5 megs since it only has to hold a kernel and configuration files. I usually make it 20 so I can hold several kernels. I can pay for 20 megs of disk space these days with the change under my couch cushions. Next, I’ll make a swap partition. Size varies; Linus Torvalds himself uses a gig. For people who don’t spend the bulk of their time in software development, 256-512 megs should be plenty. Then I make one big root partition out of the rest.
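The scheme above, sketched for a hypothetical single IDE disk (the device name and sizes are illustrative, not a recommendation for your hardware):

```
/dev/hda1    20 MB   ext2   /boot   # small; stays below any BIOS boot limit
/dev/hda2   256 MB   swap           # bigger if you spend your days compiling
/dev/hda3   (rest)   ext2   /       # one big root partition
```

Three partitions, one disk, and the heads never have to shuttle between far-apart filesystems to satisfy one task.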

With a multi-drive system, /home should be on a separate disk from the rest. That way, if a drive fails, you’ve halved your recovery time because you’ll either only have to install the OS on a replacement drive, or restore your data from backups on a replacement drive. Ideally, swap should be on a separate disk from the binaries (it can be on the same disk as /home unless you deal with huge data files). The reason should be obvious: If the system is going to use swap, it will probably be while it’s loading binaries.

Still, I’m very glad I read this article. Buried in the comments for this article, I found a gem of a link I’ve never seen referenced anywhere else before: Linux Performance Tuning. This site attempts to gather all the important information about tuning Linux to specific tasks. The pros know a lot of this stuff, but this is the first time I’ve seen this much information gathered in one place. If you build Linux servers, bookmark that page. You’ll find yourself referring back to it frequently. Contributors to the site include kernel hackers Rik van Riel and Dave Jones.

Another RISC platform for Linux

Vintage workstations. I’ve read two articles this past week about running Linux or another free Unix on vintage hardware.
http://www.debianplanet.org/article.php?sid=605
http://www.newsforge.com/article.pl?sid=02/02/19/049208&mode=thread

And while I can certainly appreciate the appeal of running a modern free Unix on a classic workstation from the likes of DEC or Sun or SGI, there’s another class of (nearly) workstation-quality hardware that didn’t get mentioned, and is much easier to come by.

Apple Power Macintoshes.

Don’t laugh. Apple has made some real dogs in the past, yes. But most of their machines are of excellent quality. And most of the appeal of a workstation-class machine also applies to an old Mac: RISC processor, SCSI disk drives, lots of memory slots. And since 7000-series and 9000-series Macs used PCI, you’ve got the advantage of being able to use cheap PC peripherals with them. So if you want to slap in a pair of 10,000-rpm hard drives and a modern SCSI controller, nothing’s stopping you.

There’s always a Mac fanatic out there somewhere willing to pay an exorbitant amount of money for a six-year-old Mac, so you won’t always find a great deal. Thanks to the release of OS X (which Apple doesn’t support on anything prior to the Power Mac G3, and that includes older machines with G3 upgrade cards), the days of a 120 MHz Mac built in 1996 with a 500-meg HD and 32 megs of RAM selling for $500 are, fortunately, over. Those machines run Linux surprisingly well. Linux of course loves SCSI. And the PPC gives slightly higher performance than the comparable Pentium.

And if you’re lucky, sometimes you can find a Mac dirt-cheap before a Mac fanatic gets to it.

The biggest advantage of using a Mac over a workstation is the wealth of information available online about them. You can visit www.macgurus.com to get mainboard diagrams for virtually every Mac ever made. You can visit www.everymac.com for specs on all of them. And you can visit www.lowendmac.com for comprehensive write-ups on virtually every Mac ever made and learn the pitfalls inherent in them, as well as tips for cheap hardware upgrades to squeeze more speed out of them. I learned on lowendmac.com that adding video memory to a 7200 increases video performance substantially because it doubles the memory bandwidth. And on models like the 7300, 7500, and 7600, you can interleave the memory to gain performance.

Besides being better-built than many Intel-based boxes, another really big advantage of non-x86 hardware (be it PowerPC, Alpha, SPARC, MIPS, or something else) is obscurity. Many of the vulnerabilities present in x86 Linux are likely to be present in the non-x86 versions as well. But in the case of buffer overflows, an exploit that would allow a hacker to gain root access on an Intel box will probably just crash the non-x86 box, because the machine language is different. And a would-be hacker may well run into big-endian/little-endian problems as well.
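The endian point is easy to demonstrate. A quick sketch in Python (any language with byte-packing will do): the same 32-bit value, as an exploit payload would see it in memory, comes out in opposite byte orders on the two architectures.

```python
import struct

value = 0x12345678

big_endian = struct.pack(">I", value)     # byte order on PowerPC
little_endian = struct.pack("<I", value)  # byte order on x86

print(big_endian.hex())     # 12345678
print(little_endian.hex())  # 78563412
```

A hard-coded address or offset built for one layout lands in the wrong place on the other, which is why a canned x86 exploit tends to just crash a PowerPC box.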

http://homepages.ihug.com.au/~aturner/7200boot.html

It’s the best of times, it’s the worst of times…

I hate arguing with women. When guys fight, they fight hard, and they don’t always fight fair, but when the fight’s over, it’s pretty much over. You settle it. Maybe you seethe for a little bit. But eventually, assuming you both still can walk, you can go to hockey games together almost like it never happened.
I’ve found myself in an argument. It’s not like an argument with a guy. Every time I think it’s over, it flares back up. It’s like fighting the hydra. (I don’t know if this is characteristic of arguments with women in general; I generally don’t seek out that experience.)

I found one solution though: Don’t open my inbox.

That worked for me once. After 8 months, she finally quit e-mailing me.

Found on a mailing list. I’m assuming this guy mistyped this:

“I need hell with my installation.”

Some smart aleck responded before I did. “Usually you get that with installation whether you want it or not. Now someone’s demanding it. Newbies, these days.”

I was going to say that if you ran Windows, you’d get that free of charge. (That’s the only thing Microsoft gives you for free!)

A cool phone call. My phone rings at work. Outside call. Don’t tell me she somehow got my number at work… I pick up. “This is Dave.”

“Dave, it’s Todd.”

Ah, my boss. Good thing I picked up, eh?

“You busy?”

When it’s your boss, there is absolutely no right answer to that question. One of my classmates in college told me something worth remembering, though: The truth’s always a lot easier to remember than a lie.

“We can’t come to the phone right now. Please leave a message at the beep.”

Nope. Too late for that.

“Not really,” I say, hoping I won’t regret it. Either he’s gathering data for my personal review, or he’s about to ask me to install Mac OS X on a Blue Dalmatian iMac with 32 megs of RAM (speaking of wanting hell with installation…)

Actually, he asks me for something pretty cool. He asks if I’m up to learning some firewalling software. (No, I won’t tell you which one. And no, I won’t tell you who I work for. That’s like saying, “Hey, l337 h4xx0r5! You can’t get me!”)

But I will tell you the IP address. It’s 127.0.0.1. If you can crack that address, you deserve whatever you can get. (No comments from the Peanut Gallery.)

So I hit the books. Thanks to this duty, I get another Linux box. I’ve got a Power Mac running Debian already, which runs scripts that are impossible on NT. It monitors the LAN and reformats some reports and e-mails them to my boss and co-workers at 6 every morning. But the management software runs under NT 4, Red Hat Linux, or Solaris. None of that’ll run on a PowerPC-based machine. So I lay claim to an old system that I happen to know has an Asus motherboard in it, along with 72 megs of RAM. I’ll have fun tweaking that system out. An Asus mobo, a Pentium-class CPU, and a Tulip network card. That’s not the makings of a rockin’ good weekend, but it’ll make for a reliable light-use workstation.
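For what it’s worth, the 6 a.m. report run is nothing exotic, just a cron entry. The script path here is made up for illustration:

```
# /etc/crontab: minute hour day-of-month month day-of-week user command
0 6 * * * root /usr/local/bin/lan-report.sh
```

The script does the monitoring and reformatting; cron just makes sure it happens before anyone’s second cup of coffee.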

While the management software runs under Red Hat, some of the infrastructure is BSD-based. So I get to learn some BSD while I’m at it. As long as BSD is sane about /proc and /var/log, I’ll be in good shape. But I heard LSD was invented at Berkeley, so I may have a little learning to do… Maybe listening to some Beatles records while administering those systems would help.

More perspective on video editing

I read Bill Machrone’s current PC Magazine column on PC non-linear video editing with a bit of bemusement. He talked about the difficulty he and his son have editing video on their PCs, and he concluded with the question: “How do normal people do this stuff?” and the misguided answer: “They buy a Mac.”
You don’t have to do that. In fact, you can do pretty well on a PC if you just play by the same rules the Mac forces you to play by.

Consider this for a minute: With the Mac, you have one motherboard manufacturer. Apple tends to revise its boards once a year, maybe twice. Apple makes, at most, four different boards: one for the G4 tower systems, one for the iMac, one for the iBook, and one for the PowerBook. Frequently different lines will share the same board–the first iMacs were just a PowerBook board in an all-in-one case.

And the Mac (officially) supports two operating systems: the OS 9 series and the OS X series. You keep your OS at the current or next-most-recent level (always wait about a month before you download any OS update from Apple), and you keep your apps at current level, and you minimize compatibility problems. Notice I said minimize. PageMaker 7 has problems exporting PDF documents that I can’t track down yet, and I see from Adobe’s forums that I’m not the only one. So the Mac’s not always the bed of roses Machrone’s making it out to be.

Now consider the PC market for a minute. You’ve got two major CPU architectures, plus also-ran VIA; 4-6 (depending on who you ask) major suppliers of chipsets; at least four big suppliers of video chipsets; and literally dozens of motherboard manufacturers. Oh, you want an operating system with that? For all the FUD of Linux fragmentation, Microsoft’s in no better shape: Even if you only consider currently available offerings, you’ve got Windows 98, ME, NT4, 2000, and two flavors of XP.

So we go and buy a video capture card and expect to put it in any old PC and have it work. Well, it probably ought to work, but let’s consider something. Assuming two CPU architectures, four chipset manufacturers, four video architectures, and twelve motherboard manufacturers, the chances of your PC being functionally identical to any other PC purchased right around the same time are 1 in 384. The comparable Mac odds: 1 in 4. But realistically, if you’re doing video editing, 1 in 1, because to do serious video work you need a desktop unit for its expandability. No Blue Dalmatian browsing for you!

So you can rest assured that if you have a Mac, your vendor tested the equipment with hardware functionally identical to yours. On a PC you just can’t make that assumption, even if you buy a big brand name like Dell.

But you want the best of both worlds, don’t you? You want to play it safe and you want the economy of using inexpensive commodity PC hardware? It’s easy enough to do it. First things first, pick the video editing board you want. Next, visit the manufacturer’s Web site. Pinnacle has a list of motherboards and systems they’ve tested with the DV500, for instance. You can buy one of the Dell models they’ve tested. If you’re a do-it-yourselfer like me, you buy one of the motherboards they’ve tested. If you want to be really safe, buy the same video card, NIC, and SCSI card they tested as well, and plug them into the same slots Pinnacle did. Don’t worry about the drives Pinnacle used; buy the best-available SCSI drive you can afford, or better yet, two of them.

Video capture cards are cranky. You want a configuration the manufacturer tested and figured out how to make work. Otherwise you get the pleasure. Or the un-pleasure, in some cases.

As far as operating systems go, Windows 2000 is the safe choice. XP is too new, so you may not have drivers for everything. 98 and ME will work, but they’re not especially stable. If I can bluescreen Windows 2000 during long editing sessions, I don’t want to think about what I could do to 9x.

And the editing software is a no-brainer. You use what comes with the card. The software that comes with the card should be a prime consideration in getting the card. Sure, maybe an $89 CompUSA special will do what you want. But it won’t come with Premiere 6, that’s for certain. If I were looking for an entry-level card, I’d probably get a Pinnacle DV200. It’s cheap, it’s backed by a company that’ll be around for a while, and it comes with a nice software bundle. If you want to work with a variety of video sources and output to plain old VHS as well as firewire-equipped camcorders, the DV500 is nice, and at $500, it won’t break the bank. In fact, when my church went to go buy some editing equipment, we grabbed a Dell workstation for a DV500, and we got a DV200 to use on another PC in the office. The DV200-equipped system will be fine for proof of concept and a fair bit of editing. The DV500 system will be the heavy lifter, and all the projects will go to that system for eventual output. I expect great things from that setup.

The most difficult part of my last video editing project (which is almost wrapped up now; it’s good enough for use but I’m a perfectionist and we still have almost a week before it’ll be used) was getting the DV500’s video inputs and outputs working. It turned out my problem was a little checkbox in the Pinnacle control panel. I’d ticked the Test Video box to make sure the composite output worked, back when I first set the board up. Then I didn’t uncheck it. When I finally unchecked it, both the video inputs and outputs started working from inside Premiere. I outputted the project directly to VHS so it could be passed around, and then for grins, I put in an old tape and captured video directly from it. It worked. Flawlessly.

One more caveat: Spend some of the money you saved by not buying a Mac on memory. Lots of memory. I’m using 384 MB of RAM, which should be considered minimal. I caught myself going to Crucial’s Web site and pricing out three 512-meg DIMMs. Why three? My board only has three slots. Yes, I’d put two gigs of RAM in my video editing station if I could.

OK, two more caveats: Most people just throw any old CD-ROM drive into a computer and use it to rip audio. You’ll usually get away with that, but if you want high-quality samples off CD to mix into your video production, get a Plextor drive. Their readers are only available in SCSI and they aren’t cheap–a 40X drive will run you close to $100, whereas no-name 52X drives sometimes go for $20-$30–but you’ll get the best possible samples from it. I have my Plextor set to rip at whatever it determines the maximum reliable speed may be. On a badly scratched CD sometimes that turns out to be 1X. But the WAV files it captures are always pristine, even if my audio CD players won’t play the disc anymore.

Desktop Linux and the truth about forking

Desktop Linux! I wanna talk a little more about how Linux runs on a Micron Transport LT. I chose Debian 2.2r3, the “Potato” release, because Debian installs almost no extras. I like that. What you need to know to run Linux on a Micron LT: the 3Com miniPCI NIC uses the 3c59x kernel module. The video chipset uses the ATI Mach64 X server (in XFree86 3.3.6; if you upgrade to 4.1 you’ll use plain old ATI). Older Debian releases gave this laptop trouble, but 2.2r3 runs fine.
I immediately updated parts of it to Debian Unstable, because I wanted to run Galeon and Nautilus and Evolution. I haven’t played with any GNOME apps in a long time. A couple of years ago when I did it, I wasn’t impressed. KDE was much more polished. I didn’t see any point in GNOME; I wished they’d just pour their efforts into making KDE better. I still wish that, and today KDE is still more polished as a whole, but GNOME has lots of cool apps. Nautilus has the most polish of any non-Mac app I’ve ever seen, and if other Linux apps rip off some of its code, Microsoft’s going to have problems. It’s not gaudy and overboard like Mac OS X is; it’s just plain elegant.
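A partial upgrade like that is just a matter of pointing apt at unstable and pulling only the packages you want. A sketch (mirror URL illustrative; the package names are the real Debian names of that era):

```
# /etc/apt/sources.list: add an unstable line alongside your stable one
deb http://ftp.debian.org/debian unstable main

# then:
#   apt-get update
#   apt-get install galeon nautilus evolution
```

Apt drags in the newer libraries those apps need and leaves the rest of the system on Potato.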

Galeon is the best Web browser I’ve ever seen. Use its tabs feature (go to File, New Tab) and see for yourself. It’s small and fast like Opera, compatible like Netscape, and has features I haven’t seen anywhere else. It also puts features like freezing GIF animation and disabling Java/JavaScript out where they belong: In a menu, easily accessible. And you can turn them off permanently, not just at that moment.

Evolution is a lot like Outlook. Its icons look a little nicer–not as nice as Nautilus, but nice–and its equivalent of Outlook Today displays news headlines and weather. Nice touch. And you can tell it what cities interest you and what publications’ headlines you want. As a mail reader, it’s very much like Outlook. I can’t tell you much about its PIM features, because I don’t use those heavily in Outlook either.

The first time I showed it to an Outlook user at work, her reaction was, “And when are we switching to that?”

If you need a newsreader, Pan does virtually everything Forte Agent or Microplanet Gravity will do, plus a few tricks they won’t. It’s slick, small, and free too.

In short, if I wanted to build–as those hip young whippersnappers say–a pimp-ass Internet computer, this would be it. Those apps give you better functionality than you’ll get for free on Windows or a Mac. For that matter, you could buy $400 worth of software on another platform and not get as much functionality.

Linux development explained. There seems to be some confusion over Linux, and the kernel forking, and all this other stuff. Here’s the real dope.

First off, the kernel has always had forks. Linus Torvalds has his branch, which at certain points in history is the official one. When Torvalds has a branch, Alan Cox almost always has his own branch. Even when Cox’s branch isn’t the official one, many Linux distributions derive their kernels from Cox’s branch. (They generally don’t use the official one either.) Now, Cox and Torvalds had a widely publicized spat over the virtual memory subsystem recently. For a while, the official branch and the -ac branch had different VMs. Words were exchanged, and misinterpreted. Both agreed the original 2.4 VM was broken. Cox tried to fix it. Torvalds replaced it with something else. Cox called Torvalds’ approach the unofficial kernel 2.5. But Torvalds won out in the end–the new VM worked well.

Now you can expect to see some other sub-branches. Noted kernel hackers like Andrea Arcangeli occasionally do a release. Now that Marcelo Tosatti is maintaining the official 2.4 tree, you might even see a -ac release again occasionally. More likely, Cox and Torvalds will pour their efforts into 2.5, which should be considered alpha-quality code. Some people believe there will be no Linux 2.6; that 2.5 will eventually become Linux 3.0. It’s hard to know. But 2.5 is where the new and wonderful and experimental bits will go.

There’s more forking than just that going on though. The 2.0 and 2.2 kernels are still being maintained, largely for security reasons. But not long ago, someone even released a bugfix for an ancient 0.-something kernel. That way you can still keep your copy of Red Hat 5.2 secure and not risk breaking any low-level kernel module device drivers you might be loading (to support proprietary, closed hardware, for example). Kernels are generally upward compatible, but you don’t want to risk anything on a production server, and the kernel maintainers recognize and respect that.

As far as the end user is concerned, the kernel doesn’t do much. What 2.4 gave end users was better firewalling code, more filesystems, and hopefully slightly better performance. As far as compatibility goes, the difference between an official kernel and an -ac kernel and an -aa kernel is minor. There’s more difference between Windows NT 4.0 SP2 and SP3 than there is between anyone’s Linux 2.4 kernel, and, for that matter, between 2.4 and any (as of Nov. 2001) 2.5 kernel. No one worries about Windows fragmenting, and when something Microsoft does breaks an application, no one notices.

So recent events are much ado about nothing. The kernel will fragment, refragment, and reunite, just as it has always done, and eventually the best code will win. Maybe at some point a permanent fracture will happen, as happened in the BSD world. That won’t be an armageddon, even though Jesse Berst wants you to think it will be (he doesn’t have anything else to write about, after all, and he can’t be bothered with researching something non-Microsoft). OpenBSD and NetBSD are specialized distributions, and they know it. OpenBSD tries to be the most secure OS on the planet, period. Everything else is secondary. NetBSD tries to be the most portable OS on the planet, and everything else is secondary. If for some reason you need a Unix to run on an old router that’s no longer useful as a router and you’d like to turn it into a more general-purpose computer, NetBSD will probably run on it.

Linux will fragment if and when there is a need for a specialized fragment. And we’ll all be the better for it. Until someone comes up with a compelling reason to do so, history will just continue to repeat itself.

A different Monday, but not much better…

Moves at work continue, but unfortunately the electrical contractors we have are as incompetent as ever, and of course IT takes the brunt of the attack when computers don’t work. They don’t care if it’s an electrical problem or not; all they know is their computer doesn’t work, and of course it’s always IT’s fault if the computer doesn’t work. And with one person to keep 300 desktop PCs in tip-top shape, I usually can’t be up there and have the problem solved within five minutes.
In the last three weeks, we’ve lost three power supplies, two printers, an expensive proprietary modem, and a network card. In two instances, there was an honest-to-goodness fire, with flames and everything.

I think it’s time we sent an electrical contractor or two packing.

Meanwhile I’ve got incompetent department directors who plan moves without giving more than a half hour’s notice, and of course they throw a fit when the move falls to pieces and I’m off solving another problem. I also find myself not caring. Go ahead and yell. Davey’s not listening, la la la, and his boss isn’t listening, and his boss’ boss isn’t listening, and if his boss’ boss’ boss listens and says anything, he’ll have two, maybe three raving lunatics at his door in a heartbeat and I think he knows it.

Deep breath. OK. I feel better now. Kind of.

Let’s see what kind of hints The Big Guy may have been dropping with the day’s other events, shall we?

I had a meeting at church at 7 p.m. So I headed out to my car at 10 ’til 6, put my key in the ignition, and the engine coughed, and then nothing. No electrical system. Hmm. Time to find out how good Chrysler Roadside Assistance is, eh? Well, I called, waited an hour and a half, and they never showed up. So I paced in the beautiful October twilight, waiting for a driver who’d never arrive, thinking there are a number of things I’d love to do at twilight outdoors in St. Louis in October (and waiting for a tow truck is very near the top of that list, let me tell you!) but it sure beats sitting in a meeting after dealing with irate, high-maintenance people at work for 9+ hours.

And I noticed something. I wasn’t at the meeting, and yet the world failed to fall apart.

Finally I gave up on the tow truck driver and asked one of my coworkers for a jump. Maybe the problem was a dead battery, even though I didn’t leave my lights on or anything. Indeed it was. I drove home, and about halfway there my battery light came on. I guided the car home, called Chrysler again, and asked them what to do.

On my answering machine, there was a pair of messages waiting for me. It was actually one message, but my answering machine is extremely rude and cuts you off after about 10.5 seconds. OK, maybe 30. But it seems like 10.5 seconds to everyone else but me. So most people leave a message, get cut off, then call me back. Sometimes they call me back a third or even a fourth time. Usually by then they’re pretty steamed. But I digress, as always. The message basically boiled down to, “Hey Dave, I understand you’re planning to teach Friday, but I hear things are really hectic so there’s no need for us to stay on the regular schedule. I’ll teach for you if you want.”

I had no idea when I’d get a chance to put a lesson together, to be completely honest. So I called her back and said if she wanted to teach, she could go right ahead. And I thanked her.

Hints taken. So much time doing stuff for God there’s no time to spend with God. So I skipped out on the meeting and now I’m not teaching Friday. I might even show up a little late, for good measure.

And now something completely different. This is starting to sound like the Stress Underground, not the Silicon Underground. So let’s talk about silicon.

Dan Bowman sent me a link to a suggestion that businesses buy old Mac clones, then dump $600 worth of upgrades into them so they can run Mac OS X and avoid paying $199 for a copy of Windows.

Yes, I know I’m teetering on the brink of mental illness here. So I’m assuming that if I were completely sane, this would make even less sense.

The best-selling software package for the Macintosh is (drum roll please)… Microsoft Office. So all you’ve accomplished so far is paying a little less money to Microsoft.

I’ve seen Mac OS X. I’ve tried to install Mac OS X. It wasn’t a pleasant experience. And this was a copy of Mac OS X that came with a brand-new G4. Mac OS X is not production-quality software yet. Not that that’s much of a problem. There’s precious little native software to run on it. For native software, you pretty much have to download and compile your own. If you’re going to do that, you might as well just run Linux, since it’s free for the asking and runs on much less-expensive hardware.

Most businesses are a bit hesitant to put Linux on the desktop yet. Some are starting to see the light. But a business that’s reluctant to put Linux on brand-new desktop PCs even when they can pay for good support they’ll probably never need isn’t too likely to be interested in buying a four-year-old Mac or Mac clone, plus 128 megs of obsolete and therefore overpriced memory plus a hard drive plus a disk controller plus a USB card, from five different vendors who will all point fingers at one another the instant something goes wrong. (And we’re talking Apple here. Things will go wrong.)

And yes, I know there are thousands of people who’ve successfully put CPU upgrades in Macintoshes, but it’s very hit-and-miss. I spent two of the most frustrating days of my life trying to get a Sonnet G3 accelerator to work in a Power Mac 7500. It either worked, failed to boot, or performed just like the stock 100 MHz CPU. Any time you turned it on, you didn’t know which of the three you would get. The local Mac dealer was clueless. I called Sonnet. They were clueless. I struggled some more. I called Sonnet back. I got a different tech. He asked what revision of motherboard I had. I looked. It said VAL4, I think. He told me he was surprised it worked 1/3 of the time. That accelerator never works right with that revision of motherboard. He suggested I return the card, or do a motherboard swap. Of course a compatible motherboard costs more than the accelerator card.

And of course there was absolutely no mention of any of this on Sonnet’s web site. At least you can go to a manufacturer of PC upgrades and read their knowledge base before you buy. Sometimes you can even punch in what model system you have and they’ll tell you if they work. Not that those types of upgrades make much sense anyway, when a replacement motherboard-and-CPU combo starts at around $150.

Suffice it to say I won’t be repeating that advice at work. I just got a flyer in the mail, offering me 700 MHz Compaq PCs preloaded with Win98, with a 15-inch flat-panel monitor, for $799. With a warranty. With support. Yeah, I’d rather have Windows 2000 or Windows XP on it. The only reason Compaq makes offers like that is to move PCs, so I’m sure they’d work with my purchasing guy and me.

Think about it. I can have a cobbled-together did-it-myself 400 MHz Mac refurb without a monitor for $700-$750. Or I can have that Compaq. That’s like getting a flat-panel monitor for 50 bucks. As far as usability and stability go, I’d rate Win98 and Mac OS X about equal. But for the time and money I’d save, I could afford to step up to a better version of Windows. Or I could bank the bucks and run Linux on it.

If you’re already a Mac zealot, I guess that idea might make sense. I’ve spent several years deploying, operating, and maintaining both Macs and PCs side-by-side in corporate environments. I have no great love for Microsoft. Most people would call my relationship with Microsoft something more like seething hatred.

But the biggest problems with PC hardware, in order, are commodity memory, cheap power supplies, proliferation of viruses, and then, maybe, Microsoft software. You can avoid the first two problems by buying decent hardware from a reputable company. (No, Gateway, that doesn’t include you and your Packard Bell-style 145-watt power supplies.) You can avoid the third problem with user education. (It’s amazing how quickly users learn when you poke ’em with a cattle prod after they open an unexpected attachment from a stranger. The biggest problem is getting that cattle prod past building security.) Microsoft software doesn’t exactly bowl everyone over with its reliability, but when Adobe recommends that Mac users reboot their machines every day before they leave for lunch, you know something’s up. Even Windows 95’s uptime was better than that.

Disappointment… Plus Linux vs. The World

It was looking like I’d get to call a l337 h4x0r to the carpet and lay some smackdown at work, but unfortunately I had a prior commitment. Too many things to do, not enough Daves to go around. It’s the story of my life.
And I see Infoworld’s Bob Lewis is recommending companies do more than give Linux a long, hard look–he’s saying they should consider it on the desktop.

He’s got a point. Let’s face it. None of the contenders get it right. So-called “classic” Mac OS isn’t a modern OS–it has no protected memory, no pre-emptive multitasking, and only limited threading support. It’s got all the disadvantages of Windows 3.1 save being built atop the crumbling foundation of MS-DOS. I could run Windows 3.1 for an afternoon without a crash. I can run Windows 95 for a week or two. I can usually coax about 3-4 days out of Mac OS. Mac users sometimes seem to define “crash” differently, so I’ll define what I mean here. By a crash, I mean an application dying with an error Type 1, Type 2, or Type 10. Or the system freezing and not letting you do anything. Or a program quitting unexpectedly.

But I digress. Mac OS X has usability problems, it’s slow, and it has compatibility problems. It has promise, but it’s been thrust into duty that it’s not necessarily ready for. Like System 7 of the early ’90s, it’s a radical change from the past, and it’s going to take time to get it ready for general use. Since compilers and debuggers are much faster now, I don’t think it’ll take as long necessarily, but I don’t expect Mac OS X’s day to arrive this year. Developers also have to jump on the bandwagon, which hasn’t happened.

Windows XP… It’s slow, it’s way too cutesy, and only time will tell if it will actually succeed at displacing both 9x and NT/2000. With Product Activation being an upgrader’s nightmare, Microsoft may shoot themselves in the foot with it. Even if XP is twice as good as people say it’s going to be, a lot of people are going to stay away from it. Users don’t like Microsoft policing what they do with their computers, and that’s the perception that Product Activation gives. So what if it’s quick and easy? We don’t like picking up the phone and explaining ourselves.

Linux… It hasn’t lived up to its hype. But when I’ve got business users who insist on using Microsoft Works because they find Office too complicated, I have a hard time buying the argument that Linux can’t make it in the business environment without Office. Besides, you can run Office on Linux with Win4Lin or VMware. But alternatives exist. WordPerfect Office gets the job done on both platforms–and I know law offices are starting to consider the move. All a lawyer or a lawyer’s secretary needs to be happy, typically, is a familiar word processor, a Web browser, and a mail client. The accountant needs a spreadsheet, and maybe another financial package. Linux has at least as many Web browsers as Windows does, and plenty of capable mail clients; WP Office includes Quattro Pro, which is good enough that I’ve got a group of users who absolutely refuse to migrate away from it. I don’t know if I could run a business on GnuCash. But I’m not an accountant. The increased stability and decreased cost make a strong case for Linux in a law firm, though. And in the businesses I count as clients, anywhere from 75-90% of the users could get their job done in Linux just as productively. Yes, the initial setup would be more work than Windows’ initial setup, but the same system cloning tricks will work, mitigating that. So even if it takes 12 hours to build a Linux image as opposed to 6 hours to build a Windows image, the decreased cost and decreased maintenance will pay for it.
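The cloning trick I’m alluding to is nothing fancy–stream a master partition through gzip to an image file, then stream it back onto each clone. Here’s a minimal sketch using scratch files in place of real partitions; on real hardware you’d substitute device names like /dev/hda1 (an assumption–adjust for your machines), and the clones need identical partition layouts.

```shell
# Sketch of the dd-and-gzip cloning pipeline, with scratch files
# standing in for real partitions.
tmp=$(mktemp -d)

# Stand-in for the master machine's partition: 512 KB of zeroes with
# some recognizable data written at the front.
dd if=/dev/zero of="$tmp/master" bs=1024 count=512 2>/dev/null
printf 'golden image' | dd of="$tmp/master" conv=notrunc 2>/dev/null

# Capture: stream the "partition" through gzip to a compressed image.
dd if="$tmp/master" bs=64k 2>/dev/null | gzip -9 > "$tmp/image.gz"

# Restore: decompress the image back onto the clone's "partition".
gunzip -c "$tmp/image.gz" | dd of="$tmp/clone" bs=64k 2>/dev/null

# Verify the clone is bit-for-bit identical to the master.
cmp -s "$tmp/master" "$tmp/clone" && echo "clone matches master"
```

Tools like partimage are smarter–they skip unused blocks instead of compressing them–but the dumb dd pipe works on anything.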

I think Linux is going to get there. As far as Linux looking and acting like Windows, I’ve moved enough users between platforms that I don’t buy the common argument that that’s necessary. Most users save their documents wherever the program defaults to. Linux defaults to your home directory, which can be local or on a server somewhere. The user doesn’t know or care. Most users I support call someone for help when it comes time to save something on a floppy (or do anything remotely complicated, for that matter), then they write down the steps required and robotically repeat them. When they change platforms, they complain about having to learn something new, then they open up their notebook, write down the new steps, rip out the old page they’ve been blindly following for months or years, and follow the new process.

It amuses me that most of the problems I have with Linux are with recent distributions that try to layer Microsoft-like Plug and Play onto it. Linux, unlike Windows, is pretty tolerant of major changes. I can install TurboLinux 6.0 on a 386SX, then take out the hard drive and put it in a Pentium 4 and it’ll boot. I’ll have to reconfigure XFree86 to take full advantage of the new architecture, but that’s no more difficult than changing a video driver in Windows–and that’s been true since about 1997, with the advent of Xconfigurator. Linux needs to look out for changes to sound cards and video cards, and, sometimes, network cards. The Linux kernel can handle changes to just about anything else without a hiccup. Once Red Hat and Mandrake realize that, they’ll be able to develop a Plug and Play that puts Windows to shame.

The biggest thing that Linux lacks is applications, and they’re coming. I’m not worried about Linux’s future.

What can I say about Tuesday…?

Photography. Tom sent me links to the pictures he took on the roof of Gentry’s Landing a couple of weeks ago. He’s got a shot of downtown, the dome, and the warehouse district, flanked by I-70 on the west and the Mississippi River on the east.
I’m tired. I spent yesterday fighting Mac OS X for a couple of hours. It still feels like beta software. I installed it on a new dual-processor G4/533 with 384 MB RAM, and it took four installation attempts to get one that worked right. Two attempts just flat-out failed, and the installation said so. A third attempt appeared successful, but it felt like Windows 95 on a 16-MHz 386SX with 4 megs of RAM. We’re talking a boot time measured in minutes here. The final attempt was successful and it booted in a reasonable time frame–not as fast as Windows 2000 on similar hardware and nowhere near the 22 seconds I can make Win9x boot in, but faster, I think, than OS 9.1 would boot on the same hardware–and the software ran, but it was sluggish. All the eye candy certainly wasn’t helping. Scrolling around was really fast, but window-resizing was really clunky, and the zooming windows and the menus that literally did drop down from somewhere really got on my nerves.

All told, I’m pretty sure my dual Celeron-500 running Linux would feel faster. Well, I know it’d be faster because I’d put a minimalist GUI on it and I’d run a lot of text apps. But I suspect even if I used a hog of a user interface like Enlightenment, it would still fare reasonably well in comparison.

I will grant that the onscreen display is gorgeous. I’m not talking the eye candy and transparency effects, I’m talking the fonts. They’re all exceptionally crisp, like you’d expect on paper. Windows, even with font smoothing, can’t match it. I haven’t seen Linux with font smoothing. But Linux’s font handling up until recently was hideous.

It’s promising, but definitely not ready for prime time. There are few enough native apps for it that it probably doesn’t matter much anyway.

Admittedly, I had low expectations. About a year ago, someone said something to me about OS X, half in jest, and I muttered back, “If anyone can ruin Unix, it’s Apple.” Well, “ruin” is an awfully harsh word, because it does work, but I suspect a lot of people won’t have the patience to stick with it long enough to get it working, and they may not be willing to take the extreme measures I ultimately took, which was to completely reformat the drive to give it a totally clean slate to work from.

OS X may prove yet to be worth the wait, but anyone who thinks the long wait is over is smoking crack.

Frankly, I don’t know why they didn’t just compile NeXTStep on PowerPC, slap in a Mac OS classic emulation layer, leave the user interface alone (what they have now is an odd hybrid of the NeXT and Mac interfaces that just feels really weird, even to someone like me who’s spent a fair amount of time using both), and release it three years ago.

But there are a lot of things I don’t know.

I spent the rest of the day fighting Linux boot disks. I wanted the Linux equivalent of a DOS boot disk with Ghost on it. Creating one from scratch proved almost impossible for me, so I opted instead to modify an existing one. The disks provided at partimage.org were adequate except they lacked sfdisk for dumping and recreating partition tables. (See Friday if you don’t have the foggiest idea what I’m talking about right about now, funk soul brother.) I dumped the root filesystem to the HD by booting off the two-disk set, mounting the hard drive (mount -t ext2 /dev/hda1 /mnt) and copying each directory (cp -a [directory name] [destination]). Then I made modifications. But nothing would fit, until I discovered the -a switch. The vanilla cp command had been expanding out all the symlinks, bloating the filesystem to a wretched 10 megs. It should have been closer to 4 uncompressed, 1.4 megs compressed. Finally I got what I needed in there and copied it to a ramdisk in preparation for dumping it to a floppy. (You’ve gotta compress it first and make sure it’ll fit.) I think the command was dd if=/dev/ram0 bs=1k | gzip -v9 > [temporary file]. The size was 1.41 MB. Excellent. Dump it to floppy: dd if=[same temporary file from before] of=/dev/fd0 bs=1k
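That cp symlink gotcha is easy to reproduce, by the way. With GNU coreutils cp you have to ask for dereferencing explicitly with -L, but the effect is the same one that bloated my root filesystem: the link gets replaced by a full copy of its target. A quick sketch, using scratch directories rather than a real boot-disk filesystem:

```shell
# Demonstrate the difference between a dereferencing copy and an
# archive-mode (cp -a) copy of a tree containing a symlink.
tmp=$(mktemp -d)
mkdir "$tmp/src"

# A 100 KB file, plus a symlink pointing at it. The symlink itself
# occupies almost no space.
dd if=/dev/zero of="$tmp/src/big.bin" bs=1024 count=100 2>/dev/null
ln -s big.bin "$tmp/src/link.bin"

cp -rL "$tmp/src" "$tmp/deref"   # -L dereferences: link.bin becomes a full 100 KB copy
cp -a  "$tmp/src" "$tmp/arch"    # -a preserves the symlink as a symlink

du -sk "$tmp/deref" "$tmp/arch"  # the dereferenced tree is ~100 KB bigger
```

Multiply that waste across every symlink in /lib and /usr and it’s easy to see how a 4-meg root filesystem balloons to 10.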

And that’s why my mind feels fried right now. Hours of keeping weird commands like that straight will do it to you. I understand the principles, but the important thing is getting the specifics right.