Intel inside a Mac?

File this under rumors, even if it comes from the Wall Street Journal: Apple is supposedly considering using Intel processors.

Apple’s probably pulling a Dell. It’s technically feasible to recompile Mac OS X and run it on Intel; NeXTstep, the OS Mac OS X is based on, ran on Intel processors after NeXT abandoned the Motorola 68K family.

Of course the x86 is nowhere near binary-compatible with the PowerPC CPU family. But Apple has overcome that before; the PowerPC wasn’t compatible with the m68K either. Existing applications won’t run as fast under emulation, but it can be done.

Keeping people from running OS X on their whitebox PCs and even keeping people from running Windows on their Macs is doable too. Apple already knows how. Try installing Mac OS 9 on a brand-new Apple. You can’t. Would Apple allow Windows to run on their hardware but not the other way? Who knows. It would put them in an interesting marketing position.

But I suspect this is just Apple trying to gain negotiating power with IBM Microelectronics. Dell famously invites AMD over to talk and makes sure Intel knows AMD’s been paying a visit. What better way is there for Apple to get new features, better clock rates, and/or better prices from IBM than by flirting with Intel and making sure IBM knows about it?

I won’t rule out a switch, but I wouldn’t count on it either. Apple is selling 3 million computers a year, which sounds puny today, but that’s as many computers as it sold in its glory days, or more. Plus Apple has sources of revenue that it didn’t have 15 years ago. If it could be profitable selling 3 million computers a year in 1990, it’s profitable today, especially considering all of the revenue it can bring in from software (both OS upgrades and applications), iPods and music.

Floppies, meet your replacement

I must be the next-to-last person in the world to spend significant lengths of time experimenting with these, but for the benefit of the last person in the world, I’d like to talk about USB flash drives, also known as thumb drives (for a brand name), pen drives, or keychain drives, because they’re small enough to fit on a keychain.

They are, as that popular brand name suggests, about the size of your thumb. It’s possible to buy one that holds as little as 64 megabytes of data, which is still a lot of Word and Excel files, but currently the sweet spot seems to be 512 megabytes or 1 GB. This is, of course, always a moving target, but as I write, it’s entirely possible to find a 512-meg drive for around $40, although sometimes you have to deal with rebates to get the price that low. It’s harder, but still possible, to get a 1 GB drive for under $90. That will change. Currently a 2 GB drive is more than $200.

I remember when people went ga-ga over a 1 GB hard drive priced at an astoundingly low $399. That was only 10 years ago. Progress marches on, and sometimes progress really is an improvement.
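For the record, here’s the arithmetic on those prices, in a few lines of Python:

```python
# Cost per megabyte, using the prices quoted above.
drives = [
    ("1 GB hard drive, ca. 1994", 399.00, 1024),
    ("512 MB flash drive today",   40.00,  512),
    ("1 GB flash drive today",     90.00, 1024),
]

for name, dollars, megs in drives:
    print(f"{name}: {100 * dollars / megs:.1f} cents/MB")

# 1 GB hard drive, ca. 1994: 39.0 cents/MB
# 512 MB flash drive today: 7.8 cents/MB
# 1 GB flash drive today: 8.8 cents/MB
```

So flash storage today is already about a fifth the per-megabyte price of that 1994 wonder drive.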

The drives are so small because they use flash memory–a type of readable/writable memory chip that doesn’t lose its contents when it loses power. It’s not as fast as RAM, it’s a lot more expensive, and its lifespan is far more limited, so you won’t see flash memory replacing your computer’s RAM any time soon. But as a replacement for the floppy disk, it’s ideal. It’s fast, it’s compatible, and unlike writable CDs and DVDs, it requires no special software or hardware to write.

The drive plugs into a USB port, which is present on nearly every computer made since about 1997. Use with Windows 98 will almost certainly require the installation of a driver (hopefully your drive comes with either a driver or a web site you can use to download a driver–check compatibility before you buy one for Win98), but with Windows 2000, XP, and Mac OS X, these devices should just plug in and work, for the most part. With one Windows 2000 box, I had to reboot after plugging the drive in the first time.

From then on, it just looks like a hard drive. You can edit files from it, or drag files onto it. If the computer has USB 2.0 ports, its speed rivals that of a hard drive. It’s pokier on the older, more common USB 1.1 ports, but still very tolerable.

The only thing you have to remember is to stop the device before you yank it out of the USB port, to avoid data loss. Windows 2000 and XP provide an icon in the system tray for this.

These are great as personal backup devices. They’re small enough to carry with you anywhere–the small flashlight I keep on my keychain is bigger than most of these drives–and files only take a few minutes to copy, so you can copy them to computers belonging to friends or relatives for safekeeping.

If your only interest in a laptop is carrying work with you–as opposed to being able to cruise the net in trendy coffee shops while you drink a $5 cup of coffee–a pen drive makes a very affordable alternative to a laptop. Plug one into your work computer, copy your files, and take work home with you. Take it on the road and you can plug it into any available computer to do work. It’s not the same as having your computer with you all the time, but for many people, it’s more than good enough, and the drives make a Palm Pilot look portly, let alone a laptop.

So how do you maximize the usable space on these devices? The ubiquitous Zip and Unzip work well, and you can download small command-line versions from info-zip.org. If you want something more transparent, there’s an old PC Magazine utility from 1997, confusingly named UnFrag, that reduces the size of many Word and Excel files. Saving in older file formats can also reduce the size, and it improves your odds of being able to work elsewhere; some computers still only have Office 97.
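If you’d rather script the housekeeping than zip things by hand, Python’s standard zipfile module does the same job as the Info-Zip tools. A quick sketch; the folder and drive letter are made up, so adjust to taste:

```python
import zipfile
from pathlib import Path

SOURCE = Path("C:/My Documents")   # hypothetical folder to archive
DEST = Path("E:/docs.zip")         # hypothetical flash drive letter

# Pack the Word and Excel files into one deflate-compressed archive;
# Office-era documents typically shrink a lot.
with zipfile.ZipFile(DEST, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    for f in SOURCE.rglob("*"):
        if f.suffix.lower() in (".doc", ".xls"):
            zf.write(f, arcname=f.relative_to(SOURCE))
```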

You may be tempted to reformat the drive as NTFS and turn on compression. Don’t. Some drives respond well to NTFS and others stop working. But beyond that, NTFS’s overhead makes it impractical for drives smaller than a couple of gigs (like most flash drives), and you probably want your drive to be readable in as many computers as possible. So FAT is the best option, being the lowest common denominator.

To maximize the lifespan of these drives, reduce the number of times you write to them. It’s better to copy your files to a local hard drive, edit them there, then copy them back to the flash drive. But in practice, their life expectancy is much longer than that of a Zip disk, a floppy, or a CD-RW. Most people are going to find the device is obsolete before it fails.
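If you want to make a habit of that, it scripts easily too. A minimal sketch: the path is hypothetical, and Notepad is just a stand-in for whatever program you actually edit with:

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

ON_DRIVE = Path("E:/work/budget.xls")   # hypothetical file on the flash drive

with tempfile.TemporaryDirectory() as tmp:
    local = Path(tmp) / ON_DRIVE.name
    shutil.copy2(ON_DRIVE, local)                 # one read from the flash drive
    subprocess.run(["notepad.exe", str(local)])   # edit on the local disk; blocks until closed
    shutil.copy2(local, ON_DRIVE)                 # one write back when you're done
```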

The technologically savvy can even install Linux on one of these drives. As long as a computer is capable of booting off a USB device, then these drives can be used either as a data recovery tool, or as a means to run Linux on any available computer. 512 megabytes is enough to hold a very usable Linux distribution and still leave some space for data.

In honor of Charlemagne’s birthday…

I have posted my genealogy, including Charlemagne, online.

As for why a Scot is making a big deal about Charlemagne’s birthday, well, I’m descended from him. But I guess I could have said I did this to celebrate Walter Percy Chrysler‘s birthday. Or William Austin, but you probably haven’t heard of him.

Actually I’m just being silly. I’ve had this running since this past weekend, but this is the first time I’ve gotten around to mentioning it.

You can view anything that happened more than 100 years ago without a password. Anything newer than that is password-protected, to guard my relatives’ privacy and protect them from identity theft. As dead people’s birthdays come up, I may open their records, but I’m not going to sift through 2,300+ records all at once looking for people who have died since 1904 to open them up.
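If I ever do automate that sweep, it would look something like this, run against a GEDCOM export. This is only a sketch: the file name is made up, and real GEDCOM dates come in enough formats that a proper parser would be wiser:

```python
GEDCOM = "family.ged"   # hypothetical GEDCOM export
opened = []
name, in_death = None, False

for line in open(GEDCOM, encoding="latin-1"):
    tag = line.strip()
    if tag.startswith("0") and tag.endswith("INDI"):
        name, in_death = None, False           # new individual record
    elif tag.startswith("1 NAME"):
        name = tag[7:].replace("/", "")        # strip the surname slashes
    elif tag.startswith("1 "):
        in_death = tag.startswith("1 DEAT")    # are we inside a death event?
    elif tag.startswith("2 DATE") and in_death and name:
        opened.append((name, tag[7:]))         # has a death date on record
        in_death = False

print(f"{len(opened)} records have death dates and could be opened up:")
for name, when in opened:
    print(f"  {name} (d. {when})")
```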

I used a program called GeneWeb, which comes with Debian but is available for other Linux distributions, Mac OS X, and Windows. It’s a nice package. In some ways it’s clunkier than Family Tree Maker, but for some things, like entering entire families, it’s much nicer and faster. There’s always a trade-off with software like this.

It’s a nice tool for online collaboration. Now my mom and aunt can enter information too, and all our stuff will be in sync, which has always been a major problem for us.

I don’t recommend leaving a package like this open to the world for modification, simply because a lot of people with nothing better to do like to vandalize public websites. (That’s why this site requires registration these days.)

Anyway, feel free to look around and play with it. I’m going to go back and finish entering the names of Charlemagne’s children.

The pundits are wrong about Apple’s defection

Remember the days when knowing something about computers was a prerequisite for writing about them?
ZDNet’s David Coursey continues to astound me. Yesterday he wondered aloud what Apple could do to keep OS X from running on standard PCs if Apple were to ditch the PowerPC line for an x86-based CPU, or to keep Windows from running on Apple Macs if they became x86-based.

I’d link to the editorial but it’s really not worth the minimal effort it would take.

First, there’s the question of whether it’s even necessary for Apple to migrate. Charlie pointed out that Apple remains profitable. It has 5% of the market, but that’s beside the point. It’s making money. People use Apple Macs for a variety of reasons, but speed rarely seems to be the clinching factor. A decade ago, the fastest Mac money could buy was an Amiga with Mac emulation hardware–an Amiga clocked at the same speed would run Mac OS and related software about 10% faster than the real thing. And in 1993, Intel pulled ahead of Motorola in the speed race. Intel had 486s running as fast as 66 MHz, while Motorola’s 68040 topped out at 40 MHz. Apple jumped to the PowerPC line, whose clock rate pretty much kept up with the Pentium line until the last couple of years. While the PowerPCs would occasionally beat an x86 at some benchmark or another, the speed was more a point of advocacy than anything else. When a Mac user quoted one benchmark only to be countered by another benchmark that made the PowerPC look bad, the Mac user just shrugged and moved on to some other advocacy point.

Now that the megahertz gap has become the gigahertz gap, the Mac doesn’t look especially good on paper next to an equivalently priced PC. Apple could close the gigahertz gap and shave a hundred bucks or two off the price of the Mac by leaving Motorola at the altar and shacking up with Intel or AMD. And that’s why every pundit seems to expect the change to happen.

But Steve Jobs won’t do anything unless he thinks it’ll get him something. And Apple offers a highly styled, high-priced, anti-establishment machine. Hippie computers, yuppie price. Well, that was especially true of the now-defunct Flower Power and Blue Dalmatian iMacs.

But if Apple puts Intel Inside, some of that anti-establishment lustre goes away. That’s not enough to make or break the deal.

But breaking compatibility with the few million G3- and G4-based Macs already out there might be. The software vendors aren’t going to appreciate the change. Now Apple’s been jerking the software vendors around for years, but a computer is worthless without software. Foisting an instruction set change on them isn’t something Apple can do lightly. And Steve Jobs knows that.

I’m not saying a change won’t happen. But it’s not the sure deal most pundits seem to think it is. More likely, Apple is just pulling a Dell. You know the Dell maneuver. Dell is the only PC vendor that uses Intel CPUs exclusively. But Dell holds routine talks with AMD and shows the guest book signatures to Intel occasionally. Being the last dance partner gives Dell leverage in negotiating with Intel.

I think Apple’s doing the same thing. Apple’s in a stronger negotiating position with Motorola if Steve Jobs can casually mention he’s been playing around with Pentium 4s and Athlon XPs in the labs and really likes what he sees.

But eventually Motorola might decide the CPU business isn’t profitable enough to be worth messing with, or it might decide that it’s a lot easier and more profitable to market the PowerPC as a set of brains for things like printers and routers. Or Apple might decide the gigahertz gap is getting too wide and defect. I’d put the odds of a divorce somewhere below 50 percent. I think I’ll see an AMD CPU in a Mac before I’ll see it in a Dell, but I don’t think either event will happen next year.

But what if it does? Will Apple have to go to AMD and have them design a custom, slightly incompatible CPU as David Coursey hypothesizes?

Worm sweat. Remember the early 1980s, when there were dozens of machines that had Intel CPUs and even ran MS-DOS, yet were, at best, only slightly IBM compatible? OK, David Coursey doesn’t, so I can’t hold it against you if you don’t. But trust me. They existed, and they infuriated a lot of people. There were subtle differences that kept IBM-compatible software from running unmodified. Sometimes the end user could work around those differences, but more often than not, they couldn’t.

All Apple has to do is continue designing their motherboards the way they always have. The Mac ROM bears very little resemblance to the standard PC BIOS. The Mac’s boot block and partition table are all different. If Mac OS X continues to look for those things, it’ll never boot on a standard PC, even if the CPU is the same.
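Those differences are concrete enough to see in the first two sectors of a disk. Here’s a little sketch that sniffs a raw disk image; the file name is hypothetical, but the signatures (the PC MBR’s 55 AA, and the Apple Partition Map’s “ER” and “PM” blocks) are the real ones:

```python
SECTOR = 512

with open("disk.img", "rb") as f:   # hypothetical raw disk image
    block0 = f.read(SECTOR)         # sector 0: MBR or driver descriptor
    block1 = f.read(SECTOR)         # sector 1: first Apple partition map entry

if block0[510:512] == b"\x55\xaa":
    print("PC-style MBR: boot signature 55 AA at offset 510")
elif block0[:2] == b"ER" and block1[:2] == b"PM":
    print("Apple Partition Map: 'ER' driver descriptor, 'PM' map entries")
else:
    print("No familiar partition signature here")
```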

The same differences that keep Mac OS X off of Dells will also keep Windows off Macs. Windows could be modified to compensate for those differences, and there’s a precedent for that–Windows NT 4.0 originally ran on Intel, MIPS, PowerPC, and Alpha CPUs. I used to know someone who swore he ran the PowerPC versions of Windows NT 3.51 and even Windows NT 4.0 natively on a PowerPC-based Mac. NT 3.51 would install on a Mac of comparable vintage, he said. And while NT 4.0 wouldn’t, he said you could upgrade from 3.51 to 4.0 and it would work.

I’m not sure I believe either claim, but you can search Usenet on Google and find plenty of people who ran the PowerPC version of NT on IBM and Motorola workstations. And guess what? Even though those workstations had PowerPC CPUs, they didn’t have a prayer of running Mac OS, for lack of a Mac ROM.

Windows 2000 and XP were exclusively x86-based (although there were beta versions of 2000 for the Alpha), but adjusting to accommodate an x86-based Mac would be much easier than adjusting to another CPU architecture. Would Microsoft go to the trouble just to get at the remaining 5% of the market? Probably. But it’s not guaranteed. And Apple could turn it into a game of leapfrog by modifying its ROM with every machine release. It already does that anyway.

The problem’s a whole lot easier than Coursey thinks.

Analysis of the Apple Xserve

Given my positive reaction to the Compaq ProLiant DL320, Svenson e-mailed and asked me what I thought of Apple’s Xserve.
In truest Slashdot fashion, I’m going to present strong opinions about something I’ve never seen. Well, not necessarily opinions as strong as some of what you’re used to seeing from my direction. But still…

Short answer: I like the idea. The PPC is a fine chip, and I’ve got a couple of old Macs at work (a 7300 and a 7500) running Debian. One of them keeps an eye on the DHCP servers and mails out daily reports (DHCP on Windows NT is really awful; I didn’t think it was possible to mess it up but Microsoft found a way) and acts as a backup listserver (we make changes on it and see if it breaks before we break the production server). The other one is currently acting as an IMAP/Webmail server that served as an outstanding proof of concept for our next big project. I don’t know that the machines are really any faster than a comparable Pentium-class CPU would be, but they’re robust and solid machines. I wouldn’t hesitate to press them into mission-critical duty if the need arose. For example, if the door opened, I’d be falling all over myself to make those two machines handle DHCP, WINS, and caching DNS for our two remote sites.

So… Apples running Linux are a fine thing. A 1U rack-mount unit with a pair of fast PPC chips in it and capable of running Linux is certainly a fine thing. It’ll suck down less electrical power than an equivalent Intel-based system would, which is an important consideration for densely packed data centers. I wouldn’t run Mac OS X Server on it because I’d want all of its CPU power to go towards real work, rather than putting pretty pictures on a non-existent screen. Real servers are administered via telnet or dumb terminal.

What I don’t like about the Xserve is the price. As usual, you get more bang for the buck from an x86-based product. The entry-level Xserve has a single 1 GHz PowerPC, 256 megs of RAM, and a 60-gig IDE disk. It’ll set you back a cool 3 grand. We just paid just over $1300 for a ProLiant DL320 with a 1.13 GHz P3 CPU, 128 megs of RAM, and a 40-gig IDE disk. Adding 256 megs of RAM is a hundred bucks, and the price difference between a 40- and a 60-gig drive is trivial. Now, granted, Apple’s price includes a server license, and I’m assuming you’ll run Linux or FreeBSD or OpenBSD on the Intel-based system. But Linux and BSD are hardly unproven; you can easily expect them to give you the same reliability as OS X Server and possibly better performance.

But the other thing that makes me uncomfortable is Apple’s experience making and selling and supporting servers, or rather the lack thereof. Compaq is used to making servers that sit in the datacenter and run 24/7. Big businesses have been running their businesses on Compaq servers for more than a decade. Compaq knows how to give businesses what they need. (So does HP, which is a good thing considering HP now owns Compaq.) If anything ever goes wrong with an Apple product, don’t bother calling Apple customer service. If you want to hear a more pleasant, helpful, and less suspicious voice on the other end, call the IRS. You might even get better advice on how to fix your Mac from the IRS. (Apple will just tell you to remove the third-party memory in the machine. You’ll respond that you have no third-party memory, and they’ll repeat the demand. There. I just saved you a phone call. You don’t have to thank me.)

I know Apple makes good iron that’s capable of running a long time, assuming it has a quality OS on it. I’ve also been around long enough to know that hardware failures happen, regardless of how good the iron is, so you want someone to stand behind it. Compaq knows that IBM and Dell are constantly sitting on the fence like vultures, wanting to grab its business if it messes up, and it acts accordingly. That’s the beauty of competition.

So, what of the Xserve? It’ll be very interesting to see how much less electricity it uses than a comparable Intel-based system. It’ll be very interesting to see whether Apple’s experiment with IDE disks in the enterprise works out. It’ll be even more interesting to see how Apple adjusts to meeting the demands of the enterprise.

It sounds like a great job for Somebody Else.

I’ll be watching that guy’s experience closely.

Linux Performance Tuning

I found a very superficial Linux Journal article on performance tuning linked from LinuxToday this week. I read the article because I’m a performance junkie and I hoped to maybe find something I hadn’t heard before.
The article recommended a kernel recompile, which many people don’t consider critical anymore. It’s still something I do, especially on laptops, since a kernel tuned to a machine’s particular hardware boots up faster–often much faster. The memory you save by compiling your own kernel isn’t huge–it mattered much more back when a typical computer had 8 MB of RAM–but Linux’s memory management is good, so I like to give it as much to work with as possible. Plus, I’m of the belief that a simple system is a more secure system. The probability of a remote root exploit through the parallel port driver is so low as to be laughable, but when my boss’ boss’ boss walks into my cube and asks me if I’ve closed all possible doors that are practical to close, I want to be able to look him in the eye and say yes.
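For anyone who hasn’t been through it, the 2.4-era ritual goes something like this. I’ve scripted it as a sketch; run it from your kernel source tree, and expect the details to vary a little from distribution to distribution:

```python
import subprocess

KERNEL_TREE = "/usr/src/linux"    # adjust to wherever your source lives

steps = [
    ["make", "menuconfig"],       # interactive: switch off drivers you don't have
    ["make", "dep"],              # rebuild dependencies (needed on 2.4 and earlier)
    ["make", "bzImage"],          # the kernel itself
    ["make", "modules"],
    ["make", "modules_install"],
]
for step in steps:
    subprocess.run(step, cwd=KERNEL_TREE, check=True)

# Then copy arch/i386/boot/bzImage into /boot and re-run lilo.
```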

The same goes for virtual consoles. If a system runs X most of the time, it doesn’t need more than about three consoles. A server needs at most three consoles, since the only time the sysadmin will be sitting at the keyboard is likely to be during setup. The memory savings isn’t always substantial, depending on what version of getty the system is running. But since Linux manages available memory well, why not give it everything you can to work with?
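The gettys live in /etc/inittab, so trimming them is a two-minute job. Here’s a sketch that comments out everything past tty3; back the file up first, and remember init rereads the file after a telinit q:

```python
import re

KEEP = 3   # leave tty1-tty3 alone
lines = open("/etc/inittab").readlines()

with open("/etc/inittab", "w") as f:
    for line in lines:
        m = re.match(r"\d+:.*respawn.*getty.*tty(\d+)", line)
        if m and int(m.group(1)) > KEEP:
            f.write("#" + line)   # comment out the extra consoles
        else:
            f.write(line)
```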

The best advice the article gave was to look at alternative window managers besides the ubiquitous KDE and Gnome. The best thing I’ve ever done from a performance standpoint was to switch to IceWM. KDE and Gnome binaries will still run as long as the libraries are present. But since KDE and Gnome seem to suffer from the same feature bloat that has turned Windows XP and Mac OS X into slow pigs, using another window manager speeds things along nicely, even on high-powered machines.

I take issue with one piece of advice in the article. Partitioning, when done well, reduces fragmentation, improves reliability, and allows you to tune each filesystem for its specific needs. For example, if you have a separate partition for /usr or /bin, which hold executable files, large block sizes (the equivalent of cluster sizes in Windows) improve performance. But for /home, you’ll want small block sizes for efficiency.

The problem is that kernel I/O is done sequentially. If a task requires reading from /usr, then /home, then back to /usr, the disk will move around a lot. A SCSI disk will reorder the requests and execute them in optimal order, but an IDE disk will not. So partitioning IDE disks can actually slow things down. So generally with an IDE disk, I’ll make the first partition a small /boot partition so I’m guaranteed not to have BIOS issues with booting. This partition can be as small as 5 megs since it only has to hold a kernel and configuration files. I usually make it 20 so I can hold several kernels. I can pay for 20 megs of disk space these days with the change under my couch cushions. Next, I’ll make a swap partition. Size varies; Linus Torvalds himself uses a gig. For people who don’t spend the bulk of their time in software development, 256-512 megs should be plenty. Then I make one big root partition out of the rest.

With a multi-drive system, /home should be on a separate disk from the rest. That way, if a drive fails, you’ve halved your recovery time because you’ll either only have to install the OS on a replacement drive, or restore your data from backups on a replacement drive. Ideally, swap should be on a separate disk from the binaries (it can be on the same disk as /home unless you deal with huge data files). The reason should be obvious: If the system is going to use swap, it will probably be while it’s loading binaries.
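When you do split filesystems up, on SCSI or across two drives, the block-size tuning I mentioned happens at filesystem-creation time, via mke2fs’s -b flag. A sketch with hypothetical device names; triple-check them before running anything that formats a disk:

```python
import subprocess

filesystems = [
    ("/dev/sda5", 4096),   # /usr: big, rarely-changing binaries, big blocks
    ("/dev/sdb1", 1024),   # /home: many small files, small blocks
]

for device, block_size in filesystems:
    # mke2fs -b sets the filesystem block size when the filesystem is created
    subprocess.run(["mke2fs", "-b", str(block_size), device], check=True)
```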

Still, I’m very glad I read this article. Buried in the comments for this article, I found a gem of a link I’ve never seen referenced anywhere else before: Linux Performance Tuning. This site attempts to gather all the important information about tuning Linux to specific tasks. The pros know a lot of this stuff, but this is the first time I’ve seen this much information gathered in one place. If you build Linux servers, bookmark that page. You’ll find yourself referring back to it frequently. Contributors to the site include kernel hackers Rik van Riel and Dave Jones.

It’s the best of times, it’s the worst of times…

I hate arguing with women. When guys fight, they fight hard, and they don’t always fight fair, but when the fight’s over, it’s pretty much over. You settle it. Maybe you seethe for a little bit. But eventually, assuming you both still can walk, you can go to hockey games together almost like it never happened.
I’ve found myself in an argument. It’s not like an argument with a guy. Every time I think it’s over, it flares back up. It’s like fighting the hydra. (I don’t know if this is characteristic of arguments with women in general; I generally don’t seek out that experience.)

I found one solution though: Don’t open my inbox.

That worked for me once. After 8 months, she finally quit e-mailing me.

Found on a mailing list. I’m assuming this guy mistyped this:

“I need hell with my installation.”

Some smart aleck responded before I did. “Usually you get that with installation whether you want it or not. Now someone’s demanding it. Newbies, these days.”

I was going to say that if you ran Windows, you’d get that free of charge. (That’s the only thing Microsoft gives you for free!)

A cool phone call. My phone rings at work. Outside call. Don’t tell me she somehow got my number at work… I pick up. “This is Dave.”

“Dave, it’s Todd.”

Ah, my boss. Good thing I picked up, eh?

“You busy?”

When it’s your boss, there is absolutely no right answer to that question. One of my classmates in college told me something worth remembering, though: The truth’s always a lot easier to remember than a lie.

“We can’t come to the phone right now. Please leave a message at the beep.”

Nope. Too late for that.

“Not really,” I say, hoping I won’t regret it. Either he’s gathering data for my personal review, or he’s about to ask me to install Mac OS X on a Blue Dalmatian iMac with 32 megs of RAM (speaking of wanting hell with installation…)

Actually he asks me for something pretty cool. He asks if I’m up to learning some firewalling software. (No, I won’t tell you which one. And no, I won’t tell you who I work for. That’s like saying, “Hey, l337 h4xx0r5! You can’t get me!”)

But I will tell you the IP address. It’s 127.0.0.1. If you can crack that address, you deserve whatever you can get. (No comments from the Peanut Gallery.)

So I hit the books. Thanks to this duty, I get another Linux box. I’ve got a Power Mac running Debian already, which runs scripts that are impossible on NT. It monitors the LAN and reformats some reports and e-mails them to my boss and co-workers at 6 every morning. But the management software runs under NT 4, Red Hat Linux, or Solaris. None of that’ll run on a PowerPC-based machine. So I lay claim to an old system that I happen to know has an Asus motherboard in it, along with 72 megs of RAM. I’ll have fun tweaking that system out. An Asus mobo, a Pentium-class CPU, and a Tulip network card. That’s not the makings of a rockin’ good weekend, but it’ll make for a reliable light-use workstation.
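The monitoring and reformatting are the site-specific parts, but for the curious, the mailing end of a script like that needs nothing exotic; Python’s standard library covers it, and cron does the waking up (0 6 * * *). A sketch with made-up addresses, paths, and mail host:

```python
import smtplib
from email.message import EmailMessage
from pathlib import Path

# Mail out whatever the overnight jobs dropped off.
msg = EmailMessage()
msg["Subject"] = "Overnight LAN report"
msg["From"] = "reports@example.com"               # all addresses are made up
msg["To"] = "boss@example.com, team@example.com"
msg.set_content(Path("/var/reports/overnight.txt").read_text())

with smtplib.SMTP("mail.example.com") as smtp:    # hypothetical mail host
    smtp.send_message(msg)
```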

While the management software runs under Red Hat, some of the infrastructure is BSD-based. So I get to learn some BSD while I’m at it. As long as BSD is sane about /proc and /var/log, I’ll be in good shape. But I heard LSD was invented at Berkeley, so I may have a little learning to do… Maybe listening to some Beatles records while administering those systems would help.

Desktop Linux and the truth about forking

Desktop Linux! I wanna talk a little more about how Linux runs on a Micron Transport LT. I chose Debian 2.2r3, the “Potato” release, because Debian installs almost no extras. I like that. What you need to know to run Linux on a Micron LT: the 3Com miniPCI NIC uses the 3c59x kernel module. The video chipset uses the ATI Mach64 X server (in XFree86 3.3.6; if you upgrade to 4.1 you’ll use plain old ATI). Older Debian releases gave this laptop trouble, but 2.2r3 runs fine.
I immediately updated parts of it to Debian Unstable, because I wanted to run Galeon and Nautilus and Evolution. I haven’t played with any GNOME apps in a long time. A couple of years ago when I did it, I wasn’t impressed. KDE was much more polished. I didn’t see any point in GNOME; I wished they’d just pour their efforts into making KDE better. I still wish that, and today KDE is still more polished as a whole, but GNOME has lots of cool apps. Nautilus has the most polish of any non-Mac app I’ve ever seen, and if other Linux apps rip off some of its code, Microsoft’s going to have problems. It’s not gaudy and overboard like Mac OS X is; it’s just plain elegant.

Galeon is the best Web browser I’ve ever seen. Use its tabs feature (go to File, New Tab) and see for yourself. It’s small and fast like Opera, compatible like Netscape, and has features I haven’t seen anywhere else. It also puts features like freezing GIF animation and disabling Java/JavaScript out where they belong: in a menu, easily accessible. And you can turn them off permanently, not just at that moment.

Evolution is a lot like Outlook. Its icons look a little nicer–not as nice as Nautilus, but nice–and its equivalent of Outlook Today displays news headlines and weather. Nice touch. And you can tell it what cities interest you and what publications’ headlines you want. As a mail reader, it’s very much like Outlook. I can’t tell you much about its PIM features, because I don’t use those heavily in Outlook either.

The first time I showed it to an Outlook user at work, her reaction was, “And when are we switching to that?”

If you need a newsreader, Pan does virtually everything Forte Agent or Microplanet Gravity will do, plus a few tricks they won’t. It’s slick, small, and free too.

In short, if I wanted to build–as those hip young whippersnappers say–a pimp-ass Internet computer, this would be it. Those apps give you better functionality than you’ll get for free on Windows or a Mac. For that matter, you could buy $400 worth of software on another platform and not get as much functionality.

Linux development explained. There seems to be some confusion over Linux, and the kernel forking, and all this other stuff. Here’s the real dope.

First off, the kernel has always had forks. Linus Torvalds has his branch, which at certain points in history is the official one. When Torvalds has a branch, Alan Cox almost always has his own branch. Even when Cox’s branch isn’t the official one, many Linux distributions derive their kernels from Cox’s branch. (They generally don’t use the official one either.) Now, Cox and Torvalds had a widely publicized spat over the virtual memory subsystem recently. For a while, the official branch and the -ac branch had different VMs. Words were exchanged, and misinterpreted. Both agreed the original 2.4 VM was broken. Cox tried to fix it. Torvalds replaced it with something else. Cox called Torvalds’ approach the unofficial kernel 2.5. But Torvalds won out in the end–the new VM worked well.

Now you can expect to see some other sub-branches. Noted kernel hackers like Andrea Arcangeli occasionally do a release. Now that Marcelo Tosatti is maintaining the official 2.4 tree, you might even see a -ac release again occasionally. More likely, Cox and Torvalds will pour their efforts into 2.5, which should be considered alpha-quality code. Some people believe there will be no Linux 2.6; that 2.5 will eventually become Linux 3.0. It’s hard to know. But 2.5 is where the new and wonderful and experimental bits will go.

There’s more forking than just that going on though. The 2.0 and 2.2 kernels are still being maintained, largely for security reasons. But not long ago, someone even released a bugfix for an ancient 0.x kernel. That way you can still keep your copy of Red Hat 5.2 secure and not risk breaking any low-level kernel module device drivers you might be loading (to support proprietary, closed hardware, for example). Kernels are generally upward compatible, but you don’t want to risk anything on a production server, and the kernel maintainers recognize and respect that.

As far as the end user is concerned, the kernel doesn’t do much. What 2.4 gave end users was better firewalling code, more filesystems, and hopefully slightly better performance. As far as compatibility goes, the difference between an official kernel and an -ac kernel and an -aa kernel is minor. There’s more difference between Windows NT 4.0 SP2 and SP3 than there is between anyone’s Linux 2.4 kernel, and, for that matter, between 2.4 and any (as of Nov. 2001) 2.5 kernel. No one worries about Windows fragmenting, and when something Microsoft does breaks some application, no one notices.

So recent events are much ado about nothing. The kernel will fragment, refragment, and reunite, just as it has always done, and eventually the best code will win. Maybe at some point a permanent fracture will happen, as happened in the BSD world. That won’t be an armageddon, even though Jesse Berst wants you to think it will be (he doesn’t have anything else to write about, after all, and he can’t be bothered with researching something non-Microsoft). OpenBSD and NetBSD are specialized distributions, and they know it. OpenBSD tries to be the most secure OS on the planet, period. Everything else is secondary. NetBSD tries to be the most portable OS on the planet, and everything else is secondary. If for some reason you need a Unix to run on an old router that’s no longer useful as a router and you’d like to turn it into a more general-purpose computer, NetBSD will probably run on it.

Linux will fragment if and when there is a need for a specialized fragment. And we’ll all be the better for it. Until someone comes up with a compelling reason to do so, history will just continue to repeat itself.

A different Monday, but not much better…

Moves at work continue, but unfortunately the electrical contractors we have are as incompetent as ever, and of course IT takes the brunt of the attack when computers don’t work. Users don’t care whether it’s an electrical problem or not; all they know is their computer doesn’t work, and of course that’s always IT’s fault. And with one person to keep 300 desktop PCs in tip-top shape, I usually can’t be up there and have the problem solved within five minutes.
In the last three weeks, we’ve lost three power supplies, two printers, an expensive proprietary modem, and a network card. In two instances, there was an honest-to-goodness fire, with flames and everything.

I think it’s time we sent an electrical contractor or two packing.

Meanwhile I’ve got incompetent department directors who plan moves without giving more than a half hour’s notice, and of course they throw a fit when the move falls to pieces and I’m off solving another problem. I also find myself not caring. Go ahead and yell. Davey’s not listening, la la la, and his boss isn’t listening, and his boss’ boss isn’t listening, and if his boss’ boss’ boss listens and says anything, he’ll have two, maybe three raving lunatics at his door in a heartbeat and I think he knows it.

Deep breath. OK. I feel better now. Kind of.

Let’s see what kind of hints The Big Guy may have been dropping with the day’s other events, shall we?

I had a meeting at church at 7 p.m. So I headed out to my car at 10 ’til 6, put my key in the ignition, and the engine coughed, and then nothing. No electrical system. Hmm. Time to find out how good Chrysler Roadside Assistance is, eh? Well, I called, waited an hour and a half, and they never showed up. So I paced in the beautiful October twilight, waiting for a driver who’d never arrive, thinking there are a number of things I’d love to do at twilight outdoors in St. Louis in October (and waiting for a tow truck is very near the top of that list, let me tell you!) but it sure beats sitting in a meeting after dealing with irate, high-maintenance people at work for 9+ hours.

And I noticed something. I wasn’t at the meeting, and yet the world failed to fall apart.

Finally I gave up on the tow truck driver and asked one of my coworkers for a jump. Maybe the problem was a dead battery, even though I didn’t leave my lights on or anything. Indeed it was. I drove home, and about halfway there my battery light came on. I guided the car home, called Chrysler again, and asked them what to do.

On my answering machine, there was a pair of messages waiting for me. It was actually one message, but my answering machine is extremely rude and cuts you off after about 10.5 seconds. OK, maybe 30. But it seems like 10.5 seconds to everyone else but me. So most people leave a message, get cut off, then call me back. Sometimes they call me back a third or even a fourth time. Usually by then they’re pretty steamed. But I digress, as always. The messages basically boiled down to, “Hey Dave, I understand you’re planning to teach Friday, but I hear things are really hectic so there’s no need for us to stay on the regular schedule. I’ll teach for you if you want.”

I had no idea when I’d get a chance to put a lesson together, to be completely honest. So I called her back and said if she wanted to teach, she could go right ahead. And I thanked her.

Hints taken. So much time doing stuff for God there’s no time to spend with God. So I skipped out on the meeting and now I’m not teaching Friday. I might even show up a little late, for good measure.

And now something completely different. This is starting to sound like the Stress Underground, not the Silicon Underground. So let’s talk about silicon.

Dan Bowman sent me a link to a suggestion that businesses buy old Mac clones, then dump $600 worth of upgrades into them so they can run Mac OS X and avoid paying $199 for a copy of Windows.

Yes, I know I’m teetering on the brink of mental illness here. So I’m assuming that if I were completely sane, this would make even less sense.

The best-selling software package for the Macintosh is (drum roll please)… Microsoft Office. So all you’ve accomplished so far is paying a little less money to Microsoft.

I’ve seen Mac OS X. I’ve tried to install Mac OS X. It wasn’t a pleasant experience. And this was a copy of Mac OS X that came with a brand-new G4. Mac OS X is not production-quality software yet. Not that that’s much of a problem. There’s precious little native software to run on it. For native software, you pretty much have to download and compile your own. If you’re going to do that, you might as well just run Linux, since it’s free for the asking and runs on much less-expensive hardware.

Most businesses are a bit hesitant to put Linux on the desktop yet. Some are starting to see the light. But a business that’s reluctant to put Linux on brand-new desktop PCs even when they can pay for good support they’ll probably never need isn’t too likely to be interested in buying a four-year-old Mac or Mac clone, plus 128 megs of obsolete and therefore overpriced memory plus a hard drive plus a disk controller plus a USB card, from five different vendors who will all point fingers at one another the instant something goes wrong. (And we’re talking Apple here. Things will go wrong.)

And yes, I know there are thousands of people who’ve successfully put CPU upgrades in Macintoshes, but it’s very hit-and-miss. I spent two of the most frustrating days of my life trying to get a Sonnet G3 accelerator to work in a Power Mac 7500. It either worked, failed to boot, or performed just like the stock 100 MHz CPU. Any time you turned it on, you didn’t know which of the three you would get. The local Mac dealer was clueless. I called Sonnet. They were clueless. I struggled some more. I called Sonnet back. I got a different tech. He asked what revision of motherboard I had. I looked. It said VAL4, I think. He told me he was surprised it worked 1/3 of the time. That accelerator never works right with that revision of motherboard. He suggested I return the card, or do a motherboard swap. Of course a compatible motherboard costs more than the accelerator card.

And of course there was absolutely no mention of any of this on Sonnet’s web site. At least you can go to a manufacturer of PC upgrades and read their knowledge base before you buy. Sometimes you can even punch in what model system you have and they’ll tell you if their products work with it. Not that those types of upgrades make much sense when a replacement motherboard and CPU starts at around $150.

Suffice it to say I won’t be repeating that advice at work. I just got a flyer in the mail, offering me 700 MHz Compaq PCs preloaded with Win98, with a 15-inch flat-panel monitor, for $799. With a warranty. With support. Yeah, I’d rather have Windows 2000 or Windows XP on it. The only reason Compaq makes offers like that is to move PCs, so I’m sure they’d work with my purchasing guy and me.

Think about it. I can have a cobbled-together did-it-myself 400 MHz Mac refurb without a monitor for $700-$750. Or I can have that Compaq. That’s like getting a flat-panel monitor for 50 bucks. As far as usability and stability go, I’d rate Win98 and Mac OS X about equal. But for the time and money I’d save, I could afford to step up to a better version of Windows. Or I could bank the bucks and run Linux on it.

If you’re already a Mac zealot, I guess that idea might make sense. I’ve spent several years deploying, operating, and maintaining both Macs and PCs side-by-side in corporate environments. I have no great love for Microsoft. Most people would call my relationship with Microsoft something more like seething hatred.

But the biggest problems with PC hardware, in order, are commodity memory, cheap power supplies, proliferation of viruses, and then, maybe, Microsoft software. You can avoid the first two problems by buying decent hardware from a reputable company. (No, Gateway, that doesn’t include you and your Packard Bell-style 145-watt power supplies.) You can avoid the third problem with user education. (It’s amazing how quickly users learn when you poke ’em with a cattle prod after they open an unexpected attachment from a stranger. The biggest problem is getting that cattle prod past building security.) Microsoft software doesn’t exactly bowl everyone over with its reliability, but when Adobe recommends that Mac users reboot their machines every day before they leave for lunch, you know something’s up. Even Windows 95’s uptime was better than that.

Disappointment… Plus Linux vs. The World

It was looking like I’d get to call a l337 h4x0r on the carpet and lay some smackdown at work, but unfortunately I had a prior commitment. Too many things to do, not enough Daves to go around. It’s the story of my life.
And I see Infoworld’s Bob Lewis is recommending companies do more than give Linux a long, hard look–he’s saying they should consider it on the desktop.

He’s got a point. Let’s face it. None of the contenders get it right. So-called “classic” Mac OS isn’t a modern OS–it has no protected memory, no pre-emptive multitasking, and only limited threading support. It’s got all the disadvantages of Windows 3.1 save being built atop the crumbling foundation of MS-DOS. I could run Windows 3.1 for an afternoon without a crash. I can run Windows 95 for a week or two. I can usually coax about 3-4 days out of Mac OS. Mac users sometimes seem to define “crash” differently, so I’ll define what I mean here. By a crash, I mean an application dying with a Type 1, Type 2, or Type 10 error. Or the system freezing and not letting you do anything. Or a program quitting unexpectedly.

But I digress. Mac OS X has usability problems, it’s slow, and it has compatibility problems. It has promise, but it’s been thrust into duty that it’s not necessarily ready for. Like System 7 of the early ’90s, it’s a radical change from the past, and it’s going to take time to get it ready for general use. Since compilers and debuggers are much faster now, I don’t think it’ll take as long necessarily, but I don’t expect Mac OS X’s day to arrive this year. Developers also have to jump on the bandwagon, which hasn’t happened.

Windows XP… It’s slow, it’s way too cutesy, and only time will tell if it will actually succeed at displacing both 9x and NT/2000. With Product Activation being an upgrader’s nightmare, Microsoft may shoot themselves in the foot with it. Even if XP is twice as good as people say it’s going to be, a lot of people are going to stay away from it. Users don’t like Microsoft policing what they do with their computers, and that’s the perception that Product Activation gives. So what if it’s quick and easy? We don’t like picking up the phone and explaining ourselves.

Linux… It hasn’t lived up to its hype. But when I’ve got business users who insist on using Microsoft Works because they find Office too complicated, I have a hard time buying the argument that Linux can’t make it in the business environment without Office. Besides, you can run Office on Linux with Win4Lin or VMWare. But alternatives exist. WordPerfect Office gets the job done on both platforms–and I know law offices are starting to consider the move. All a lawyer or a lawyer’s secretary needs to be happy, typically, is a familiar word processor, a Web browser, and a mail client. The accountant needs a spreadsheet, and maybe another financial package. Linux has at least as many Web browsers as Windows does, and plenty of capable mail clients; WP Office includes Quattro Pro, which is good enough that I’ve got a group of users who absolutely refuse to migrate away from it. I don’t know if I could run a business on GnuCash. But I’m not an accountant. The increased stability and decreased cost make Linux a sensible choice for a law firm, though. And in the businesses I count as clients, anywhere from 75-90% of the users could get their jobs done in Linux just as productively. Yes, the initial setup would be more work than Windows’ initial setup, but the same system cloning tricks will work, mitigating that. So even if it takes 12 hours to build a Linux image as opposed to 6 hours to build a Windows image, the decreased cost and decreased maintenance will pay for it.

I think Linux is going to get there. As far as Linux looking and acting like Windows goes, I’ve moved enough users between platforms that I don’t buy the common argument that it’s necessary. Most users save their documents wherever the program defaults to. Linux defaults to your home directory, which can be local or on a server somewhere. The user doesn’t know or care. Most users I support call someone for help when it comes time to save something on a floppy (or do anything remotely complicated, for that matter), then they write down the steps required and robotically repeat them. When they change platforms, they complain about having to learn something new, then they open up their notebooks, write down the new steps, rip out the old pages they’d been blindly following for months or years, and follow the new process.

It amuses me that most of the problems I have with Linux are with recent distributions that try to layer Microsoft-like Plug and Play onto it. Linux, unlike Windows, is pretty tolerant of major changes. I can install TurboLinux 6.0 on a 386SX, then take out the hard drive, put it in a Pentium 4, and it’ll boot. I’ll have to reconfigure XFree86 to take full advantage of the new architecture, but that’s no more difficult than changing a video driver in Windows–and that’s been true since about 1997, with the advent of Xconfigurator. Linux needs to look out for changes to sound cards, video cards, and, sometimes, network cards. The Linux kernel can handle changes to just about anything else without a hiccup. Once Red Hat and Mandrake realize that, they’ll be able to develop a Plug and Play that puts Windows to shame.

The biggest thing that Linux lacks is applications, and they’re coming. I’m not worried about Linux’s future.