Writing Tight 6502 Series Machine Code

This article appeared in the final issue of Twin Cities 128/64, published by Parsec, Inc. of Salem, Mass., sometime after April 1994. Parsec never paid for the article, so under the terms of Parsec’s contract, all rights reverted back to me 30 days after Parsec failed to remit payment.

So now I’m re-asserting my rights to the article. You’ll find the editing poor–all my semicolons appear to have been replaced by commas, for instance–and the writing full of cliches. But I would have been 16 or 17 when I wrote it, and I don’t think it’s a bad effort for a 17-year-old. And the article had some pretty clever tricks. I have to admit I’d forgotten 90% of what was in the article, but I recognize my own writing when I see it.

I’d like to thank Mark R. Brown, former managing editor of INFO magazine, for finding the article and bringing it to my attention. And one final word: Although I wrote this with the Commodore 128 in mind, the same tricks apply to any computer or console based on a 6502 or derivative.

Better upgrade advice

PC Magazine has a feature about inexpensive PC upgrades. There’s some good advice there, but some questionable advice too. Since I really did write the book on free and inexpensive upgrades, I’ll present my own advice (but I’ll skip the pretty pictures).

Hard drives

The best upgrade they didn’t mention is replacing the hard drive. I’ve been squeezing extra life out of old systems for years by taking out the aging drives and replacing them with something newer and faster. The trick is figuring out whether the drive is the old-style parallel ATA (with a 40- or 80-conductor cable) or newer SATA. If you can afford it, it makes sense to upgrade to a SATA controller so you can use a more modern drive. Newer drives are almost always faster than older drives if only because the density of the data is always increasing. If a drive stores twice as much data in the same linear space as an old one, it (roughly) means it will retrieve the data twice as fast, assuming the disk spins at the same speed (and it may spin faster). You can go all the way up to the 10,000 RPM Western Digital Raptor drives if you want, but even putting a mid-range drive in an old PC will speed it up.

Some people will point out that a new drive may be able to deliver data at a faster rate than an old controller in an old PC can handle. I don’t see that as a problem. There’s no drive on the market that can keep a 133 MB/sec bus saturated 100% of the time, and the old drive certainly isn’t. Even if your older, slower bus is the limiting factor some of the time, you’re still getting the benefit of a newer drive’s faster seek times and faster average data transfers.
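To put rough numbers on the density argument (everything here is illustrative, not the spec sheet of any real drive):

```shell
# Back-of-the-envelope: sequential throughput scales with linear density
# at a fixed spindle speed. All numbers are illustrative, not real specs.
rpm=7200
rps=$(( rpm / 60 ))        # revolutions per second
old_bits=500000            # hypothetical bits per track, older drive
new_bits=1000000           # doubled linear density, same platter, same rpm

# throughput = bits per revolution * revolutions per second
old_bps=$(( old_bits * rps ))
new_bps=$(( new_bits * rps ))
echo "old drive: $old_bps bits/sec"
echo "new drive: $new_bps bits/sec"
```

Double the density at the same spindle speed and the sequential rate doubles; spin the platters faster on top of that and it climbs further, which is why the 10,000 RPM Raptors lead the pack.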

While replacing a hard drive can bust an entire $125 upgrade budget in and of itself, it’s still something I recommend doing. Unless your system is really short on memory or you’re heavily into gaming, the hard drive is the best bang for your upgrade buck.

Memory

The other point I disagree with most strongly is the memory. There’s very little reason anymore to run a system with less than 1 GB of RAM. As a system becomes more obsolete, memory prices go up instead of down, so it makes sense to just install a ton of memory when you’re upgrading it anyway. If you need it later, it will probably cost more.

The caveat here is that it makes very little sense to install 4 GB of RAM, since the Intel x86 processor architecture reserves part of the 4 GB address space for system use. If you install 4 GB of RAM, you really get more like 3.2 or 3.5 GB of usable memory unless you’re running 64-bit Windows. I don’t recommend going 64-bit yet. When it works, it works well. Unfortunately there’s no way to know if you’ll have good drivers for everything in your system until you try it. I wouldn’t go 64-bit until some popular software that requires (or at least takes really good advantage of) 64-bit arrives. The next version of Photoshop will help, but I think the thing that will really drive 64-bit is when id Software releases a game that needs it. Until then, hardware makers will treat 64-bit Windows as an afterthought.

I usually put 2 GB of RAM in a system if it’ll take that much. If you do a lot of graphics or video work, more is better of course. For routine use, 2 GB is more than adequate, yet affordable. If a system won’t take 2 GB, then it makes sense to install as much as it will take, whether that’s 1 GB or 512 MB. If a system won’t take 512 MB, then it’s old enough that it makes sense to start talking replacement.

Outright replacement

Speaking of that, outright replacement can be a very practical option, especially if a system is getting up in years. My primary system is a 5-year-old office PC. Take a 2-ish GHz P4 or equivalent (current market value: $75-$125), load it up with 2 GB of RAM and a moderately fast hard drive, and you’ll have a better-built system than any $399 budget PC on the market. It will probably run as fast or faster, and it will cost less.

I have two PCs at the office: a 3 GHz Pentium D, and a 2.6 GHz Core Duo. Both have 2 GB of RAM. They theoretically encode MP3s faster than my home PC and would make better gaming PCs than my home PC (ahem), but for the things I do–namely, web browsing, spreadsheets, word processing, e-mail, and the occasional non-3D game–I can’t tell much difference between them. The System Idle Process gets the overwhelming majority of the CPU time on all of them.

Other upgrades

The other things discussed in the article can be worthwhile, but faster network cards won’t help your Internet speed. If you routinely copy huge files between multiple PCs, they help a lot, but how many people really do that on a regular basis?

Fast DVD burners are nice and they’re inexpensive, but if you needed one, you’d know it. If you don’t know what you’d do with one, skip it. Or if you have an older one that you use occasionally, you probably won’t use a faster one any more often.

For $60 you can get a decently fast hard drive, and that will do a lot more for overall system performance than either a network card or DVD burner upgrade.

The video card is a sensible upgrade under two circumstances: If you’re using the integrated video on your motherboard, or if you play 3D games and they feel jerky. If neither of those describes you, skip the video card upgrade.

Free upgrades

The article describes CHKDSK as a “low level defrag.” That’s not what CHKDSK does–it checks your drive for errors and tries to fix them. If your drives are formatted NTFS (and they probably are), routinely running CHKDSK isn’t going to do much for you. If you run CHKDSK routinely and it actually says it’s done something when it finishes, you have bigger problems and what you really need is a new hard drive.

If you want to defragment optimally, download JK-Defrag. It’s free and open source, and not only does a better job than the utility that comes with Windows, but it does a better job than most of the for-pay utilities too.

The first time you run it, I recommend running it from the command line, exactly like this: JkDefrag.exe -a 7 -d 2 -q c:. After that, just run it without any options, about once a month or two. (Running more often than that doesn’t do much good–in fact, the people who defragment their drives once a day or once a week seem to have more problems.) Run it with the options about once a year. Depending on what condition your system is in, the difference in performance after running it ranges from noticeable to stunning.

Why first-generation flash SSDs are a bit disappointing

I’ve been waiting with anticipation for flash-based SSDs to come out. If you’re unfamiliar with these, they’re hard drives with no moving parts, so their life expectancy is 10 years, and they’re quiet, run cool, and they have virtually no seek time so for some tasks they’re lightning fast.

The best drives on the market, from what limited information is available, seem to be the Samsungs. The problem is that these drives have a sustained read speed of 50 MB/sec and write speed of 27 MB/sec. Under ideal circumstances, a conventional hard drive can exceed those numbers–especially the write speed. So what’s going on?

The main reason is that these drives have no cache on them. Conventional hard drives have a small amount of RAM that acts as a buffer between the computer and the platters. Today a budget drive has 8 megs of RAM. A lot of high-performance drives have 16, and I’ve even seen some that have 32.

The most frequently used data can come off this buffer at high speed. Writes can go to the buffer and the computer can get on with life, and the drive can write the data to the platters when it gets less busy. The other advantages of a solid state disk often can make up the difference when reading data, but if you’re writing a lot of data, the conventional hard drive wins the race most of the time.

SSDs could benefit from cache for one good reason: conventional RAM chips are still much faster than flash memory.
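To put some toy numbers on that (the 27 MB/sec write figure comes from the Samsung specs above; the rest is a simplified sketch, not a benchmark):

```shell
# Toy numbers: how long a small burst write "takes" from the host's view.
awk 'BEGIN {
  burst = 8                    # MB: a burst that fits in a typical drive cache
  flash_write = 27             # MB/sec: cacheless SSD, so the host waits it out
  printf "cacheless SSD: %.2f sec\n", burst / flash_write
  # A drive with 8 MB or more of cache acknowledges the burst almost
  # immediately and flushes it to the platters in the background.
  printf "cached drive:  near-instant (absorbed by cache)\n"
}'
```

That third of a second per burst is where the conventional drive keeps winning on writes, even though its platters are slower than they look.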

Now for the good news: I’ve read reports that the Samsung drive can boot Windows in 15 seconds and most common applications have single-digit load times. So if you don’t do a lot of writes, these drives can give you a performance boost.

The other complaint is capacity. You can pay $400 for a 32 gig SSD, which is more than you’d pay for a full terabyte of conventional storage. For some people, this is a problem. Given the work I usually do these days, 32 gigs is plenty for me, and I could probably find ways to get by with 8. I just don’t keep a lot of huge data files around. But if I needed acres of data storage, I could load the operating system and my most critical apps on the SSD, and use the conventional drive for storage.

The old knock on flash memory was its finite lifespan. Put Windows’ swap file on a flash drive and let it run, and theoretically you could wear out the memory in a matter of days. And that’s always one of the first comments that shows up when the topic of flash drives comes up on sites like Digg and Slashdot. But today’s flash memory sustains more writes than the old stuff did, and newer drives use a technique called wear-leveling, which distributes writes among the available chips. This technique makes the chips last a lot longer now, to the point where one respected tech journalist, Dan Rutter, actually recommends putting flash drives in old laptops with maxed-out memory for the express purpose of holding a swap file. And Macintosh users have been using flash disks to soup up old Mac laptops for several years now. Flash disks give obsolete laptops a boost in both speed and battery life while reducing noise and heat, and it’s pretty safe to say that current technology allows a flash drive to last 3-5 years when used for this purpose, which is about as long as a conventional drive.

My next major system upgrade will probably be a Samsung SSD for at least one of my computers. It’d make a fantastic upgrade for my laptop, at the very least. The laptop will run faster (the hard drive in it is several years old, and I think it runs at 4200 RPM) and the battery life will improve considerably. I also like the idea of having a super quiet, cool-running desktop for the family room. But I definitely hope the second-generation SSDs will include some cache. Otherwise, there’s not much advantage to them over the old trick of buying a large, high-speed Compact Flash card and an IDE-CF adapter (Addonics is one source of these), as long as both the card and the adapter support UltraDMA.

VMWare is in Microsoft’s sights

Microsoft has released its Virtual Server product, aimed at VMWare. Price is an aggressive $499.

I have mixed feelings about it. VMWare is expensive, with a list price of about 8 times as much. But I’m still not terribly impressed.

For one, with VMWare ESX Server, you get everything you need, including a host OS. With Microsoft Virtual Server, you have to provide Windows Server 2003. By the time you do that, Virtual Server is about half the price of VMWare.

I think you can make up the rest of that difference very quickly on TCO. VMWare’s professional server products run on a Linux base that requires about 256 MB of overhead. Ever seen Windows Server 2003 on 256 megs of RAM? The CPU overhead of the VMWare host is also very low. When you size a VMWare server, you can pretty much go on a 1:1 basis. Add up the CPU speed and memory of the servers you’re consolidating, buy a server that size, put VMWare on it, and then move your servers to it. They’ll perform as well, if not a little bit better since at peak times they can steal some resources from an idle server.
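A sketch of that 1:1 sizing rule in practice. The server specs here are made up for illustration; the 256 MB is the Linux host overhead figure mentioned above:

```shell
# Size a consolidation host by the 1:1 rule of thumb: add up the CPU
# and memory of the servers being consolidated, then add host overhead.
# Each entry is a hypothetical server as "MHz:MB".
total_mhz=0
total_mb=0
for server in "800:512" "1000:1024" "1400:768"; do
  total_mhz=$(( total_mhz + ${server%%:*} ))
  total_mb=$(( total_mb + ${server##*:} ))
done
host_overhead=256    # MB for the VMWare Linux base
echo "buy at least: ${total_mhz} MHz and $(( total_mb + host_overhead )) MB RAM"
```

Run the same numbers with a half-gig to a gig of Windows overhead instead of 256 MB and you can see where the TCO difference creeps in.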

Knowing Microsoft, I’d want to give myself at least a half gig of RAM and at least half a gigahertz of CPU time for system overhead, minimum. Twice that is probably more realistic.

Like it or not, Linux is a reality these days. Linux is an outstanding choice for a lot of infrastructure-type servers like DHCP, DNS, Web services, mail services, spam filtering, and others, even if you want to maintain a mixed Linux/Windows environment. While Linux will run on MS Virtual Server’s virtual hardware and it’s only a matter of time before adjustments are made to Linux to make it run even better, there’s no official support for it. So PHBs will be more comfortable running their Linux-based VMs under VMWare than under Virtual Server 2003. (There’s always User-Mode Linux for Linux virtual hosts, but that will certainly be an under-the-radar installation in a lot of shops.)

While there have been a number of vulnerabilities in VMWare’s Linux host this year, the number is still lower than Windows 2003. I’d rather take my virtual host server down once a quarter for patching than once a month.

I wouldn’t put either host OS on a public Internet address though. Either one needs to be protected behind a firewall, with its host IP address on a private network, to protect the host as much as possible. Remember, if the host is compromised, you stand to lose all of the servers on it.

The biggest place where Microsoft gives a price advantage is on the migration of existing servers. Microsoft’s migration tool is still in beta, but it’s free–at least for now. VMWare’s P2V Assistant costs a fortune. I was quoted $2,000 for the software and $8,000 for mandatory training, and that was to migrate 25 servers.

If your goal is to get those NT4 servers whose hardware is rapidly approaching the teenage years onto newer hardware with minimal disruption–every organization has those–then Virtual Server is a no-brainer. Buy a copy of Virtual Server and new, reliable server hardware, migrate those aging machines, and save a fortune on your maintenance contract.

I’m glad to see VMWare get some competition. I’ve found it to be a stable product once it’s set up, but the user interface leaves something to be desired. When I build a new virtual server or change an existing one, I find myself scratching my head over whether certain options are under “Hardware” or under “Memory and Processors”. So it probably takes me twice as long to set up a virtual server as it ought to, but that’s still less time than it takes to spec and order a server, or, for that matter, to unbox a new physical server when it arrives.

On the other hand, I’ve seen what happens to Microsoft products once they feel like they have no real competition. Notice how quickly new, improved versions of Internet Explorer come out? And while Windows XP mostly works, when it fails, it usually fails spectacularly. And don’t even get me started on Office.

The pricing won’t stay the same either. While the price of hardware has come down, the price of Microsoft software hasn’t come down nearly as quickly, and in some cases has increased. That’s not because Microsoft is inherently ruthless or even evil (that’s another discussion), it’s because that’s what monopolies have to do to keep earnings at the level necessary to keep stockholders and the SEC happy. When you can’t grow your revenues by increasing your market share, you have to grow your revenues by raising prices. Watch Wal-Mart. Their behavior over the next couple of decades will closely mirror Microsoft’s. Since they have a bigger industry, they move more slowly. But that’s another discussion too.

The industry can’t afford to hand Microsoft another monopoly.

Some people will buy this product just because it’s from Microsoft. Others will buy it just because it’s cheaper. Since VMWare’s been around a good long while and is mature and stable and established as an industry standard, I hope that means it’ll stick around a while too, and come down in price.

But if you had told me 10 years ago that Novell Netware would have single-digit marketshare now, I wouldn’t have believed you. Then again, the market’s different in 2004 than it was in 1994.

I hope it’s different enough.

Priorities (Or: How to spend Valentine’s night on the couch)

It’s that special time of year, when a man’s thoughts turn to…
Computers. Or other gadgets. Just like they always do. Men don’t need a Hallmark holiday to think about what they really want.

Steve DeLassus called me up the other night. He wanted to talk about tape drives and CD-RWs. He wanted to know if it was safe to buy an ATAPI CD-RW drive, or if he should buy SCSI. He knew about my terrible experiences with first-generation ATAPI CD-RWs. I burned as many coasters as I did successful discs, and in those days, a coaster was an expensive mistake.

I told him where to get Plextor CD-RWs for next to nothing (Newegg.com). Steve checked yesterday morning, but they were out of them, as it turns out. So Steve looked at Hyper Microsystems and found some good prices on Plextor units. Not earth-shattering like the deal I saw at New Egg a couple of weeks ago, but good. We had initially talked about 12X units. But the 16X unit was $4 more, and the 24X unit was $20 more than that. “You can ride that train all the way up to the $179 40X burner,” I said.

Steve hasn’t responded to that as I write this. Considering what I paid for my 2X burner in 1998, that $179 Plextor 40X unit is a steal.

But there’s something else to consider: The overhead in burning a disc. First, it takes the computer a little time to figure out the disc layout, and that speed is dependent on the host computer (and the software it’s running), not the burner. It didn’t seem like much time in the days of 2X burners, but compared to the three minutes it takes to lay down a mountain of data on the disc with a modern burner, it’s started to look significant. Second, it takes some time to close a disc. I haven’t taken a stopwatch to it, but the 12X unit I use in one of my offices at work seems to take about the same amount of time to close a disc as the 2X unit I use in another office. The 12X unit doesn’t burn a disc 6 times as fast as the clunky 2X unit. And the 24X unit definitely won’t be twice as fast as a 12X.

Since I don’t have all those burners and don’t have the time to make a scientific test, I went to CDRLabs.com to get some figures. Their testbed has changed over time–they don’t keep it constant across drive generations, so the 12x unit was tested in a different system than the 40x unit–and they overclock. Storage Review’s methodology is much better. But the numbers are good enough to illustrate the point.

Results of burning 651 MB of data, along with the cost of the drive:
12x: 6:43 ($124)
16x: 5:11 ($129)
24x: 3:54 ($149)
40x: 3:26 ($175)

Even given the advantage of a faster computer and newer software, the 40x unit still can’t double the speed of the venerable 12x unit.
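Burning has fixed overhead (layout and disc closing), so the effective speed lands well under the sticker rating. Working it out from the table above, with the 1x CD rate of 150 KB/sec and 651 MB treated as 651 × 1024 KB:

```shell
# Effective burn speed: data burned / time taken, expressed as a
# multiple of the 150 KB/sec 1x CD rate. Times come from the table.
kb=$(( 651 * 1024 ))                  # 651 MB in KB
for entry in "12x:403" "16x:311" "24x:234" "40x:206"; do
  rated=${entry%%:*}                  # sticker rating
  secs=${entry##*:}                   # burn time in seconds
  eff=$(( kb / secs / 150 ))          # rough effective multiple of 1x
  echo "$rated drive: roughly ${eff}x effective"
done
```

By this math the “40x” drive averages out around 21x and the “12x” around 11x, which is exactly the diminishing-returns pattern the table shows.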

Why the diminishing returns? Constant Angular Velocity. Very high-speed burners use the same technique as high-speed readers, so you don’t get a constant 40x. The 40x drive starts out at 20x and steps up to 40x as it reaches the outside of the disc. The average writing speed is closer to 30x. Obviously, the less data you burn, the less the 40x drive will help you.
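A crude sanity check on that average (this treats the ramp as an even spread between the starting and ending speeds, which is a simplification; outer tracks actually hold more data, so the true figure skews a bit higher):

```shell
# CAV write speed ramps from 20x at the hub to 40x at the rim.
# Midpoint of the ramp as a rough average:
start=20
end=40
avg=$(( (start + end) / 2 ))
echo "average write speed: roughly ${avg}x"
```

And since the drive only reaches full speed near the rim, a half-full disc never even sees the rated number.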

I also pointed out to Steve that there’s more to this than the hard dollar cost. It’s Valentine’s Day time, and there’s the wrath of the wife to deal with. There’s always a hidden cost involved, no matter what you’re buying, and sometimes it doesn’t have a whole lot to do with what you’re buying.

I could quote Proverbs 31:10 and say that a wife is a treasure and therefore you should always treat her as such, and therefore you should buy the $59 refurbished 8x drive for you, a $39 dozen roses for her, and–here’s the kicker–then spend $80 filling her car with flowers the week after Valentine’s Day, when your money buys three times as much. See? I’m a thinking man.

Then again, you can go for bragging rights and find yourself singing along with Dave:

That’s okay, hey hey hey, love songs bite anyway!

(In which case, you’ll probably find yourself spending Valentine’s night on the couch. Or on the porch. I can’t say I’ve ever experienced this, but I don’t think it would be very pleasant to be stuck out on the porch wearing something skimpy in Missouri in February.)

So, to recap, for those of you taking notes: Spending $179 on a 40x CD-RW drive for you and giving a home-made Valentine’s Day card to your wife will lead to very bad things.

Burning a CD full of sappy love songs and then bragging about how it only took four minutes to burn (the 40x drive doesn’t burn audio at full speed) won’t make it any better. Sorry.

But I seem to have gotten off the subject somehow.

As far as tape drives go, I can tell you that Quantum DLT tape drives rock because that’s all we use at work. They’re built like tanks and last forever. The tapes are cheap and take a lot of punishment. They back up at a rate of about 5 MB/sec, which makes them faster than the hard drives of 10 years ago. And they work fabulously with Seagate Backup Exec, which severely reduces headaches when people want stuff restored. Considering they start at about $3995, they’d better have something going for them.

Steve’s needs are a bit more modest. An 8-gig IDE Travan drive from the likes of Seagate is cheap. The tapes run about $30, but for the quantities of data Steve will be backing up (he and I both rely on CD-Rs for backups now) and the frequency at which he’ll be doing so, a drive and tapes designed for light duty ought to do fine. When it comes to tape drives, you can buy a cheap drive that uses expensive tapes, or an expensive drive that uses cheap tapes. A lesson most people have to learn quickly is that it’s much easier to get a cheaper drive past the glare of your wife or boss and then buy the more-costly media as you need them. Media’s an OK purchase. Hardware is bad.

That’s why Zip drives have been so successful, and why Iomega is still in business.

I think if Steve wants to spend Valentine’s night on the porch, he needs to buy a DLT drive, then take out a cash advance to make his first minimum payment.

Then he can gloat about how much money he’s saving on tapes.

Ho-hum.

Another day, another Outlook worm. Tell me again why I continue to use Outlook? Not that I ever open unexpected attachments. For that matter, I rarely open expected ones–I think it’s rude. Ever heard of cut and paste? It’s bad enough that I have to keep one resource hog open to read e-mail, so why are you going to make me load another resource hog, like Word or Excel, to read a message where the formatting doesn’t matter?
The last couple of times I received Word attachments that were important, I converted them to PDFs for grins. Would you believe the PDFs were considerably smaller? I was shocked too. Chances are there was a whole lot of revisioning data left in those documents–and it probably included speculative stuff that underlings like me shouldn’t see. Hmm. I guess that’s another selling point for that PDF-printer we whipped up as a proof of concept a couple of weeks ago, isn’t it? I’d better see if I can get that working again. I never did get it printing from the Mac, but seeing as all the decision-makers who’d be using it for security purposes use PCs, that’s no problem.

I spent the day learning a commercial firewall program. (Nope, sorry, won’t tell you which one.) My testbed for this thing will be an old Gateway 2000 box whose factory motherboard was replaced by an Asus SP97 at some point in the past. It’s got 72 megs of RAM. I put in an Intel Etherexpress Pro NIC today. I have another Etherexpress Pro card here that I’m bringing in, so I’ll have dual EEPros in the machine. The firewall has to run under Red Hat, so I started downloading Red Hat 7.2. I learned a neat trick.

First, an old trick. Never download with a web browser. Use the command-line app wget instead. It’s faster. The syntax is really simple: wget url. Example: wget http://www.linuxiso.org/download/rh7.2-i386-disc1.iso

Second trick: Download your ISOs off linuxiso.org. It uses some kind of round-robin approach to try to give you the least busy of several mirrors. It doesn’t always work so well on the first try. The mirror it sent me to first was giving me throughput rates that topped out at 200KB/sec., but frequently dropped as low as 3KB/sec. Usually they stayed in the 15KB/sec range. I cancelled the transfer (ctrl-c) and tried again. I got a mirror that didn’t fluctuate as wildly, but it rarely went above the 20KB/sec. range. I cancelled the transfer again and got a mirror that rarely dropped below 50KB/sec and occasionally spiked as high as 120KB/sec. Much better.

Third trick (the one I learned today): Use wget’s -c option. That allows wget to resume transfers. Yep, you can get the most important functionality of a download manager in a 147K binary. It doesn’t spy on you either. That allowed me to switch mirrors several times without wasting the little bit I’d managed to pull off the slow sites.

Fourth trick: Verify your ISOs after you download them. LinuxISO provides MD5 sums for its wares. Just run md5sum enigma-i386-disc1.iso to get a long 32-character checksum for what you just downloaded. If it doesn’t match the checksum on the site, don’t bother burning it. It might work, but you don’t want some key archive file (like, say, the kernel) to come up corrupt. Even though CD-Rs are dirt cheap these days and high-speed burners make quick work of them, there’s still no point in unnecessarily wasting 99 cents and five minutes on the disc and half an hour on a questionable install.
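Here’s the verification step in practice. I’m demonstrating on a small scratch file so the checksum is known in advance; substitute your .iso and the sum published on the download page:

```shell
# Verify a download against its published MD5 sum.
# Demonstrated on a tiny local file; swap in your .iso and the
# checksum from the site.
printf 'hello' > /tmp/sample.bin
expected="5d41402abc4b2a76b9719d911017c592"   # published sum
actual=$(md5sum /tmp/sample.bin | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK: safe to burn"
else
    echo "checksum MISMATCH: re-download before burning"
fi
```

Newer distributions publish the sums in a file you can feed to md5sum -c, but comparing the two strings by eye works just as well for a single ISO.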

As for downloading the file in separate pieces like Go!Zilla does, there’s a command-line Linux program called mget that does it, but it doesn’t follow redirection and it doesn’t do FTP except through a proxy server, so I have a hard time recommending it as a general-purpose tool. When it works, it seems to work just fine. You might try mget, but chances are decent you’ll end up falling back on wget.

How to pad your resume while meeting chicks.

Padding your resume while meeting chicks. I got a phone call last night offering me just that. Seriously. I didn’t hang up or ask to be taken off the calling list because it was a friend. Not a male friend with a harebrained, sleazy scheme. It was Jeanne. So it was a female friend with a sleazy scheme.
I guess it helps to know Jeanne. She has the distinction of being the only female friend who’s ever offered to lend me a copy of Playboy. She said she bought it for the articles. One of those articles was an interview with some film hunk. Another article was an interview with Aimee Mann. But I think it was all a diabolical plot to see what it would take to get me to read a copy of Playboy in front of her.

This time, Jeanne’s plotting to get me to serve on a committee. She tells me there are virtually no males on the committee. “Sixty to one, Dave! With odds like those you can’t lose!” she said.

Didn’t I hear someone say that about the Red Sox earlier this year?

Let’s change the subject to something more cheerful. How about if I list my qualifications?

1. I’m a male of the species homo sapiens.
2. I’m a sucker for dogs that are smarter than my former landlords, my eighth grade science teacher, and the creeps who dated my sister when I was in college. That’s not every dog I’ve ever seen, but it’s a sizable percentage.

Gatermann says this is the most pathetic thing Jeanne’s ever asked me to do. And yes, Gatermann was there when Jeanne conned me into reading that magazine in front of her. (Yes, I gave in. I had to know what Aimee Mann had to say about Jewel, OK? And yes, her interview was just that–an interview.)

I serve on several committees, few of which work as well as I’d like, so it’s probably a good idea for me to participate, just to see if anyone else knows how to make a committee work right. The time commitment is small, so it just makes sense. In a sick sort of way.

Or maybe you can just say I’m easily finding ways to justify padding my resume while meeting women.

Harry Connick Jr. One of my coworkers pulled out a package he’d just received from Amazon. “I ordered two Harry Connick Jr. CDs,” he said. “This is what they sent.” He whipped out two CDs. They got that much right. But the CDs he received were (drum roll) The Bee Gees and LeAnn Rimes.

He talked about how much he likes Harry Connick Jr. and how he has two tickets to go see him in some faraway city and he’s bringing a date.

“That’s what you think those tickets are for,” I said. Then, in my best concert-announcer voice, I said, “One night only! The Bee Gees! With very special guest LeAnn Rimes!”

He glared at me.

Speaking of annoying… I got mail from someone who claims to have invented the “compressed ramdisk” technique I’ve talked about here and in my book. He said something at least mildly disparaging about Andre Moreira–one of the other Windows-in-a-ramdisk pioneers–claimed he’s patented the technique, and wants me to download a trial copy of his software and link to it off my site.

I e-mailed him and asked him to set the record straight. It sounded to me like he’s claiming to have invented the compressed ramdisk–something CP/M owners were doing way back in 1984, if not earlier–and he wants free advertising from me for his commercial product.

Now, I could be wrong about that. I was wrong about OS/2 being the next big thing, after all. But if I’ve got the story more or less right, then the answer is no.

Now how did CP/M owners do compressed ramdisks? You’d just put your must-have utilities and applications into an .LBR file, then you’d run SQ on it to compress it. Then in profile.sub–the CP/M equivalent of autoexec.bat–you copied the archive to M: (CP/M’s built-in ramdisk) and then you decompressed it. In the days when applications were smaller than 64K, you could put your OS’ crucial utilities, plus WordStar and dBASE into a ramdisk and smoke all your neighbors who were running that newfangled MS-DOS.
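From memory, the boot-time setup looked something like this in PROFILE.SUB. The file names are made up, and I’m reconstructing the PIP and USQ usage from 20-year-old recollection, so take it as a sketch rather than gospel:

```
; PROFILE.SUB -- CP/M's boot-time batch file
; (file names illustrative; utility syntax from memory)
PIP M:=A:TOOLS.LQR    ; copy the squeezed archive to the M: ramdisk
M:                    ; log onto the ramdisk
USQ TOOLS.LQR         ; unsqueeze it back into a .LBR archive
; ...then pull the members out with a library utility such as LU or NULU
```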

I rediscovered the technique on my Commodore 128 (which was capable of running CP/M) in the late 1980s and thought I was really hot stuff with my 512K ramdisk.

Anyone who thinks the compressed ramdisk was invented in 1999 or 2000 either doesn’t remember his history or is smoking crack.

SCSI! SCSI vs. IDE is a long debate, almost a religious war, and it always has been. I remember seeing SCSI/IDE debates on BBSs in the early 1990s. Few argued that IDE was better than SCSI, though some did–not that it mattered much on an 8 MHz bus–but IDE generally was less expensive than SCSI. The difference wasn’t always great. I remember seeing an IDE drive sell for $10 less than the SCSI version. The controller might have cost more, but back in the days when a 40-meg drive would set you back $300, a $10 premium for SCSI was nothing. To me, that settled the argument. It didn’t for everyone.

Today, IDE is cheap. Real cheap. A 20-gig drive costs you 50 bucks. A 7200-rpm 40-gig drive is all the drive many people will ever need, and it’s 99 bucks. And for simple computers, that’s great. If it fails, so what? Buy two drives and copy your important data over. At today’s prices you can afford to do that.

SCSI isn’t cheap. It’s hard to find a controller for less than $150, whereas IDE is included free on your motherboard. And if you find a SCSI drive for less than $150, it’s a closeout special. A 20-gig SCSI drive is likely to set you back $175-$200.

Superficially, the difference is philosophy. The IDE drive is designed to be cheap. Good enough to run Word, good enough to play Quake, quiet enough to not wake the baby, cheap enough to sell them by the warehouseful.

SCSI is designed for workstations and servers, where the only things that matter are speed, reliability, speed and speed. (Kind of like spam egg spam and spam in that Monty Python skit). If it costs $1,000 and requires a wind tunnel to cool it and ear protection to use it, who cares? It’s fast! So this is where you see extreme spindle rates like 10,000 and 15,000 RPM and seek times of 4.9 or even 3.9 milliseconds and disk caches of 4, 8, or even 16 MB. It’s also not uncommon to find a 5-year warranty.

In all fairness, I put my Quantum Atlas 10K3 in a Coolermaster cooler. It’s a drive bay adapter that acts like a big heatsink and has a single fan, and it also dampens the sound. The setup is no louder than some of the 5400 RPM IDE drives Quantum was manufacturing in 1996-97.

OK, so what’s the practical difference?

IDE is faithful and dumb. You give it requests, it handles them in the order received. SCSI is smart. You send a bunch of read and write requests, and SCSI will figure out the optimal order to execute them in. That’s why you can defrag a SCSI drive while running other things without interrupting the defrag process very much. (Out of order execution is also one of the main things that makes modern CPUs faster than the 486.)
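The reordering SCSI does is essentially the classic elevator (SCAN) algorithm. Here’s a toy sketch in Python; the block numbers and the simple two-sweep policy are illustrative only, since a real drive does this in firmware with far more information about the disk’s geometry:

```python
def elevator_order(requests, head):
    """Reorder outstanding block requests elevator-style: sweep
    upward from the current head position, then sweep back down
    for the blocks behind it, instead of honoring FIFO order."""
    above = sorted(r for r in requests if r >= head)
    below = sorted((r for r in requests if r < head), reverse=True)
    return above + below

# FIFO order would force the head to zig-zag across the platter;
# the elevator order services the same requests in one sweep each way.
fifo = [98, 183, 37, 122, 14, 124, 65, 67]
print(elevator_order(fifo, head=53))
# -> [65, 67, 98, 122, 124, 183, 37, 14]
```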

And if you’re running multiple devices, only one IDE device can talk at a time. SCSI devices can talk until you run out of bandwidth. So 160 MB/sec and 320 MB/sec SCSI is actually useful, unlike 133 MB/sec IDE, which is only useful until your drive’s onboard cache empties. Who cares whether a 2-meg cache empties in 0.0303 seconds or 0.01503 seconds?
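The arithmetic behind those cache-drain numbers is simple enough to sketch (decimal megabytes assumed, matching the drive makers’ convention):

```python
def drain_time(cache_mb, rate_mb_per_sec):
    """Seconds for an on-drive cache to empty at the burst rate."""
    return cache_mb / rate_mb_per_sec

# A 2 MB cache at the two common IDE burst rates of the day:
print(round(drain_time(2, 66), 4))    # Ultra ATA/66  -> 0.0303
print(round(drain_time(2, 133), 5))   # Ultra ATA/133 -> 0.01504
```

Either way the cache is gone in a blink, and after that you’re back to what the platters can sustain.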

There’s another advantage to SCSI with multiple devices. IDE gives you two devices per channel and one interrupt per channel. SCSI gives you seven devices per channel, all sharing a single interrupt, and some cards give you 14. A lot of us are awfully crowded for interrupts, so being able to string a ton of devices off a single channel is very appealing. IRQ conflicts are rare these days, but they’re not unheard of. Getting from SCSI in one interrupt what IDE needs four to deliver is very nice in a crowded system.

Next up for my dining room… A TV studio.

I did it. I finally did it. I’ve been threatening for a long time to do it. I’ve finally, completely, totally gone off the deep end. And I like it.
I just ordered an 18-gig, 10,000-RPM Maxtor (formerly Quantum) Atlas III hard drive. I ordered an Adaptec Ultra160 host adapter to go with it, since this drive would pretty much saturate my old Adaptec 2940UW. And of course since I’m spending this obnoxious amount of money on a drive–around $200, when a mainstream drive of this size would go for 50 bucks–I’m protecting the investment with a $25 drive cooler. The cooler also deadens the drive’s sound, which is good. I’m a bit nervous about having a 10,000-rpm helicopter in my dining room-turned-office. Hey, where else was I going to put my desk?

I didn’t just get this drive so I could play Civ 3 or compile Linux kernels at blazing speed. I had another reason for this purchase. I also bought a Pinnacle DV500+ video capture/editing card. It’s not a cheap toy, but considering the capabilities it gives you, it’s a steal for the money. I could edit full-length movies with this thing. Well, I could with a capable hard drive. It needs a 25 MB/sec stream to spit out video, and, well, the fastest drive I have won’t do that. Ramdisks? Nice idea, but you can assume a minute of video will chew up a gig of disk, so I’d need 4 GB RAM for most of the projects I have in mind. None of my motherboards will take that much memory.
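The RAM math is quick to verify using that rough gig-per-minute rule of thumb (a sketch only; the rule itself is approximate):

```python
def video_storage_gb(minutes, gb_per_minute=1.0):
    """Rough storage (disk or RAM) needed for captured video,
    using the approximate gig-per-minute rule of thumb."""
    return minutes * gb_per_minute

# Even a four-minute project wants about 4 GB -- more memory than
# the motherboard will take, so a fast disk it is.
print(video_storage_gb(4))   # -> 4.0
```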

So my Duron-750 is going to become a video editing workstation. I’ll have to buy or scrounge a bit more memory–Pinnacle recommends 256 MB; I might as well do one better and go 384–but then I’ll have the ability to edit video in my dining room. A Duron-750 isn’t much CPU by today’s standards, but Pinnacle lists a P3-500 as the minimum, and the reviewers I’ve read did fine with a 500 or 550 MHz CPU. You can assume a Duron runs at a similar speed to a P3 or an Athlon clocked 100 MHz slower, so my Duron-750 should perform like a P3-650. If that proves inadequate, hey, a 1.1 GHz Duron runs $89 these days.

The DV500+ is supposed to be a real bear to set up. We’ll see how it likes my FIC AZ11. I’ve made tricky hardware play before, so I’m not too afraid of this. Every review I’ve read complained about the setup, but once the reviewer got it running, each raved about its abilities.

I can’t wait.

Two chipsets from the AMD front

Yesterday AMD formally unveiled and shipped the AMD-760MP chipset. Right now there is one and only one motherboard using it, the ritzy Tyan Thunder K7, which runs about $550 minimum. (Wholesale cost on it is rumored to be $500.) Considering its 64-bit PCI slots, two built-in 3Com NICs, onboard ATI video, onboard Adaptec SCSI, and four available DIMMs, that’s not a half-bad price. It’s obviously not a hobbyist board. This dude’s intended to go in servers.
At any rate, reviews are all over the place and the quality varies. Far and away the best I found was at Ace’s Hardware, which tested the things the people actually likely to buy this board would do with it: workstation-type stuff.

Anand does his usual 10 pages’ worth of butt-kissing and he’s living under the delusion that people will buy this board to play Quake. However, he does test the board with plain old Thunderbird and Duron CPUs (they work, but AMD won’t support that configuration). Skip ahead to page 11 after reading the story at Ace’s. His tests suggest that for some purposes, a dual Duron-850 can be competitive with a dual P3-933. That information is more interesting than it is useful at this point in time, but we’ve all been curious about dual Duron performance, so if and when an inexpensive AMD SMP board becomes available, we have some idea what we’ll be able to do with it.

All the usual hardware sites put in their two cents’ worth; by the time I read Ace’s and Anand’s and Tom’s reviews I stopped learning anything new.

Some of it bordered on ridiculous. One site (I forget which) observed that the AMD 766 northbridge looks just like a K6-2 and said they must have made it look that way just to remind us where the Athlon came from. Whatever. The AMD 766 northbridge and the K6-2 use the same heat spreader. The intention is to keep the chip cool. It’s not there just for looks–the chip runs hot. But that’s the kind of quality information we get from most hardware sites these days, sadly.

More immediately useful and interesting, but not yet available, is the nVidia nForce chipset. You can read about it at Tom’s and elsewhere. This is technically nVidia’s second chipset, their first being the chipset in Microsoft’s X-Box. This chipset is a traditional two-chip solution, linked by AMD’s high-speed HyperTransport. It includes integrated sound better than anything Creative Labs or Cirrus Logic currently offer (now we know what nVidia was doing with those engineers they were hiring from Aureal) and integrated GeForce 2MX video connected via a high-speed port that would be equivalent to AGP 6X, if such a thing existed. And nVidia pairs up DDR controllers to give dual-channel, 128-bit memory with a bandwidth of 4.256 GB/sec. Suddenly DDR provides greater bandwidth than Rambus in addition to lower latency.
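That bandwidth figure checks out: DDR moves data twice per clock, so two 64-bit channels on a 133 MHz memory clock come to exactly 4.256 GB/sec. A quick sketch of the arithmetic:

```python
def ddr_bandwidth_gb(bus_mhz, bus_width_bits, channels=1):
    """Peak DDR bandwidth in decimal GB/sec: clock rate times
    two transfers per clock times the bus width in bytes."""
    transfers_per_sec = bus_mhz * 1e6 * 2        # double data rate
    bytes_per_transfer = bus_width_bits // 8
    return transfers_per_sec * bytes_per_transfer * channels / 1e9

# Two 64-bit PC2100 (133 MHz) channels, as in the nForce:
print(ddr_bandwidth_gb(133, 64, channels=2))   # -> 4.256
```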

Just for good measure, the chipset includes Ethernet too.

What’s all this mean? High-speed motherboards with everything integrated (and with integrated peripherals definitely worth using) for around 200 bucks. By the end of the summer, last summer’s monster PC will be integrated onto two chips and priced for building PCs at the $600-$800 price point.

This summer’s computer revolution won’t be Windows XP.

And, in something not really related, here’s something you probably missed, unfortunately. Start rubbing your hands together if you enjoy the Mac-PC or Intel-AMD wars. This is a hard benchmark comparing AMD Athlon, Intel P3, and Motorola PowerPC architectures and their relative speed. The methodology: under Linux, cross-compile a Linux kernel for the SPARC architecture (compiling native isn’t a fair comparison; this way they’re all creating identical code and therefore doing the same work, or as close to it as you’re gonna get). You know those claims that a Mac is twice as fast as an equivalent-speed Pentium III running Photoshop? I always countered that with Microsoft Office benchmarks, where a Mac is about 1/4 the speed of a PC, at best, when doing a mail merge. Neither is a fair test. This benchmark resembles one.

Anyway… Yes, a G4 is faster than the equivalently clocked Pentium III. How much faster? Roughly 10 percent. And an Athlon turns out to be about 20 percent slower than the equivalent P3. Of course, the Athlon reaches clock speeds the P3 never will, and the Athlon is also much more than 20 percent cheaper than the equivalently-clocked P3, so who really cares?

This still isn’t a totally fair comparison of CPU architecture, since chipsets vary (and it’s entirely possible that the difference between the P3 and the Athlon in speed is due to chipset quality), but if indeed the G4 was twice as fast as the P3, it would surely outperform it by better than 10 percent in this test. But it’s a decent comparison of real-world performance, because it doesn’t matter how much better your CPU is if it’s burdened by a chipset that doesn’t show up to play on game day.

Most telling is the end, where he gives the cost per speed unit. AMD wins that chart handily.

Enough of my babble. Read all about it here.

More Like This: AMD Hardware

So, who makes the best Mac utility?

When it comes to Macintoshes, I feel like a catcher playing shortstop. Yes, a good athlete can play both positions, but very few can play both exceptionally well. The mindset is completely different. The ideal physique for each is completely different.
I fix Macs for the good of my team. Period. Right now my job is to nurse along a dozen Macs for four months until the new fiscal year starts, then they can replace them. I think those machines have four months left in ’em. The bigger question is, do I have four months’ tolerance left in me? Hard to say.

But thanks to my pile of Macs on their last legs (these are 120 MHz machines with no L2 cache and a pathetic 10 MB/sec SCSI-II bus, and they’ve never had regular maintenance) I’ve gotten a lot of first-hand experience with Mac utilities suites.

I said in my book that Norton Utilities for Windows is, in most regards, the second-best utilities suite out there. Problem is, the other two big suites split first place between them, and whichever one finishes third in a given category is usually so bad at it that you’d prefer not to use it at all. So Norton Utilities compromises its way to the top like a politician. The Mac Norton Utilities is the same way. There are two reasons to buy Norton Utilities for the Mac: Speed Disk and Norton Disk Doctor. Period. The rest of the stuff on the CD is completely, totally worthless. It eats up memory, slows the system down, and causes crashes. Copy SD and NDD to a CD-R, then run over the original with your car. They’re that bad. But of course your end-users will install them, since all software is good, right? You should install everything just in case you need it someday. Famous last words, I say…

But you need Speed Disk and Norton Disk Doctor desperately. Macs are as bad as Microsoft OSs about fragmentation, and they’re far worse about trashing their directory structures. Use a Mac normally for a week, and use a PC for a week while turning it off improperly on a whim (with automatic ScanDisk runs disabled); at the end of the week, run a disk utility on each. The Mac will have more disk errors. Apple’s Disk First Aid is nice and non-invasive, but it catches only a small percentage of the problems. NDD scoops up all of the routine stuff that Disk First Aid misses.

As for Speed Disk, it works. It’s not the least bit configurable, but it has enough sense to put frequently used stuff at the front of the disk and stuff you never touch at the end.

But if you need to do what Norton Utilities says it does, you really need Tech Tool Pro. Its defragmenter is at least the equal of Speed Disk, and its disk repair tools will fix problems that cause NDD to crash. Plus it has hardware diagnostics, and it’ll cleanly and safely zap the Mac’s PRAM (its equivalent to CMOS) and cleanly rebuild the Mac’s desktop (something that should be done once a month).

But the best disk repair tool of them all is Disk Warrior. Unlike the other suites, Disk Warrior just assumes there are problems with your disk. That’s a pretty safe assumption. It goes in, scavenges the disk, rebuilds the directory structure, and asks very, very few questions. Then it rewrites the directory in optimal fashion, speeding up your Mac’s disk access by about as much as a normal defragmentation would.

Oh yes, Disk Warrior comes with a system extension that checks all data before it gets written to the drive, to reduce errors. I really don’t like that idea. It costs speed, and there’s always something an extension conflicts with. The idea just makes me nervous. Then again, since I regard the Mac’s directory structure as a time bomb, maybe I should use it. But I’m torn.

Which would I buy? If I could only have one of the three, I’d take Tech Tool Pro, because it’s the most complete of the three. I’d rather have both Tech Tool and Disk Warrior at my disposal. When a Mac goes bad, you can automatically run Disk Warrior, then rebuild the desktop with Tech Tool Pro before doing anything else, and about half the time one or the other of those (or the combination of them) will fix the problem. Or they’ll fix little problems before they become big ones.

Disk Warrior is positively outstanding for what it does, but it’s a one-dimensional player. For now, it does ship with a disk optimizer, but it’s limited to optimizing one of the Mac’s two common disk formats. At $79 vs. $99 for Tech Tool Pro, if you’ve only got a hundred bucks to spend, you’re better off with Tech Tool Pro.

As for Norton Utilities, I’ve got it, and it’s nice to have a third-string disk utility just in case the other two can’t fix it. Sometimes a Mac disk problem gets so hairy that you have to run multiple disk utilities in round-robin fashion to fix it. So run Disk Warrior, then Tech Tool Pro, then Norton Disk Doctor, then Apple Disk First Aid. Lather, rinse, and repeat until all four agree there are no disk errors.