12/09/2000

I can’t let this stupid move by the Cubs go. And I thought the Royals could do some stupid things. But the Royals never let George Brett walk away. The Greatest Ever flirted with following Whitey Herzog over to the Cardinals in the early 1980s, so the Royals locked him up with a long-term contract and the promise of a front-office position after he retired. The result: Brett had some great years and some not-so-great years, but no matter how he was hitting and how his team was playing, people flocked to Royals Stadium just to see him.

Mark Grace is the closest thing baseball has to a George Brett today. He hits left-handed and has a sweet swing. And he’s been playing first base for the Cubs since the Harding administration. OK, since 1988. And he was near and dear to every true Cub fan’s heart.

I remember when I first saw him play. Leon Durham had pretty much played himself out of a job as the Cubs’ first baseman. My dad and I thought Rafael Palmeiro was the Cubs’ first baseman of the future. Then I saw Mark Grace play in a spring training game. Wow! He had gold glove written all over him. If the ball was in the same time zone as him, he grabbed it. And at the plate, he reminded me a little of Brett. Solid contact hitter, good for lots of doubles and the occasional homer.

Within two years, Grace was a clubhouse leader. In the 1990s, he led the majors in hits and doubles. Without a doubt, he was the Mr. Cub of the 90s. He had offers to go elsewhere. But he wanted to be one of The Rare Ones. He wanted to be like Brooks Robinson and Carl Yastrzemski and George Brett, who spent the entirety of their long careers with one team, and spend his whole career with the Cubs. He wanted that more than a World Series ring (good thing, because the Cubs won’t be headed there any time soon). A class act. Loyalty was more important than glory.

But now there’s another young left-handed-hitting first baseman coming up, and he’ll play for less money, and Grace had an injury-plagued season, so now he’s Leon Durham. Only instead of being banished to Cincy, he’s banished to Arizona.

Getting rid of Grace makes good business sense. He’s expensive. He doesn’t hit for as much power as you’d like from a corner infielder. He may never hit .330 again–if you can’t get 40 homers from your first baseman, it’s nice to get 200 hits. The fans will miss Grace, but they’ll come out to see the Cubs regardless of who’s on the field. They could replace Mark Grace with Leon Durham and Sammy Sosa with Keith Moreland and the fans would still come. Cubs fans are like that. And management knows it.

You can replace Grace’s bat, and you can live without his glove. But you can’t replace the man. That was true of a lot of the men the Cubs have let go over the years: Bill Buckner. Andre Dawson. Rick Sutcliffe. Greg Maddux. Joe Girardi. Rafael Palmeiro. And now, Mark Grace. These are the kind of men whose presence makes the other eight guys on the field play better. The Cubs never understood that. Never will.

And that, I submit, is the reason the Cubs are perennial losers. They manage to keep the occasional outstanding individual in a Cub uniform (Ernie Banks, Ryne Sandberg), but for the most part they treat players as commodities, and with a few rare exceptions, field a team of forgettable players day in and day out.

The Cubs didn’t deserve Mark Grace. The real tragedy is it took Mark Grace 13 years to figure that out. Good luck in Arizona, Mark. Go get that World Series ring you were willing to deny yourself. Then head back to Wrigley Field and ask your old boss Andy MacPhail if he wants to touch it.

Thanks for all of the birthday well-wishes. Lunch was good. Dinner was good. The homemade peanut brittle from my aunt was even better. She always sends me peanut brittle for my birthday, it’s always great, and it always lasts me about three days. At about 10 p.m. I was carrying on about how I had about 45 minutes of youth left. I was born around 10:45, you see, and a long time ago I set 26 as the age when you become old. So, speak up sonny, I can’t hear you. But don’t torque me off; I’m apt to hit you with me cane.

And thanks to Al… for the greeting, for the page, and for the hits. I had a spike yesterday. As for being the eighth wonder of the Wintel world, I seriously doubt it (both because I’m not that good, and because neither half of Wintel would claim me: I do a lot of Microsoft-baiting and probably even more Intel-baiting, mostly because I just can’t stand Andy Grove), but I appreciate the sentiment.

Computer buying advice

Some sound computer buying advice. Here’s a Washington Post article on buying new PCs. Easy to understand in layman’s terms. And the advice is for the most part sound too, though I recommend always buying a good video card–a TNT2 will just add $60 or so to the cost of a low-end box and everything will run more nicely. The box I’m typing on right now has a cheap Cirrus Logic-based card in it, and the high CPU usage of its drivers hurts multitasking noticeably, even if I’m just browsing the Web while listening to music.

In a year this’ll be a moot point, as all chipsets will have serviceable embedded video. Even the enraging Intel i740, though not good for games, was great for productivity use and much better all around than this Cirrus and Trident garbage, and Intel’s newest chipsets have i740 derivatives in them. Future VIA chipsets will have S3 video in them. Same story.

I buy crap so you don’t have to–but don’t get me wrong. I buy the good stuff too. That way I’ll know the difference.

No more wimpy PC sound for me. I just connected an ancient but still awesome Harman/Kardon 330A receiver (built in the late 1960s, I’m guessing — it once belonged to my dad) to my computer along with a pair of KLH 970A speakers I picked up for 30 bucks at Best Bait-n-Switch (unfortunately, the only nearby place that sells KLH speakers). These things are scarcely bigger than the cheap desktop speakers that came with the last PC I bought — 7 3/8″ high by 4 5/8″ wide by 4 3/8″ deep — but with the volume cranked to about 1/3 I can hear the music throughout my apartment. I imagine at 2/3 I’d meet my neighbors. I won’t try that — I’m not interested in sharing my great tunage.

I can’t believe neither my mom nor my sister wanted this receiver — honestly, every time I’ve mentioned this thing at an audio place the salesperson has asked if I was interested in selling it — but hey, my dad would have wanted me to have a kickin’ audio setup for my PCs, right? This’ll work great for Royals broadcasts over the ‘Net once baseball season starts again, but not only that, this combination kicks out the jams almost as hard as punk legends The MC5, so I’m not complaining.

I’m happy enough with the results that, rather than replacing my dying CD changer, I think I’ll mount my extra 15-gig drive somewhere on my LAN once my Windows Me experiments are over and put my Plextor UltraPlex CD-ROM drive to work ripping my entire CD collection, which I’ll then encode at 320 kbps. At that bitrate, I doubt I’ll notice much difference from the original CDs.
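
If I were scripting that batch job today, a little Python driving the LAME command-line encoder would do it. A minimal sketch, assuming the rips land in one directory as WAV files and lame is on the path (the directory names here are made up):

    # Batch-encode ripped WAV files to 320 kbps MP3 with the LAME CLI.
    import os
    import subprocess

    SRC = r"D:\rips"    # hypothetical: where the WAV rips land
    DEST = r"E:\mp3"    # hypothetical: the 15-gig drive on the LAN

    for name in os.listdir(SRC):
        if name.lower().endswith(".wav"):
            wav = os.path.join(SRC, name)
            mp3 = os.path.join(DEST, name[:-4] + ".mp3")
            # -b 320 forces a constant 320 kbps stream
            subprocess.run(["lame", "-b", "320", wav, mp3], check=True)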

If you’re like me and live with several PCs in close proximity to one another, rather than plugging an endless number of cheap desktop speakers into them, pick up an inexpensive receiver or use a castaway. You can plug a PC into any stereo input except phono, so most modern receivers should accommodate at least three PCs, and the speaker options are limited only by the receiver’s capabilities and available space. You’re likely to be much happier with such a setup than with any desktop speakers you’ll find, and a receiver plus speakers will usually cost much less than multiple pairs of any desktop speakers worth having. Just be very careful to keep your speakers away from any floppies and Zips and other magnetic media you might have. Some bookshelf speakers may be magnetically shielded, but don’t count on it.

Pentium 4 performance is precedented

Thoughts on the Pentium 4 launch. No big surprises: a massively complex new processor design, limited availability, and systems from all the usual suspects, at high prices of course. And, as widely reported previously, disappointing performance.
This isn’t the first time this has happened. The Pentium Pro was a pretty lackluster performer too–it ran 32-bit software great, but Win9x was still the dominant OS at the time, and Win9x still had a lot of 16-bit code in it. So a 200 MHz Pentium Pro cost considerably more than a 200 MHz Pentium and, for most of the people buying it, was significantly slower. History repeats itself…

Intel revised the Pentium Pro to create the Pentium II, with tweaks to improve 16-bit performance, but of course massive clock speed ramps made that largely irrelevant. Goose the architecture to 600 MHz and you’re going to blow away a 200 MHz previous-generation chip.

That’s what you’re going to see here. Intel fully intends to scale this chip beyond 2 GHz next year, and that’s when you’ll see this chip come into its own. Not before. And by then Intel will probably have changed their socket (they intend to change it sometime next year), so buying a P4 today gives you no future-proofing anyway.

It never makes sense to be the first on the block with Intel’s newest chip. Never. Ever. Well, if you’re the only one on the block with a computer, then it’s OK. The P4 has issues. The P3 had issues (remember the serial number?) and was really just a warmed-over P2 anyway. The P2 was a warmed-over Pentium Pro. The Pentium Pro had serious performance issues. The Pentium had serious heat problems and it couldn’t do simple arithmetic (“Don’t divide, Intel inside!”). The last new Intel CPU whose only issue was high price was the 486, and that was in April 1989.

Unless you’re doing one of the few things the P4 really excels at (like encoding MPEG-4 video or high-end CAD), you’re much better off sticking with a P3 or an Athlon and sinking the extra money into more RAM or a faster hard drive. But chances are you already knew that.

Time to let the cat out of the bag. The top-secret project was to try to dual-boot WinME and Win98 (or some other earlier version) without special tools. But Win98’s DOS won’t run WinME, and WinME’s DOS seems to break Win98 (it loads, but Explorer GPFs on boot).

The best method I can come up with is to use the GPL boot manager XOSL. It just seems like more of an achievement to do it without third-party tools, but at least it’s a free third-party tool. You could also do it with LILO or with OS/2’s Boot Manager, but few people will have Boot Manager and LILO will require some serious hocus-pocus. Plus I imagine a lot of people will like XOSL’s eye candy and other gee-whiz features, though I really couldn’t care less, seeing as it’s a screen you look at for only a few seconds at boot time.

Apple, you call this tech support?

This is why I don’t like Apple. Yesterday I worked on a new dual-processor G4. It was intermittent. Didn’t want to drive the monitor half the time. After re-seating the video card and monitor cable a number of times and installing the hardware the computer needed, it started giving an error message at boot:

The built-in memory test has detected a problem with cache memory. Please contact a service technician for assistance.

So I called Apple. You get 90 days’ free support, period. (You also only get a one-year warranty unless you buy the AppleCare extended warranty, which I’m loath to do. But we’d probably better do it for this machine, since it all but screams “lemon” every time we boot it.) So, hey, we can’t get anywhere with this, so let’s start burning up the support period.

The hold time was about 15 seconds. I mention this because that’s the only part of the call that impressed me and my mother taught me to say whatever nice things I could. I read the message to the tech, who then put me on hold, then came back in about a minute.

“That message is caused by a defective memory module. Replace the third-party memory module to solve the problem,” she said.

“But the computer is saying the problem is with cache, not with the memory,” I told her. (The cache for the G4 resides on a small board along with the CPU core, sort of like the first Pentium IIs, only it plugs into a socket.) She repeated the message to me. I was very impressed that she didn’t ask whether we’d added any memory to the system (of course we had–Apple factory memory would never go bad, I’m sure).

I seem to remember at least one of my English teachers telling me to write exactly what I mean. Obviously the Mac OS 9 programmers didn’t have any of my English teachers.

I took the memory out and cleaned it with a dollar bill, then put it back in. The system was fine for the rest of the afternoon after this, but I have my doubts about this system. If the problem returns, I’ll replace the memory. When that turns out not to be the problem, I don’t know what I’ll do.

We’ve been having some problems lately with Micron tech support as well, but there’s a big difference there. With Apple, if you don’t prove they caused the problem, well, it’s your problem, and they won’t lift a finger to help you resolve it. Compare this to Micron. My boss complained to Micron about the length of time it was taking to resolve a problem with one particular system. You know what the Micron tech said? “If this replacement CPU doesn’t work, I’ll replace the system.” We’re talking a two-year-old system here.

Now I know why Micron has more business customers than Apple does. When you pay a higher price for a computer (whether that’s buying a Micron Client Pro instead of a less-expensive, consumer-oriented Micron Millennia, or an Apple G4 instead of virtually any PC), you expect quick resolution to your computer problems because, well, your business doesn’t slow down just because your computer doesn’t work right. Micron seems to get this. Apple doesn’t.

And that probably has something to do with why our business now has 25 Micron PCs for every Mac. There was a time when that situation was reversed.

The joke was obvious, but… I still laughed really hard when I read today’s User Friendly. I guess I’m showing my age here by virtue of getting this.

Then again, three or four years back, a friend walked up to me on campus. “Hey, I finally got a 64!” I gave him a funny look. “Commodore 64s aren’t hard to find,” I told him. Then he laughed. “No, a Nintendo 64.”

It’s funny how nicknames recycle themselves.

For old times’ sake. I see that Amiga, Inc. must be trying to blow out the remaining inventory of Amiga 1200s, because they’re selling this machine at unprecedented low prices. I checked out www.softhut.com just out of curiosity, and I can get a bare A1200 for $170. A model with a 260MB hard drive is $200.  On an Amiga, a drive of that size is cavernous, though I’d probably eventually rip out the 260-megger and put in a more modern drive.

The A1200 was seriously underpowered when it came out, but at that price it’s awfully tempting. It’s less than used A1200s typically fetch on eBay, when they show up. I can add an accelerator card later after the PowerPC migration plan firms up a bit more. And Amigas tend to hold their value really well. And I always wanted one.

I’m so out of the loop on the Amiga it’s not even funny, but I found it funny that as I started reading, so much started coming back. The main commands are stored in a directory called c, and it gets referred to as c: (many crucial Amiga directories are referenced this way, e.g. prefs: and devs:). Hard drives used to be DH0:, DH1:, etc., though I understand they changed that later to HD0:, HD1:, etc.
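
If the devicename: idea seems foreign, here’s a toy Python sketch of how those logical names resolve (the assign table below is strictly illustrative, not a real Amiga configuration):

    # Toy resolver for AmigaDOS-style logical device names ("assigns").
    ASSIGNS = {
        "c":     "/amiga/system/c",      # shell commands
        "devs":  "/amiga/system/devs",   # device drivers
        "prefs": "/amiga/system/prefs",  # preferences editors
        "dh0":   "/amiga/dh0",           # first hard drive
    }

    def resolve(amiga_path):
        """Turn e.g. 'devs:printers/hp' into a host-style path."""
        device, _, rest = amiga_path.partition(":")
        root = ASSIGNS[device.lower()]
        return root + ("/" + rest if rest else "")

    print(resolve("c:dir"))             # /amiga/system/c/dir
    print(resolve("devs:printers/hp"))  # /amiga/system/devs/printers/hp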

So what was the Amiga like? I get that question a lot. Commodore released one model that did run System V Unix (the Amiga 3000UX), but for the most part it ran its own OS, known originally as AmigaDOS and later shortened to AmigaOS. Since the OS being developed internally at Amiga, Inc., and later at Commodore after they bought Amiga, wasn’t going to be ready on time for a late 1984/early 1985 release, Commodore contracted with British software developer Metacomco to develop an operating system. Metacomco delivered a Tripos-derived OS, written in MC68000 assembly language and BCPL, that offered fully pre-emptive multitasking, multithreading, and dynamic memory allocation (two of which even Mac OS 9 doesn’t have yet: OS 9 does have multithreading, but its multitasking is cooperative and its memory allocation static).
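
If the cooperative-vs-pre-emptive distinction is fuzzy, this little Python sketch (purely illustrative, nothing Amiga-specific about it) shows the difference: a cooperative task runs until it volunteers to give up the CPU, while a pre-emptive scheduler time-slices threads whether they volunteer or not.

    import threading

    # Cooperative: each task is a generator that runs until it yields.
    # A task that never yields hogs the machine, which is exactly the
    # weakness of a cooperative model.
    def task(name):
        for i in range(3):
            print("cooperative", name, i)
            yield                        # explicit hand-off to the scheduler

    def run_cooperative(tasks):
        tasks = list(tasks)
        while tasks:
            for t in list(tasks):
                try:
                    next(t)
                except StopIteration:
                    tasks.remove(t)

    run_cooperative([task("A"), task("B")])

    # Pre-emptive: the scheduler interrupts threads on its own clock.
    def worker(name):
        for i in range(3):
            print("pre-emptive", name, i)  # no yield needed

    for n in ("A", "B"):
        threading.Thread(target=worker, args=(n,)).start()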

Commodore spent the better part of the next decade refining and improving the OS, gradually replacing most of the old BCPL code with C code, stomping bugs, adding features and improving its looks. The GUI never quite reached the level of sophistication that Mac OS had, though it certainly was usable and had a much lower memory footprint. The command line resembled Unix in some ways (using the / for subdirectories rather than \) and DOS in others (you used devicename:filename to address files). Some command names resembled DOS, others resembled Unix, and others resembled neither (presumably they were Tripos-inspired, but I know next to nothing about Tripos).

Two modern features that AmigaOS never got were virtual memory and a disk cache. As rare as hard drives were for much of the Amiga’s existence, this wasn’t missed too terribly, though Commodore announced in 1989 that AmigaDOS 1.4 (never released) would contain these features. AmigaDOS 1.4 gained improved looks, became AmigaOS 2.0, and was released without the cache or virtual memory (though both were available as third-party add-ons).

As for the hardware, the Amiga used the same MC68000 series of CPUs that the pre-PowerPC Macintoshes used. The Amiga also had a custom chipset that provided graphics and sound coprocessing, years before this became a standard feature on PCs. This was an advantage for years, but became a liability in the early 1990s. While Apple and the cloners were buying off-the-shelf chipsets, Commodore continued having to develop their own for the sake of backward compatibility. They revved the chipset once in 1991, but it was too little, too late. While the first iteration stayed state of the art for about five years, it only took a year or two for the second iteration to fall behind the times, and Motorola was having trouble keeping up with Intel in the MHz wars (funny how history repeats itself), so the Amigas of 1992 and 1993 looked underpowered. Hamstrung by clueless marketing and clueless management (it’s arguable which was worse), Commodore bled engineers for years and fell further and further behind before finally running out of cash in 1994.

Though the Amiga is a noncontender today, its influence remains. It was the first commercially successful personal computer to feature color displays of more than 16 colors (it could display up to 4,096 at a time), stereo sound, and pre-emptive multitasking–all features most of us take for granted today. And even though it was widely dismissed as a gaming machine in its heyday, the best-selling titles for the computer that ultimately won the battle are, you guessed it, games.

Mac mice, PC data recovery

A two-button Mac mouse!? Frank McPherson asked what I would think of the multibutton/scroll wheel support in Mac OS X. Third-party multibutton mice have been supported via extensions for several years, but not officially from Ye Olde Apple. So what do I think? About stinkin’ time!

I use 3-button mice on my Windows boxes. The middle button double-clicks. Cuts down on clicks. I like it. On Unix, where the middle button brings up menus, I’d prefer a fourth button for double-clicking. Scroll wheels I don’t care about. The page up/down keys have performed that function just fine for 20 years. But some people like them; no harm done.

Data recovery. One of my users had a disk yesterday that wouldn’t read. Scandisk wouldn’t fix it. Norton Utilities 2000 wouldn’t fix it. I called in Norton Utilities 8. Its disktool.exe includes an option to revive a disk, essentially by doing a low-level format in place (presumably it reads the data, formats the cylinder, then writes the data back). That did the trick wonderfully. Run Disktool, then run NDD, then copy the contents to a fresh disk immediately.

So, if you ever run across an old DOS version of the Norton Utilities (version 7 or 8 certainly; earlier versions may be useful too), keep it! It’s something you’ll maybe need once a year. But when you need it, you need it badly. (Or someone you support does, since those in the know never rely on floppies for long-term data storage.) Recent versions of Norton Utilities for Win32 don’t include all of the old command-line utilities.
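
I don’t know exactly what Norton does under the hood, but the read-retry-rewrite idea is simple enough to sketch. Something like this Python fragment captures it (the raw \\.\A: device name is a Windows convention, and you’d need administrator rights; treat this as an illustration, not a replacement for Disktool):

    # Salvage a flaky floppy sector by sector into an image file.
    SECTOR = 512
    SECTORS = 2880                       # 1.44 MB disk

    with open(r"\\.\A:", "rb") as disk, open("rescue.img", "wb") as img:
        for n in range(SECTORS):
            data = b"\x00" * SECTOR      # placeholder if all retries fail
            for attempt in range(3):     # retry each sector a few times
                try:
                    disk.seek(n * SECTOR)
                    data = disk.read(SECTOR)
                    break
                except OSError:
                    pass                 # bad read; try again
            img.write(data)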

Hey, who was the genius who decided it was a good idea to cut, copy and paste files from the desktop? One of the nicest people in the world slipped up today copying a file. She hit cut instead of copy, then when she went to paste the file to the destination, she got an error message. Bye-bye file. Cut/copy-paste works fine for small files, but this was a 30-meg PowerPoint presentation. My colleague who supports her department couldn’t get the file back. I ride in on my white horse, Norton Utilities 4.0 for Windows in hand, and run Unerase off the CD. I get the file back, or so it appears. The undeleted copy won’t open. On a hunch, I hit paste. Another copy comes up. PowerPoint chokes on it too.

I tried everything. I ran PC Magazine’s Unfrag on it, which sometimes fixes problematic Office documents. No dice. I downloaded a PowerPoint recovery program. The document crashed the program. Thanks guys. Robyn never did you any harm. Now she’s out a presentation. Not that Microsoft cares, seeing as they already have the money.

I walked away wondering what would have happened if Amiga had won…

And there’s more to life than computers. There’s songwriting. After services tonight, the music director, John Scheusner, walks up and points at me. “Don’t go anywhere.” His girlfriend, Jennifer, in earshot, asks what we’re plotting. “I’m gonna play Dave the song that he wrote. You’re more than welcome to join us.”

Actually, it’s the song John and I wrote. I wrote some lyrics. John rearranged them a little (the way I wrote it, the song was too fast–imagine that, something too fast from someone used to writing punk rock) and wrote music.

I wrote the song hearing it sung like The Cars (along the lines of “Magic,” if you’re familiar with their work), but what John wrote and played sounded more like Joe Jackson. Jazzy. I thought it was great. Jennifer thought it was really great.

Then John tells me they’re playing it Sunday. They’re what!? That will be WEIRD. And after the service will be weird too, seeing as everybody knows me and nobody’s ever seen me take a lick of interest in worship music before.

I like it now, but the lyrics are nothing special, so I don’t know if I’ll like it in six months. We’ll see. Some people will think it’s the greatest thing there ever was, just because two people they know wrote it. Others will call it a crappy worship song, but hopefully they’ll give us a little credit: At least we’re producing our own crappy worship songs instead of playing someone else’s.

Then John turns to me on the way out. “Hey, you’re a writer. How do we go about copyrighting this thing?” Besides writing “Copyright 2000 by John Scheusner and Dave Farquhar” on every copy, there’s this.  That’s what the Web is for, friends.

~~~~~~~~~~

Note: I post this letter without comment, since it’s a response to a letter I wrote. My stuff is in italics. I’m not sure I totally agree with all of it, but it certainly made me think a lot and I can’t fault the logic.

From: John Klos
Subject: Re: Your letter on Jerry Pournelle’s site

Hello, Dave,

I found both your writeup and this letter interesting. Especially interesting are both your reaction and Jerry’s reaction to my initial letter, which had little to do with my server. To restate my feelings, I was disturbed about Jerry’s column because it sounded so damned unscientific, and I felt that he had a responsibility to do better.
His conclusion sounded like something a salesperson would say, and in fact did sound like things I have heard from salespeople and self-promoted, wannabe geeks. I’ve heard all sorts of tales from people like this, such as the fact that computers get slower with age because the RAM wears out…

Mentioning my Amiga was simply meant to point out that not only was I talking about something that bothered me, but I am running systems that “conventional wisdom” would say are underpowered. However, based upon what both you and Jerry have replied, I suppose I should’ve explained more about my Amiga.

I have about 50 users on erika (named after a dear friend). At any one moment, there are anywhere from half a dozen to a dozen people logged on. Now, I don’t claim to know what a Microsoft Terminal Server is, nor what it does, but it sounds something like an ’80s way of Microsoft subverting telnet.

My users actually telnet (technically, they all use ssh; telnet is off), they actually do tons of work in a shell, actually use pine for email and links (a lynx successor) for browsing. I have a number of developers who do most of their development work in any of a number of languages on erika (Perl, C, C++, PHP, Python, even Fortran!).

Most of my users can be separated into two groups: geeks and novices. Novices usually want simple email or want to host their domain with a minimum of fuss; most of them actually welcome the simplicity, speed, and consistency of pine as compared to slow and buggy webmail. Who has used webmail and never typed a long letter only to have an error destroy the entire thing?

The geeks are why sixgirls.org got started. We all had a need for a place to call home, as we all have experienced the nomadic life of being a geek on the Internet with no server of our own. We drifted from ISP to ISP looking for a place where our Unix was nice, where our sysadmins listened, and where corporate interests weren’t going to yank stuff out from underneath us at any moment. Over the years, many ISPs have stopped offering shell access and generally have gotten too big for the comfort of geeks.

If Jerry were replying to this now, I could see him saying that shells are old school and that erika is perhaps not much more than a home for orphans and die-hard Unix fans. I used to think so, too, but the more novice users I add, the more convinced I am that people who have had no shell experience at all prefer the ease, speed, and consistency of the shell over a web browser type interface. They’re amazed at the speed. They’re surprised over the ability to instantly interact with others using talk and ytalk.

The point is that this is neither a stopgap nor a dead end; this IS the future.

I read your message to Jerry and it got me thinking a lot. An awful lot. First on the wisdom of using something other than what Intel calls a server, then on the wisdom of using something other than a Wintel box as a server. I probably wouldn’t shout it from the mountaintops if I were doing it, but I’ve done it myself. As an Amiga veteran (I once published an article in Amazing Computing), I smiled when I saw what you were doing with your A4000. And some people no doubt are very interested in that. I wrote some about that on my Weblogs site (address below if you’re interested).

I am a Unix Systems Administrator, and I’ve set up lots of servers. I made
my decision to run everything on my Amiga based upon several
criteria:
One, x86 hardware is low quality. I stress test all of the servers I
build, and most x86 hardware is flawed in one way or another. Even if
those flaws are so insignificant that they never affect the running of a
server, I cannot help but wonder why my stress testing code will run just
fine on one computer for months and will run fine on another computer for
a week, but then dump a core or stop with an error. But this is quite
commonplace with x86 hardware.

For example, my girlfriend’s IBM brand FreeBSD computer can run the stress testing software indefinitely while she is running the GIMP, Netscape, and all sorts of other things. This is one of the few PCs that never has any problems with this stress testing software. But most of the other servers I set up, from PIIIs, dual processor PIIIs and dual Celerons, to Cyrix 6x86 and MII, end up having a problem with my software after anywhere from a few days to a few weeks. But they all have remarkable uptimes, and none crash for any reason other than human error (like kicking the cord).

However, my Amigas and my PowerMacs can run this software indefinitely.

So although I work with x86 extensively, it’s not my ideal choice. So what
else is there? There’s SPARC, MIPS, m68k, PowerPC, Alpha, StrongARM… plenty of choices.

I have a few PowerMacs and a dual processor Amiga (68060 and 200 MHz PPC 604e); however, NetBSD for PowerMacs is not yet as mature as I need it to be. For one, there is no port of MIT pthreads, which is required for MySQL. Several of my users depend on MySQL, so until that is fixed, I can’t consider using my PowerMac. Also, because of the need to boot using Open Firmware, I cannot set up my PowerMac to boot unattended. Since my machine is colocated, I would have to be able to run down to the colocation facility if anything ever happened to it. That’s
fine if I’m in the city, but what happens when I’m travelling in Europe?

SPARC is nice, but expensive. If I could afford a nice UltraSPARC, I
would. However, this project started as a way to have a home for
geeks; coming up with a minimum of $3000 for something I didn’t even plan to charge for wasn’t an option.

Alpha seems too much like PC hardware, but I’d certainly be willing to
give it a try should someone send me an old Alpha box.

With MIPS, again, the issue is price. I’ve always respected the quality of
SGI hardware, so I’d definitely set one up if one were donated.

StrongARM is decent. I even researched this a bit; I can get an ATX
motherboard from the UK with a 233 MHz StrongARM for about 310 quid. Not too bad.

But short of all of that, I had a nice Amiga 4000 with a 66 MHz 68060, 64-bit RAM, and wide Ultra SCSI on board. Now what impresses me about this
hardware is that I’ve run it constantly. When I went to New Orleans last
year during the summer, I left it in the apartment, running, while the
temperatures were up around 100 degrees. When I came back, it was
fine. Not a complaint.

That’s the way it’s always been with all of my Amigas. I plug them in,
they run; when I’m done, I turn off the monitor. So when I was considering
what computer to use as a server when I’d be paying for a burstable 10
Mbps colocation, I wanted something that would be stable and consistent.

 Hence Amiga.

One of my users, after reading your letter (and, I guess, Jerry’s), thought that I should mention the load average of the server; I assume this is because of the indirectly stated assumption that a 66 MHz 68060 is just squeaking by. To clarify that, a 66 MHz 68060 is faster per MHz than any Pentium by a measurable margin when using either optimised code (such as a distributed.net client) or straight compiled code (such as LAME). We get about 25,000 hits a day, for a total of about 200 megs a day, which accounts for one eighth of one percent of the CPU time. We run as a Stratum 2 time server for several hundred computers, we run POP and IMAP services, sendmail, and we’re the primary nameserver for perhaps a hundred machines. With a distributed.net client running, our load average hovers around 1.18, which means that without the dnet client, we’d be idle most of the time.

If that weren’t good enough, NetBSD 1.5 (we’re running 1.4.2) has a much
improved virtual memory system (UVM), improvements and speedups in the TCP stack (and complete IPv6 support), scheduler enhancements, good softdep support in the filesystem (as if two 10k rpm 18 gig IBM wide ultra drives aren’t fast enough), and more.

In other words, things are only going to get better.

The other question you raise (sort of) is why Linux gets so much more
attention than the BSD flavors. I’m still trying to figure that one
out. Part of it is probably due to the existence of Red Hat and
Caldera and others. FreeBSD gets some promotion from Walnut
Creek/BSDi, but one only has to look at the success of Slackware to
see how that compares.

It’s all hype; people love buzz words, and so a cycle begins: people talk
about Linux, companies spring up to provide Linux stuff, and people hear
more and talk more about Linux.

It’s not a bad thing; anything that moves the mainstream away from
Microsoft is good. However, the current trend in Linux is not good. Red
Hat (the company), arguably the biggest force in popularising Linux in the
US, is becoming less and less like Linux and more and more like a software company. They’re releasing unstable release after unstable release with no apologies. Something I said a little while ago, and someone has been using as his quote in his email:
In the Linux world, all of the major distributions have become
companies. How much revenue would Red Hat generate if their product was flawless? How much support would they sell?

I summarise this by saying that it is no longer in their best interest to
have the best product. It appears to be sufficient to have a working
product they can use to “ride the wave” of popularity of Linux.

I used Linux for a long time, but ultimately I was always frustrated with
the (sometimes significant) differences between the distributions, and
sometimes the differences between versions of the same distribution. Why
was it that an Amiga running AmigaDOS was more consistent with Apache and Samba docs than any particular Linux? Where was Linux sticking all of
these config files, and why wasn’t there documentation saying where the
stuff was and why?

When I first started using BSD, I fell in love with its consistency, its no bull attitude towards ports and packages, and its professional and clean feel. Needless to say, I don’t do much Linux anymore.

It may well be due to the people involved. Linus Torvalds is a
likeable guy, a smart guy, easily identifiable by a largely computer
illiterate press as an anti-Gates. And he looks the part. Bob Young is
loud and flamboyant. Caldera’s the company that sued Microsoft and probably would have won if it hadn’t settled out of court. Richard
Stallman torques a lot of people off, but he’s very good at getting
himself heard, and the GPL seems designed at least in part to attract
attention. The BSD license is more free than the GPL, but while
freedom is one of Stallman’s goals, clearly getting attention for his
movement is another, and in that regard Stallman succeeds much more than the BSD camp. The BSD license may be too free for its own good.

Yes, there aren’t many “figureheads” for BSD; most of the ones I know of
don’t complain about Linux, whereas Linux people often do complain about the BSD folks (the major complaint being the license).

I know Jerry pays more attention to Linux than the BSDs partly because Linux has a bigger audience, but he certainly knows more about Linux than about any other Unix. Very soon after he launched his website, a couple of Linux gurus (most notably Moshe Bar, himself now a Byte columnist) started corresponding with him regularly, and they’ve made Linux a reasonably comfortable place for him, answering his questions and getting him up and going.

So then it should be their responsibility, as Linux advocates, to give
Jerry a slightly more complete story, in my opinion.

As for the rest of the press, most of them pay attention to Linux only because of the aforementioned talking heads. I have a degree in journalism from supposedly the best journalism school in the free world, which gives me some insight into how the press works (or doesn’t, as is usually the case). There are computer journalists who get it, but a good deal of them are writing about computers for no reason in particular, and their previous job and their next job are likely to be writing about something else. In journalism, if three sources corroborate something, you can treat it as fact. Microsoft-sympathetic sources are rampant, wherever you are. The journalist probably has a Mac sympathy since there’s a decent chance that’s what he uses. If he uses a Windows PC, he may or may not realize it. He’s probably heard of Unix, but his chances of having three local Unix-sympathetic sources to use consistently are fairly slim. His chances of having three Unix-sympathetic sources who agree enough for him to treat what they say as fact (especially if one of his Microsofties contradicts it) are probably even more slim.

Which furthers my previous point: Jerry’s Linux friends should be more
complete in their advocacy.

Media often seems to desire to cater to the lowest common denominator, but it is refreshing to see what happens when it doesn’t; I can’t stand US
news on TV, but I’ll willingly watch BBC news, and will often learn more
about US news than if I had watched a US news program.

But I think that part of the problem, which is compounded by the above, is
that there are too many journalists that are writing about computers,
rather than computer people writing about computers.

After all, which is more presumptuous: a journalist who thinks that he/she
can enter the technical world of computing and write authoritatively about
it, or a computer person who attempts to be a part time journalist? I’d
prefer the latter, even if it doesn’t include all of the accoutrements
that come from the writings of a real journalist.

And looking at the movement as a whole, keep in mind that journalists look for stories. Let’s face it: A college student from Finland writing an operating system and giving it away and millions of people thinking it’s better than Windows is a big story. And let’s face it, RMS running
around looking like John the Baptist extolling the virtues of something called Free Software is another really good story, though he’d get a lot more press if he’d talk more candidly about the rest of his life, since that might be the hook that gets the story. Can’t you see this one now?

Yes. Both of those stories would seem much more interesting than, “It’s
been over three years and counting since a remote hole was found in
OpenBSD”, because it’s not sensationalistic, nor is it interesting, nor
can someone explain how you might end up running OpenBSD on your
appliances (well, you might, but the fact that it’s secure means that it’d
be as boring as telling you why your bathtub hasn’t collapsed yet).

Richard Stallman used to keep a bed in his office at the MIT Artificial Intelligence Lab.

He slept there. He used the shower down the hall. He didn’t have a home outside the office. It would have distracted him from his cause: Giving away software.

Stallman founded the Free Software movement in 1983. Regarded by many as the prophet of his movement (and looking the part, thanks to his long, unkempt hair and beard), Stallman is both one of its most highly regarded programmers and perhaps its most outspoken activist, speaking at various functions around the world.

Linux was newsworthy, thanks to the people behind it, way back in 1993 when hardly anyone was using it. Back then, they were the story. Now, they can still be the story, depending on the writer’s approach.

If there are similar stories in the BSD camp, I’m not aware of them. (I can tell you the philosophical differences between OpenBSD,  NetBSD and FreeBSD and I know a little about the BSD directory structure, but that’s where my knowledge runs up against its limits. I’d say I’m more familiar with BSD than the average computer user but that’s not saying much.) But I can tell you my editor would have absolutely eaten this up. After he or she confirmed it wasn’t fiction.

The history is a little dry; the only “juicy” part is where Berkeley had
to deal with a lawsuit from AT&T (or Bell Labs; I’m not doing my research
here) before they could make their source free.

Nowadays, people are interested because a major layer of Mac OS X is BSD, and is taken from the FreeBSD and NetBSD source trees. Therefore, millions of people who otherwise know nothing about BSD or its history will end up running it when Mac OS X Final comes out in January; lots of people already are running Mac OS X Beta, but chances are good that the people who bought the Beta know about the fact that it’s running on BSD.

And it’s certainly arguable that BSD is much more powerful and robust than Windows 2000. So there’s a story for you. Does that answer any of your question?

Yes; I hope I’ve clarified my issues, too.

Neat site! I’ll have to keep up on it.

Thanks,
John Klos

Scanner troubleshooting secrets

~Mail Follows Today’s Post~

Scanner wisdom. One of the things I did last week was set up a Umax scanner on a new iMac DV. The scanner worked perfectly on a Windows 98 PC, but when I connected it to the Mac it developed all sorts of strange diseases–not warming up properly, only scanning 1/3 of the page before timing out, making really loud noises, crashing the system…

I couldn’t resolve it, so I contacted Umax technical support. The tech I spoke with reminded me of a number of scanner tips I’d heard before but had forgotten, and besides that, I rarely if ever see them in the scanner manuals.

  • Plug scanners directly into the wall, not into a power strip. I’ve never heard a good explanation of why scanners are more sensitive to this than any other peripheral, but I’ve seen it work.
  • Plug USB scanners into a powered hub, or better yet, directly into the computer. USB scanners shouldn’t need power from the USB port, since they have their own power source, but this seems to make a difference.
  • Download the newest drivers, especially if you have a young operating system like Mac OS 9, Mac OS X, Windows Me, or Windows 2000. It can take a little while for scanner drivers to stabilize completely. Don’t install off the CD that came with the scanner, because it might be out of date. Get the newest stuff from the manufacturer’s Web site.
  • Uninstall old drivers before installing the new ones. This was the problem that bit me. The new driver didn’t totally overwrite the old one, creating a conflict that made the scanner go goofy.
  • Buy your scanner from a company that has a track record of providing updated drivers. Yes, that probably means you shouldn’t buy the $15 scanner with the $25 mail-in rebate. Yes, that means don’t buy HP. Up until a couple of years ago, getting NT drivers out of HP was like pulling teeth; now HP is charging for Windows 2000 drivers. HP also likes to abandon and then pick back up Mac support on a whim. Terrible track record.

Umax’s track record is pretty darn good. I’ve downloaded NT drivers for some really ancient Umax scanners after replacing old Macs with NT boxes. I once ran into a weird incompatibility with a seven-year-old Umax scanner–the machine was a B&W G3 with a wide SCSI controller (why, I don’t know) running Mac OS 8.6. Now that I think about it, I think the incompatibility was with the controller card. The scanner was discontinued years ago (before Mac OS 8 came out), so expecting them to provide a fix was way out of line. I can’t think of a problem I’ve ever had with a Umax that they didn’t resolve, so when I spec out a scanner at work, Umax is always on my short list.

And here’s something I just found interesting. Maybe I’m the only one. But in reading the mail on Jerry Pournelle’s site, I found this. John Klos, administrator of sixgirls.org, takes Jerry to task for saying a Celeron can’t be a server. He cites his 66 MHz 68060-based Amiga 4000, which apparently acts as a mail and Web server, as proof. Though the most powerful m68k-based machine ever made, its processing power pales next to any Celeron (save the original cacheless Celeron 266 and 300).

I think the point he was trying to make was that Unix plays by different rules. Indeed, when your server OS isn’t joined at the hip to a GUI and a Web browser and whatever else Gates tosses in on a whim, you can do a lot more work with less. His Amiga would make a lousy terminal server, but for serving up static Web pages and e-mail, there’s absolutely nothing wrong with it. Hosting a bunch of Web sites on an Amiga 4000 just because I could sounds very much like something I’d try myself if I had the hardware available or was willing to pay for the hardware necessary.

But I see Jerry Pournelle’s point as well.

It’s probably not the soundest business practice to advertise that you’re running off a several-year-old sub-100 MHz server, because that makes people nervous. Microsoft’s done a pretty admirable job of pounding everything slower than 350 MHz into obsolescence and the public knows this. And Intel and AMD have done a good job of marketing their high-end CPUs, resulting in people tending to lay blame at the CPU’s feet if it’s anything but a recent Pentium III. And, well, if you’re running off a shiny new IBM Netfinity, it’s very easy to get it fixed, or if need be, to replace it with another identical one. I know where to get true-blue Amiga parts and I even know which ones are interchangeable with PCs, but you might well be surprised to hear you can still get parts and that some are interchangeable.

But I’m sure there are far, far more sub-100 MHz machines out there in mission-critical situations functioning just fine than anyone wants to admit. I know we had many at my previous employer, and we have several at my current job, and it doesn’t make me nervous. The biggest difference is that most of them have nameplates like Sun and DEC and Compaq and IBM on them, rather than Commodore. But then again, Commodore’s reputation aside, it’s been years since I’ve seen a computer as well built as my Amiga 2000. (The last was the IBM PS/2 Model 80, which cost five times as much.) If I could get Amiga network cards for a decent price, you’d better believe I’d be running that computer as a firewall/proxy and other duties as assigned. I could probably get five years’ uninterrupted service from old Amy. Then I’d just replace her memory and get another ten.

The thing that makes me most nervous about John Klos’ situation is the business model’s dependence on him. I have faith in his A4000. I have faith in his ability to fix it if things do go wrong (anyone running NetBSD on an Amiga knows his machine better than the onsite techs who fix Netfinity servers know theirs). But there’s such a thing as too much importance. I don’t let Apple certified techs come onsite to fix our Macs anymore at work, because I got tired of them breaking other things while they did warranty work and having to fix three things after they left. I know their machines better than they do. That makes me irreplaceable. A little job security is good. Too much job security is bad, very bad. I’ll be doing the same thing next year and the year after that. It’s good to be able to say, “Call somebody else.” But that’s his problem, not his company’s or his customers’.

~~~~~~~~~~

From: rock4uandme
To: dfarq@swbell.net
Sent: Wednesday, October 25, 2000 1:22 PM
Subject: i`m having trouble with my canon bjc-210printer…

i`m having trouble with my canon bjc210 printer it`s printing every thing all red..Can you help???
 
 
thank you!!    john c
 
~~~~~~~~~

Printers aren’t my specialty and I don’t think I’ve ever seen a Canon BJC-210, but if your printer has replaceable printheads (some printers make the printhead part of the ink cartridge while others make them a separate component), try replacing them. That was the problem with the only Canon printer I’ve ever fixed.
 
You might try another color ink cartridge too; sometimes those go bad even if they still have ink in them.
 
If that fails, Canon does have a tech support page for that printer. I gave it a quick look and it’s a bit sketchy, but maybe it’ll help. If nothing else, there’s an e-mail address for questions. The page is at http://209.85.7.18/techsupport.php3?p=bjc210 (to save you from navigating the entire www.ccsi.canon.com page).
 

I hope that helps.

Dave
 
~~~~~~~~~~
 

From: Bruce Edwards
Subject: Crazy Win98 Networking Computer Problem

Dear Dave:

I am having a crazy computer problem which I am hoping you or your readers
may be able to give me a clue to.  I do have this posted on my daily
journal, but since I get very little traffic, I thought your readership or
yourself may be able to help.  Here’s the problem:

My wife’s computer suddenly and inexplicably became very slow when accessing
web sites and usually when accessing her e-mail.  We access the internet
normally through the LAN I installed at home.  This goes to a Wingate
machine which is connected to the aDSL line allowing shared access to the
internet.

My computer still sends and receives e-mail and accesses the web at full
speed.  Alice’s computer now appears to access the web text at about the
speed of a 9600 baud modem with graphics coming down even more slowly if at
all.  Also, her e-mail (Outlook Express) usually times out when going
through the LAN to the Wingate machine and then out over the internet. 
The LAN is working since she is making a connection out that way.

File transfer via the LAN between my PC and hers goes at full speed.
Something is causing her internet access to slow to a crawl while mine is
unaffected.  Also, it appears to be only part of her internet access.  I can
telnet out from her computer and connect to external servers very fast, as
fast as always.  I know telnet is just simple text, but the connection to
the server is very rapid too while connecting to a server via an http
browser is much much slower and then, once connected, the data flows so slow
it’s crazy.

Also, dial-up and connect to the internet via AOL and then use her mail
client and (external to AOL) browser works fine and is as speedy as you
would expect for a 56K modem.  What gives?

I tried reinstalling windows over the existing set-up (did not do anything)
and finally started over from “bare metal” as some like to say.  Reformat
the C drive.  Reinstall Windows 98, reinstall all the drivers, apps, tweak
the configuration, get it all working correctly.  Guess what?  Same slow
speed via the aDSL LAN connection even though my computer zips out via the
same connection.  Any suggestions?

Sincerely,

Bruce W. Edwards
e-mail:  bruce@BruceEdwards.com
Check www.BruceEdwards.com/journal  for my daily journal.

Bruce  🙂
Bruce W. Edwards
Sr. I.S. Auditor  
~~~~~~~~~~

From: Dave Farquhar [mailto:dfarq@swbell.net]
Sent: Monday, October 23, 2000 6:16 PM
To: Edwards, Bruce
Cc: Diana Farquhar
Subject: Re: Crazy Win98 Networking Computer Problem

Hi Bruce,
 
The best thing I can think of is your MTU setting–have you run any of those MTU optimization programs? Those can have precisely the effect you describe at times. Try setting your MTU back to 1500 and see what that does. While I wholeheartedly recommend them for dialup connections, MTU tweaking and any sort of LAN definitely don’t mix–to the point that I almost regret even mentioning the things in Optimizing Windows.
 
Short of that, I’d suggest ripping out all of your networking protocols and adapters from the Network control panel and adding back in TCP/IP and only the other things you absolutely need. This’ll keep Windows from getting confused and trying to use the wrong transport, and eliminate the possibility of a corrupted TCP/IP stack. Both are remote possibilities, but possible. Though your reinstall should have eliminated the latter…
 
If it’s neither of those things, I’d start to suspect hardware. Make sure you don’t have an interrupt conflict (rare these days, but I just saw one a couple weeks ago so I don’t rule them out). Also try swapping in a different cable or NIC in your wife’s machine. Cables of course go bad more frequently than NICs, though I’ve had horrible luck with cheap NICs. At this point I won’t buy any ethernet NIC other than a Bay Netgear, 3Com or Intel.
 
I hope that helps. Let me know how it goes for you.

Dave 
~~~~~~~~~~
From: Bruce Edwards

Hi Dave:
 
Thank you for posting on your web site. I thought you would like an update.
 
I verified the MTU setting was still at 1500 (it was).  I have not used one of the optimizing programs on this PC.
 
I removed all the adapters from the PC via the control panel.  Rebooted and only added back TCP/IP on the Ethernet card. 
 
I double checked the interrupts in the control panel, there do not appear to be any conflicts and all devices report proper function.
 
I still need to 100% verify the wiring/hubs.  I think they are O.K. since that PC, using the same adapter, is able to file share with other PCs on the network.  That also implies that the adapter is O.K.
 
I will plug my PC into the same hub and port as my wife’s using the same cable to verify that the network infrastructure is O.K.
 
Then, I’ll remove the adapter and try a different one.
 
Hopefully one of these things will work.
 
Cheers,
 
Bruce
~~~~~~~~~~

This is a longshot, but… I’m wondering if maybe your DNS settings are off, or if your browser might be set to use a proxy server that doesn’t exist. That’s the only other thing I can think of that can cause sporadic slow access, unless the problem is your Web browser itself. Whichever browser you’re using, have you by any chance tried installing and testing the other one to see if it has the same problems?
 
In my experience, IE 5.5 isn’t exactly the greatest of performers, or when it does perform well, it seems to be by monopolizing CPU time. I’ve gotten much better results with IE 5.0. As for Netscape, I do wish they’d get it right again someday…
 
Thanks for the update. Hopefully we can find an answer.

Dave 
~~~~~~~~~~ 

10/27/2000

Poor Windows 98 Network Performance. The MTU once again reared its ugly head at work yesterday. A 1.5 meg file was taking five minutes to copy. Forget about copying anything of respectable size–we’re talking about an all-day affair here. A perplexed colleague called me and asked why. Once again I suspected MTU tinkering. After setting the MTU back to the default of 1500 (the best value to use on a LAN), we were able to copy a 53-meg file in about 30 seconds, which was of course a tremendous improvement.

Once again, don’t run the MTU optimizers on computers that’ll be used on a LAN. The slight increase in modem speed isn’t worth crippling your network performance.
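
On a Win9x box, those optimizers leave their fingerprints in the registry under the NetTrans keys. If you’d rather script the check than dig through Regedit, a sketch like this (written against modern Python’s winreg module, so take it as illustrative) lists any MaxMTU overrides:

    # List MaxMTU overrides left behind by MTU "optimizers" on Win9x.
    import winreg

    BASE = r"System\CurrentControlSet\Services\Class\NetTrans"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BASE) as nettrans:
        i = 0
        while True:
            try:
                sub = winreg.EnumKey(nettrans, i)
            except OSError:
                break                    # no more protocol bindings
            i += 1
            with winreg.OpenKey(nettrans, sub) as binding:
                try:
                    mtu, _ = winreg.QueryValueEx(binding, "MaxMTU")
                    print(sub, "MaxMTU =", mtu)  # anything but 1500 is suspect on a LAN
                except FileNotFoundError:
                    print(sub, "no override (driver default, normally 1500)")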

Basic Mac/PC networking. A friend passed along a question about how to build a network either a Mac or a PC could talk on. That can become a complex subject (I wonder if it’s worthy of a book?), but here are the basics.

If all you need is for the Macs to talk to Macs and PCs to talk to PCs, go to the local computer store and pick up a 100-megabit hub with enough ports to accommodate your machines, cables for your Macs, and network cards for your PCs if they don’t already have them. Get a quality hub and cards–the only really inexpensive brand I trust anymore is Netgear, and the general consensus is that 3Com and Intel are better still, though they’ll cost you more. I’ve been bitten by enough failures to decide that $15 network cards and $25 hubs just aren’t worth the trouble.

The Macs should just start talking via the AppleTalk protocol once you direct AppleTalk to the Ethernet port through the AppleTalk control panel. The PCs will need network card drivers installed, then they can use TCP/IP, NetBEUI or IPX/SPX to communicate. Best to use TCP/IP to limit the number of protocols on your network. Assign your PCs and Macs IP addresses in the 192.168.1.x range, with a subnet mask of 255.255.255.0. You don’t need to worry about the gateway or router settings unless you’ll be using the network to provide Internet access as well.
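
If you want to double-check that the addresses you handed out really do land on one subnet, here’s a quick Python sanity check (the addresses are just examples):

    # Verify that hosts fall inside 192.168.1.0 with a 255.255.255.0 mask.
    import ipaddress

    lan = ipaddress.ip_network("192.168.1.0/255.255.255.0")

    for host in ["192.168.1.10", "192.168.1.20", "192.168.2.5"]:
        addr = ipaddress.ip_address(host)
        print(host, "->", "on the LAN" if addr in lan else "wrong subnet")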

If you want Internet access, you can add an inexpensive cable/DSL router for about $150 that will serve your PCs and Macs without any problems. Assign your router to 192.168.1.1.

Most mid-range HP and Lexmark laser printers will accept a network card capable of talking to both PCs and Macs.

If you want to share files cross-platform, you’re looking at some bucks. You’ll want a Windows NT or 2000 server (AppleShare IP will talk to both, but it’s not as reliable as NT, unfortunately), which will set you back about $1000 for the software. Alternatively, you could install DAVE ( www.thursby.com ), a product that makes Macs speak SMB over TCP/IP like Windows boxes do, so they can use Windows file shares and Windows-hosted printers.

I also see that Thursby is offering a program called MacSOHO, at $99, to accommodate PC/Mac networks. This isn’t suitable for large networks but for a small-scale LAN in the home, small business, school computer lab or church, it should suffice. You can download a trial version before you plunk down the cash.

Abandoned Intellectual Property. I read a piece on this subject at OSOpinion over the weekend, and I’ve been thinking about it ever since. There are, of course, a lot of people calling for the abolition of copyright, or for radical changes to it. This is, believe it or not, one of the tamer proposals I’ve read.

I’m definitely of two minds on this one. Take my first-ever publication for money, in 1991. Compute Magazine, before Bob Guccione had managed to totally ram it into the ground, opted to buy the spring break project I collaborated on with a friend. We were writing a video game for the Commodore 64 and 128, and we were getting tired of trying to draw the title screen manually with graphics commands (bad enough on the 128, which had Basic commands for such things, but on the 64 you were talking peeks and pokes all over the place–someone really should have written this thing back in 1982!), so we wrote a program to do the work for us. You loaded the sprites, moved ’em around, hit a key, and it gave you the Basic code to re-create the screen, suitable for inclusion in your program. We never finished the game, but we got a cool $350 and international recognition (OK, so it was a dwindling audience, but how many high school kids can say they’re published authors at age 16?).

Now, the problem. General Media whittled Compute down until it was basically just another PC mag, abandoning the multiplatform support that made it so great (I read about my beloved Commie 8-bits but still got the opportunity to learn about Macs, Amigas and PCs–what could be better?). Market share continued to dwindle, and eventually Guccione and GM sold out to Ziff-Davis, who fulfilled your subscription with a choice of mags (I remember I opted for PC/Computing). So the copyright went to Ziff-Davis, who never did anything with the old Compute stuff. A few years later, Ziff-Davis fell on hard times and eventually hacked itself up into multiple pieces. Who owns the old Compute stuff now? I have no idea. The copyrights are still valid and enforceable. I seriously doubt anyone cares anymore whether you own the Nov. 1991 issue of Compute when you run MOB Mover on your 64/128 or an emulator, but where do you go for permission?

The same goes for a lot of old software. Sure, it’s obsolete, but it’s useful to someone. A 68020-based Mac would be useful to someone if they could get software for it. But unless the original owner still has his or her copies of WriteNow, Aldus SuperPaint and Aldus Persuasion (just to name a few desirable but no-longer-marketable abandoned titles) to give you, you’re out of luck. Maybe you’ll get lucky and find some 1995-era software to run on it, but then it’ll still be a dog of a computer.

But do we have an inalienable right to abandoned intellectual property, free of charge? Sure, I want the recordings Ric Ocasek made with his bands before The Cars. A lot of people want to get their hands on that stuff, but Ocasek’s not comfortable with that work. Having published some things that I regret, I can sympathize with the guy. I like how copyright law condemns that stuff to obscurity for a time. (Hopefully it’d stay obscure in the public domain too, because it’s not very good, but limiting the number of copies that can exist clinches it.)

Obscurity doesn’t mean nobody is exploited when the work is stolen, though. I can’t put it any better than Jerry Pournelle did.

I don’t like my inability to walk into a record store and buy Seven Red Seven’s Shelter or Pale Divine’s Straight to Goodbye or The Caulfields’ Whirligig, but I couldn’t easily buy them in 1991, when they were still in print, either. But things like that aren’t impossible to obtain: That’s what eBay and Half.com are for.

For the majority of the United States’ existence, copyright ran 28 years, renewable for another 28. This seems to me a reasonable compromise. Those who produce content can still make a living, and if the work is no longer commercially viable 28 years later, it becomes freely available. If it’s still viable, the author gets another 28-year ride. And Congress could sweeten the deal by offering tax write-offs for the premature release of copyrighted material into the public domain, which would offer a neat solution to the “But by 2019, nobody would want WriteNow anymore!” problem. Reverting to this older, simpler law would also solve the “work for hire” problem that exploits musicians and some authors.
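
To put numbers on the 28-plus-28 scheme, here’s a toy sketch (Python, purely illustrative; the dates are examples, not legal advice):

# Toy arithmetic: when does a work go public-domain under a
# 28-year term, renewable once for another 28? Illustrative only.
TERM = 28

def public_domain_year(published, renewed):
    """Year the copyright would lapse under the 28+28 scheme."""
    return published + TERM * (2 if renewed else 1)

print(public_domain_year(1991, renewed=False))  # 2019 if never renewed
print(public_domain_year(1991, renewed=True))   # 2047 if renewed once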

All around, this scenario is certainly more desirable for a greater number of people than the present one.

From: Bruce Edwards

Dear Dave:

I am having a crazy computer problem which I am hoping you or your readers may be able to give me a clue to. I do have this posted on my daily journal, but since I get very little traffic, I thought your readership or yourself may be able to help. Here’s the problem:

My wife’s computer suddenly and inexplicably became very slow when accessing web sites and usually when accessing her e-mail.  We access the internet normally through the LAN I installed at home.  This goes to a Wingate machine which is connected to the aDSL line allowing shared access to the internet.

My computer still sends and receives e-mail and accesses the web at full speed. Alice’s computer now appears to access the web text at about the speed of a 9600 baud modem with graphics coming down even more slowly if at all. Also, her e-mail (Outlook Express) usually times out when going through the LAN to the Wingate machine and then out over the internet. The LAN is working since she is making a connection out that way.

File transfer via the LAN between my PC and hers goes at full speed. Something is causing her internet access to slow to a crawl while mine is unaffected. Also, it appears to be only part of her internet access. I can telnet out from her computer and connect to external servers very fast, as fast as always. I know telnet is just simple text, but the connection to the server is very rapid too, while connecting to a server via an http browser is much much slower and then, once connected, the data flows so slow it’s crazy.

Also, dialing up and connecting to the internet via AOL and then using her mail client and (external to AOL) browser works fine, and is as speedy as you would expect for a 56K modem. What gives?

I tried reinstalling Windows over the existing set-up (did not do anything) and finally started over from “bare metal,” as some like to say. Reformat the C drive. Reinstall Windows 98, reinstall all the drivers, apps, tweak the configuration, get it all working correctly. Guess what? Same slow speed via the aDSL LAN connection, even though my computer zips out via the same connection. Any suggestions?

Sincerely,

Bruce W. Edwards

~~~~~~~~~~

Hi Bruce,

The best thing I can think of is your MTU setting–have you run any of those MTU optimization programs? Those can have precisely the effect you describe. Try setting your MTU back to 1500 and see what that does. While I wholeheartedly recommend those programs for dialup connections, MTU tweaking and any sort of LAN definitely don’t mix–to the point that I almost regret even mentioning the things in Optimizing Windows.
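
If you want to check what’s actually in effect without digging through the registry, Windows’ ping can probe it with the don’t-fragment flag. Here’s a rough sketch in Python (-f and -l are the standard Windows ping switches; the gateway address is just a placeholder, and the exact error wording varies by Windows version):

# Sketch: probe the effective MTU with ping's don't-fragment (-f)
# and payload-size (-l) switches. Payload + 28 bytes of IP/ICMP
# header = MTU, so a 1472-byte payload passing means the full 1500.
import subprocess

HOST = "192.168.1.1"  # placeholder: any host on your LAN

def fits(payload):
    """True if a payload of this size passes without fragmenting."""
    result = subprocess.run(
        ["ping", "-f", "-l", str(payload), "-n", "1", HOST],
        capture_output=True, text=True)
    # Windows prints something like "Packet needs to be fragmented
    # but DF set." when the payload is too big for the MTU.
    return result.returncode == 0 and "fragmented" not in result.stdout

lo, hi = 0, 1472
while lo < hi:  # binary search for the largest payload that fits
    mid = (lo + hi + 1) // 2
    if fits(mid):
        lo = mid
    else:
        hi = mid - 1
print("Largest unfragmented payload:", lo, "-> MTU", lo + 28)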

Short of that, I’d suggest ripping out all of your networking protocols and adapters from the Network control panel, then adding back in TCP/IP and only the other things you absolutely need. This’ll keep Windows from getting confused and trying to use the wrong transport, and it eliminates the possibility of a corrupted TCP/IP stack. Both are remote possibilities–and your reinstall should have eliminated the second one already…
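
Incidentally, your telnet-fast, web-slow observation is the classic MTU signature: telnet moves tiny packets that fit under almost any MTU, while a web page arrives in full-size packets that stall when the MTU is wrong. If you want to quantify it, here’s a quick sketch (Python; the host name is a placeholder) that times the small-packet handshake separately from the bulk transfer:

# Sketch: time a bare TCP connect versus a full HTTP fetch. With an
# MTU problem, the connect (small packets) stays fast while the
# fetch (full-size packets) crawls -- exactly the symptom described.
import socket, time
import urllib.request

HOST = "example.com"  # placeholder: any reachable web server

t0 = time.time()
with socket.create_connection((HOST, 80), timeout=10):
    pass  # handshake only; this is roughly what telnet exercises
print("TCP connect took %.2f seconds" % (time.time() - t0))

t0 = time.time()
body = urllib.request.urlopen("http://" + HOST + "/", timeout=60).read()
print("Fetched %d bytes in %.2f seconds" % (len(body), time.time() - t0))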

If it’s neither of those things, I’d start to suspect hardware. Make sure you don’t have an interrupt conflict (rare these days, but I just saw one a couple of weeks ago, so I don’t rule them out). Also try swapping in a different cable or NIC in your wife’s machine. Cables, of course, go bad more frequently than NICs, though I’ve had horrible luck with cheap NICs. At this point I won’t buy any Ethernet NIC other than a Bay Networks Netgear, 3Com or Intel.

I hope that helps. Let me know how it goes for you.

Hardware mailbag for Sunday

I wrote up some stuff, forgot to upload it, and lost it. I hate when that happens. So I’ll hit the mailbag for this late Sunday post.

From: Robert Bruce Thompson

Good post on AMD/Intel. At this point, the only thing saving Intel is the fact that AMD doesn’t have enough fab to keep up with demand.

—-

Thanks.

I know AMD and IBM have an agreement dating back to their K6 woes, and I really don’t know why AMD doesn’t have IBM manufacture Athlons and/or Durons to supplement their own capacity. There are issues with having someone else make your chips–longer ramp-up time, possibly lower yields, and of course lower profits–but I have to think the market share they’d gain until they can get another fab built would make it worthwhile.

I really wonder if the thing standing in the way of that isn’t technical, but rather Jerry Sanders’ “real men have fabs” attitude.

From: Robert Bruce Thompson

Well, perhaps. But his attitude is correct. Consider CPUs. Intel (bunch of fabs) = dominant; AMD (a couple fabs) = far second; everyone else (no fabs) = non-players. Same thing in chipsets. Intel (bunch of fabs) = dominant; VIA (a fab) = far second; SiS, ALi, etc. (no fabs) = non-players. But I agree that AMD should sub out CPUs to IBM, who can make chips with the best of ’em.

—-

But I wonder if that attitude toward fabs is what’s keeping Sanders and AMD from subcontracting; it seems almost as if using someone else’s fab would look like an admission of weakness, and he’s not willing to risk that. That’s an Intel-like mistake. When opportunity comes, you have to seize it, regardless of how it looks in the short term.

From: J.H. Ricketson

Subject: More bargains

Dave –

Another link for surplus/overstock bargains:

www.computersurplusoutlet.com

I have placed one order with them. Completely satisfied, start to finish. Outpost couldn’t have done better. Also, they are in Nevada, which saves me, a Californian, the 8.5% local shopping penalty. They have a good selection of stuff at very good prices.

Regards,

JHR

J. H. RICKETSON
[JHR@WarlockLltd.com]

—-

Thanks.

AMD’s turnaround

AMD just turned in their fourth consecutive profitable quarter, and they say they expect to sell out of Athlons this year. That exposes AMD’s prime weakness: even though Intel has repeatedly failed to execute this year while AMD has had smooth sailing, Intel still holds the tremendous advantage of capacity. AMD has two fabs. I don’t remember how many Intel has. Eight?

I was talking with someone before church Wednesday night about new PCs, and he said, “I hear AMD is actually outperforming Intel these days.” AMD, of course, has always been known as the budget chipmaker. In 1992, if you wanted a bargain PC, you got an AMD 386DX/40. In the mid-1990s, for a bargain you bought AMD 486s. In the late 90s, you bought AMD K6s. Suddenly, AMD’s not doing much in the low end. They’ve stopped taking orders on K6-2s, and they’ll ship their final one this month. The Duron’s a great chip, but they’re not making them in huge quantities. They cite the lack of an inexpensive integrated chipset, which is a perfectly valid reason, but there’s another one: they’re selling every Athlon they can make, so why sacrifice high-margin chips to make lower-margin ones?

It’s been a very interesting year. AMD bet the company on its new fab, knowing that if they made one mistake, there was every possibility they were toast. Going into 2000, Intel looked like a company that could do no wrong. But Intel made tons of mistakes this year (the i820 and 1.13 GHz PIII recalls, other high-speed chips that you could read about but not buy, the lack of a suitable replacement for the venerable 440BX chipset, delays on the P4), having a year that made some of AMD’s bad years look good, or at least acceptable. Meanwhile, AMD executed. Unlike some, I was fairly confident that AMD would find some way to survive, but survival was about all I expected from them, and about all anyone had any right to expect, given their track record and financial condition.

The guy at church asked if he should sell his Intel stock and buy some AMD. I told him I didn’t think so. If AMD can turn around, so can Intel. As for whether AMD is a good investment, I don’t know. They’d look a whole lot better to me if they had a couple more fabs. They’re doing great on CPUs and flash memory, but they need enough capacity to afford to flood the market with Durons, and enough capacity to manufacture chipsets if they so choose, rather than developing chipsets as stopgaps until VIA and SiS and ALi can step in, then phasing them out. Intel learned that building both chipsets and CPUs lets you control your own destiny to a great degree, not to mention make an extra $20-$30 per computer sold. Intel’s mistake was guessing incorrectly (or perhaps not caring) what consumers wanted, betting on Rambus, and then delivering a bunch of technologies it was quickly forced to recall.

I don’t know that AMD can turn itself into a giant like Intel, but a strong AMD is good for all of us. It keeps Intel honest.