Selling untested memory is new? Whatever.

An article on the “new” practice of low-tier manufacturers selling untested memory got attention on Slashdot this week.

This isn’t a new practice. I’ve known about it for about eight years.

There’s a pretty good reason why all name-brand memory is priced pretty much the same. You can occasionally catch a break in pricing, but on average, a Kingston module is going to cost about the same as a Crucial module, and so will any other top-tier brand. Memory from a computer manufacturer like HP or Sun may cost a bit more still, ostensibly because the manufacturer tests for compatibility. They may or may not actually test the module you buy, but at least they’ll guarantee it not only works but works in the machine you put it in.

If you’re building your own PC, by all means buy Crucial or Kingston memory, or go with a specialty high-performance brand like Mushkin. The same holds true for upgrading a name-brand PC. But pay the extra money for server memory from the company that made your server. An hour of downtime will obliterate the $100 you might save.

But there’s another tier of memory. I first became aware of it back in the days when a typical issue of Computer Shopper was as thick as the Greater St. Louis White Pages. Tucked away in the back, there was always someone who beat the typical memory prices, usually by a long shot–at least 30%. For several years, that was how I bought my memory, and for a long time I got away with it.

Then along came Slot 1 and Super 7. Once CPU clock speeds broke the 233 MHz barrier, systems became a whole lot harder on their memory. I don’t know what was special about 233 MHz, but that cheap commodity memory just didn’t cut it anymore. Suddenly, I started noticing that commodity memory often didn’t pass the rudimentary memory test that computers perform before they load the operating system. That’s akin to flunking grade-school recess, so I started looking into it.

What I found was that commodity memory generally isn’t tested, or it’s tested very loosely. Worse yet, the chips on some commodity memory were tested and failed. They were certified for use in things like pagers and other consumer devices, but they weren’t up to the higher demands of computers.

So, since I’ve known this for about eight years, you can imagine what I thought when I read the headline “Why untested DRAMs are getting into more and more products.” I was thinking, hey, an upgrade! Since it didn’t test bad, at least there’s a chance it’ll work!

Maybe this practice has evolved in the past few months, as the author of the article in question alleges. But it’s hardly a new trick. In the highly competitive no-name clone market, this has been going on since at least the days of the 486. What was going on in the days of the 386 is even scarier.

Will Dell and the boys follow suit, as the author fears? I doubt it. PCs are problematic enough as it is, and it only takes a few months to lose a reputation that was built over the course of a decade. Shipping commodity memory isn’t like outsourcing technical support to India–there’s a fair percentage of your customer base who will never use your tech support. All of your customers will use your memory.

I can’t imagine commodity memory ending up in any name-brand PC, unless it’s a name brand whose ship is sinking fast.

But I guess I shouldn’t be surprised that this old trick is showing up again. The business is competitive, PC sales are down, the economy isn’t what it was 10 years ago, and profit margins are impossibly thin. If today’s untested and/or defective memory is better than 1997’s, someone’s going to use it.

But part of the story never changes: Always buy your memory from a reputable manufacturer and distributor, so you know what you’re getting and whence it came. You’ll save a lot of frustration over the life of the PC that receives the upgrade.

Running Knoppix on a Proliant with a SmartArray controller

If you’ve ever tried to run Knoppix on a Compaq or HP Proliant with a Smart Array controller, you probably got a rude surprise.

Here’s how to make the hard drive(s) show up.

Open a shell window.

Type ‘su’ (no quotes) to become root.

Type ‘insmod cciss’ — you may get a message that it’s already installed.

Type ‘cd /dev’

Type ‘MAKEDEV cciss’ (this is case-sensitive).

Now Knoppix will see your drives so you can mount them and/or edit the partitions with qtparted.
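Put together, the whole sequence looks something like this. The last two commands are only an example; the cciss device name (c0d0p1 and so on) and the mount point depend on how your array is partitioned:

su
insmod cciss                        # may report the module is already loaded
cd /dev
MAKEDEV cciss                       # case-sensitive; creates the /dev/cciss device nodes
mkdir -p /mnt/array                 # example mount point
mount /dev/cciss/c0d0p1 /mnt/array  # example device name; yours may differ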

How to get that dusty old train running again

It’s the weekend after Thanksgiving. The time of year when nostalgia runs high and ancient toy trains come out of the basement or the attic and get set up again until sometime after the new year.

Well, hopefully they make it that long. Here are some tips for getting old Lionel, American Flyer, Marx, and similar electric trains running again.

Things to look for in a flatbed scanner

David Huff asked today about scanners, and I started to reply as a comment but decided it was too long-winded and ought to be a separate discussion.

So, how does one cut through the hype and get a really good scanner for not a lot of money?

The short answer to David’s question is that I like the Canon Canoscan LIDE series. Both my mom and my girlfriend have the LIDE 80 and have been happy with it.

For the long answer to the question, let’s step through several things that I look for when choosing a scanner.

Manufacturer. There are lots of makers of cheap and cheerful scanners out there. Chances are there are some cheap and nasty ones too. Today’s cheap and nasty scanners will be a lot better than 1995’s crop of cheap and nasties, since the PC parallel port was a huge source of incompatibilities back then, but I want a scanner from a company with some experience making scanners and with good chances of still being around in five years.

Driver support. Much is made of this issue, but past track record isn’t much of an indicator of future results. HP and Umax infamously began charging for updated drivers, for example. But at least I could get a driver from HP or Umax, even if it cost money. My Acer scanner is forever tethered to a Windows 98 box because I can’t get a working driver for Windows 2000 or XP for it.

Umax used to have a stellar track record for providing scanner drivers, which was why I started buying and recommending them several years ago. I don’t know what their current policy is, but I know some people have sworn them off because they have charged for drivers, at least for some scanners, in the recent past. But in many cases you can get newer drivers from Umax UK.

But that’s why I like to stick with someone like Canon, HP, Umax, or Epson, who’ve been making scanners for several years and are likely to continue doing so. Even if I have to pay for a driver, I’d rather pay for one than not be able to get one. Keep in mind that you’ll be running Windows XP until at least 2006 anyway.

Optical resolution. Resolution is overrated, like megahertz. It’s what everyone plays up, and it’s also a source of confusion. Sometimes manufacturers play up interpolated resolution or some such nonsense, which is where the scanner fakes it. It’s nice to have, but there are better ways to artificially increase resolution if that’s what you’re after.

Look for hardware or optical resolution. Ignore interpolated resolution.

Back to that overrated comment… Few of us need more than 1200dpi optical resolution. For one thing, until not so long ago, nobody had enough memory to hold a decent-sized 4800dpi image for editing. If you’re scanning images to put them on the Web, remember that computer screen resolution ranges from 75 to 96dpi, generally speaking. Anything more than that just slows download speed. For printing, higher resolution is useful, but there’s little to no point in your scanner having a higher resolution than your printer.
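To put a number on it: a letter-size (8.5 by 11 inch) page scanned at 4800dpi is 40,800 by 52,800 pixels, and at 24 bits per pixel that works out to roughly 6.5 gigabytes of uncompressed image data–far more than any desktop PC of this era can comfortably hold in memory.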

I just did a search, and while I was able to find inkjet printers with a horizontal resolution of up to 5760dpi, I found exactly one printer with a vertical resolution of 2400dpi. The overwhelming majority were 1200dpi max, going up and down.

Your inkjet printer and your glossy magazines use different measurements for printing, but a true 1200dpi is going to be comparable to National Geographic quality. If your photography isn’t up to National Geographic standards, megaresolution isn’t going to help it.

Bit depth. If resolution is the most overrated factor, bit depth is the most underrated. Generally speaking, the greater the bit depth, the more accurate the color reproduction. While even 24 bits gives more colors than the human eye can distinguish, there is a noticeable difference in accuracy between scans done on a 24-bit scanner and scans from a 36-bit scanner.

If you have to choose between resolution and bit depth, go for bit depth every time. Even if you intend to print magazines out of your spare bedroom or basement. After all, if the color on the photograph is off, nobody is going to pay any attention to how clear it is.

Size and weight. Some flatbed scanners are smaller and lighter than a laptop. If they can draw their power from the USB port, so much the better. You might not plan to take one with you, but it’s funny how unplanned things seem to happen.

The HP 4101mfp multifunction device

I set up an HP 4101mfp printer-scanner-fax machine today. My first impressions weren’t good, but once we actually had it working, it worked as advertised.

I’m not about to buy one for home, but if I need a multifunction device in the office (or a client does), I won’t feel too bad about recommending this one.

Troubleshooting a Compaq Proliant 1600

I still work on a lot of Compaq Proliant 1600s. In their day, they were a very versatile server, packing lots of drive bays and open expansion slots into a 5U package. They were also very reliable.

Now that they are five years old or even older, they are less so. But I’ve collected some good suggestions from Compaq and HP technicians about working on them.

The biggest problem with the 1600 is that so many parts are socketed. Over time, socketed components tend to work themselves loose. So, when a 1600 crashes a lot but will pass its built-in diagnostics with flying colors, the best thing to do is to completely disassemble it and put it back together.

If it seems to be having memory problems, don’t just reseat the processor board and/or replace the memory. I had one 1600 exhibit memory failures that would not go away until I replaced the PCI board, of all things. Why? Beats me. The HP technician was as stumped as I was. So reseat that board too.

It never hurts to clean the connectors when you have the system apart. Get some zero-residue contact cleaner from a hardware or auto parts store. Be sure it’s zero-residue. A lot of contact cleaners contain oil, which isn’t going to help intermittent electrical connections at all. If in doubt, skip the contact cleaner entirely and clean the contacts with a cotton swab and rubbing alcohol instead. Need I also mention you need to stay grounded at all times while doing these procedures?

When replacing the PCI and CPU modules, you have to use a lot of force. Don’t rely on the plastic releases on the back to seat them. Whenever I’ve seen a veteran Compaq technician reinstall one of these modules, he’s slammed the module into the back of the computer with so much force that it moved the system. If you don’t think you’re going to break it, you probably aren’t doing it hard enough.

Newer Proliant servers have many fewer socketed components, so their long-term reliability prospects are higher. They also usually have LEDs that indicate failed components, making diagnostics virtually irrelevant and system repair much more straightforward. But when replacement isn’t an option just yet, it’s nice to know there are things to do to return a 1600 to life.

Hard drive upgrade tips for older PCs

A hard drive upgrade is one of the best ways to extend the usable lifespan of a computer.

A lot of people come to this site looking for hard drive upgrade advice, but I realized that it’s been a long time since I’ve written anything about that. Since there are some gotchas, I need to address them.

How to get your RSS/RDF feed working with Mozilla Firefox’s Live Bookmarks

As soon as I upgraded to Mozilla Firefox 1.0, I started noticing that when I visited certain sites that had RSS/RDF feeds, a big orange “RSS” icon showed up in the lower right hand portion of the window.

That’s cool. Click on that, and you can instantly see that site’s current headlines, and know if the site has changed, just by looking in your bookmarks.

Except my site has an RSS feed and that icon didn’t show up. Here’s how I fixed it.

At first I figured Firefox was looking for the standard “XML” icon everyone uses. So I added that. No go.

So I investigated. A Google search didn’t tell me anything useful. So I went to Slashdot’s page and viewed the source. Four lines down, I found my answer.

In your <head> section, you need to add a line. In my case, since I run Geeklog, it was this:

<LINK REL="alternate" TITLE="Silicon Underground RSS" HREF="//dfarq.homeip.net/backend/siliconunderground.rdf" TYPE="application/rss+xml">

Just substitute the URL for your RSS feed for mine. The two slashes at the beginning are necessary, and the whole line is enclosed in angle brackets like any other HTML tag.

But since Geeklog doesn’t have an index.html file, and its index.php file is mostly programming logic, where do you add your code?

In your themes directory, in the file header.thtml, that’s where. I put mine right after the line that indicates the stylesheet.
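So the relevant part of my header.thtml ends up looking roughly like this. The stylesheet line is just a stand-in for whatever your theme already has; the second line is the addition:

<link rel="stylesheet" type="text/css" href="style.css">
<LINK REL="alternate" TITLE="Silicon Underground RSS" HREF="//dfarq.homeip.net/backend/siliconunderground.rdf" TYPE="application/rss+xml">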

The location for other blogging systems will vary, of course. But I notice some seem to do it automatically.

Now your readers can keep track of you without constantly refreshing your page (which they probably won’t do) and without having to run a separate RSS aggregator. Pretty cool, huh?

Any Unix gurus care to help me with mod_rewrite?

I’ve watched my search engine traffic decrease steadily for the past few months since I changed blogging software. It seems most engines don’t care much for the super-long arguments this software passes in its URLs.

The solution is mod_rewrite, and I think my syntax looks correct, but it’s not working for me.

The goal is to fake out search engines to make them think they’re looking at static files. Search engines are reluctant to index database-driven sites for fear of overloading the site. Since I can’t tell them not to worry about it, I have to make the site look like a static site.

To that end, I created a section at the end of my httpd.conf file:

# rewrites for GL

RewriteEngine on
RewriteRule ^/article/([0-9]+)$ /article.php?id=$1 [NC,L]

This line should make the software respond to Thursday’s entry (https://dfarq.homeip.net/article.php?story=20040902200759738) if it’s addressed as https://dfarq.homeip.net/article/20040902200759738.

Once mod_rewrite is working, in theory I can modify the software to generate its links using that format and watch the search engines take more of a liking to me again. But I’ve got to get mod_rewrite going first, and I’m stumped.
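For reference, here’s the variation I’m inclined to try next (a guess on my part, not something I’ve confirmed), since the live URL passes story= rather than id=:

RewriteEngine on
RewriteRule ^/article/([0-9]+)$ /article.php?story=$1 [NC,L]

And if the rules end up in a <Directory> block or an .htaccess file rather than the main server config, the leading slash would have to come off the pattern, since per-directory rewrites match against paths relative to that directory.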

Any expert advice out there?

Thanks in advance.

Thoughts on backups

Backups have weighed heavily on my mind lately. When you have 125 servers to tend to at work, chances are one of them is going to fail eventually. Really what seems to happen is they fail in bunches.

One of my clients has a problem. He’s out of capacity. And that’s gotten me thinking about backups in general.

You see, my client’s golf buddies are telling him nobody backs up to tape anymore. Backing up to disk is the hot thing now. Here’s the theory. Your network is fast, right? Why make it wait on the tape drive? Back up all your servers to disk instead, and they can all back up at once, and hours-long backups take minutes instead, and restores take seconds. And no more paying $3,000 for tape drives and $6,000 for a rotation of tapes for it!

Now here’s the problem. A CIO hears "disk" and he thinks of that 400-gigabyte IDE drive he saw in the Sunday paper sales ad for $129 with a $60 mail-in rebate. (It wasn’t really quite that big, and it wasn’t really quite that cheap, but these things are always better on Monday morning than they were the day before.)

No enterprise bases something as important as backups on a single consumer-grade IDE disk. For one thing, it won’t be fast enough. For another, they’re not designed to be used that heavily, that frequently. An enterprise could get away with something like HP’s $1200 entry-level NAS boxes, which use cheap IDE drives but in a RAID configuration, so that when one of those cheap disks fails, it can limp along for the rest of the night until you swap out the failed drive. The chances of one drive failing are small but too large for comfort; the chances of two drives failing at once are only slightly better than Ronald Reagan winning the Republican primary this year. With Abraham Lincoln as his running mate.

One can set up some very nice backups on a Gigabit Ethernet setup. Gigabit’s theoretical throughput is about 125 megabytes per second, roughly three times the native transfer rate of even a lightning-fast, state-of-the-art 200/400 GB LTO drive, so three servers can stream their backups across the wire at once, each as fast as a dedicated tape drive could take them. Drop in a second NIC, and you can back up six. In reality, the disks in the NAS box can’t come close to keeping up with that rate, but the disk can still back up everything much faster than tape will.

Frankly, with such a setup it becomes practical to back up your most important servers over the lunch hour, to avoid losing half a day’s work.

But you don’t get it for $129.

And in reality, no enterprise in its right mind is throwing out tapes either. If they back up to disk, they spool that backup to tapes the next day, so they can store the tapes offsite for archival and/or disaster recovery purposes.

How important is this? About a year ago, I got a request for a copy of a file as it existed in the middle of a particular week–not the version on our Friday backups, which are archived longer. Even with a tape rotation of 40 tapes, I couldn’t get the file. The tape had been overwritten in the rotation a day or two before.

While rare, these instances can happen. A 40-tape rotation might not be enough to avoid it. Let alone just a couple hundred gigs of disk space.

But what about home?

Consumer tape drives had a terrible reputation, and based on my experience it was largely deserved. The drives had a terrible tendency to break down, and the failure rate of the tapes themselves was high too. The lack of comfort with enterprise-grade tape that I see in my day-to-day work may stem from this.

The last time I was in a consumer electronics store, I don’t think I saw any tape drives.

I suspect most people back their stuff up onto optical discs of some sort, be it CD-R or RW, or some form of writable DVD. The discs are cheap, drives that can read them are plentiful, and if floppies are any indication, the formats ought to still be readable in 20 years. My main concern is that the discs themselves may not be. Cheap optical discs tend to deteriorate rapidly. Even name-brand discs sometimes do. We’ve had great luck with TDK discs ever since Kodak took theirs off the market, but all we can say is that over the course of three years, we haven’t had one fail.

The last time my church’s IT guy called asking about backups, we happened upon a solution: a rotation of USB hard drives. Plug it in, back it up, and take the drive home with you. It’s cheap and elegant. Worried about the reliability of the drives? That’s why you use several. Three’s the minimum; five drives would be better. Use a different drive every day.

It’ll work, and it’s pretty affordable. And since the drives can be opened up and replaced with internal drives, it has the potential for cheap future upgrades.
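A minimal sketch of what each night’s run might look like, assuming the drive of the day mounts at /mnt/usbdrive and the data worth keeping lives under /home and /var/www (all of those paths are examples, not a prescription):

mount /mnt/usbdrive                    # assumes an /etc/fstab entry for the USB drive
tar czf /mnt/usbdrive/backup-$(date +%Y%m%d).tar.gz /home /var/www
umount /mnt/usbdrive                   # unmount before unplugging it and taking it home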

How about the reliability of hard drives? Well, I have a box full of perfectly readable 120-meg drives in my basement. They date from 1991-1993, for the most part. I bought them off eBay in the mid 1990s, intending to put them in computers I would donate to churches. The computers never materialized, so the drives sat. I fire one up every once in a while out of curiosity. The copies of DOS, Windows 3.1, and the DOS Netware client that were on them when I got them are still there.

Some technology writers have observed that modern IDE hard drives seem to have a use-by date; they just seem to have a tendency to drop dead if they sit unused for too long. I see this tendency in a lot of devices that use inexpensive electric motors. Starting them up every once in a while and giving them a workout to keep the lubricants flowing and keep them from turning glue-like seems to be the best way to keep them working.

At this stage, I’m less worried about the long-term viability of hard drives than I am about optical discs. Ask me again in 20 years which one was the better choice, and I’ll be able to answer the question a lot better.

If you’re stuck using optical discs, the best advice I can give is to use a brand of media with a good reputation, such as TDK, make multiple copies, and store them in a cool, dark, dry place. The multiple copies should preferably be stored in different cool, dark, dry places. Light seems to break down optical discs, and cooler temperatures as a general rule slow down chemical reactions. Dryness prevents chemical reactions with water and whatever the water might manage to pick up.