SSD myths

SSDs, like most disruptive technologies, face some questions and resistance. People will grasp at any straw to avoid adopting them. Thanks to this resistance, a number of SSD myths arose. Here are the myths I see repeated over and over again, and the truth, based on my experience actually using the things.

Note: I originally wrote this way back in 2010. The drive technologies I speak of as state of the art are rather aged now. But the principles still hold today, and will continue to do so. Hard drives have gotten better, but SSDs have gotten better at a more rapid pace.


Buy, don’t build, enterprise servers

Steve sent me some questionable advice he found online–basically, someone advocating that you build your high-end servers rather than buying them, but admitting that it’s difficult for someone to build a $20,000 server and still be able to afford to maintain the thing.

There’s a solution: Buy it. This is the opposite of the best advice for desktops (although I increasingly tell people to just buy their computers because you don’t really save any money by building), but there are lots of very good reasons for it.

First and foremost is maintainability. The last time something went wrong with one of the HP servers at work, an LED on the front case came on before the problem became critical. Pop open the case, and an internal LED next to the failing component is lit up. Does your off-the-shelf motherboard have that feature? It may or may not.

How does hot-spare memory sound? It’s kind of like RAID. You buy identical DIMMs to put in the system, but you buy one extra one, which goes into a specially designated slot. When a DIMM starts to fail, the system switches over to the hot spare. In the case of the mid-range HP servers, you can even open the case up, remove the failing module, and replace it, without powering down.

Of course you want your server to have RAID and hot-pluggable drives, so a failed disk doesn’t mean downtime. All but the very cheapest commercially built servers have those features from the factory.

But if you really have a budget of $20,000 per server, you shouldn’t even mess around with local storage. Buy some kind of Storage Area Network (SAN) instead. Basically, it’s a large bank of disks that connects to any number of servers. Some use a Fibre Channel connection, while others just use an Ethernet connection. Then you buy disks, slap them in the SAN, and configure the SAN to split the storage up between the servers.

Ever run into a situation where you need 40 gigs of storage, and one server has 10 gigs free and one has 30 gigs free, but there isn’t much of anything you can move around to consolidate that free space? The SAN eliminates that. You can add one monster 300-gig disk to an array and split that storage up however you want. And one hot spare protects the entire array–no more need to buy one hot spare for every server on your network. On a big network (40 servers), that alone can pay for the SAN.
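To make the free-space math concrete, here’s a toy Python sketch of that 40-gig example. It’s purely illustrative; the numbers and the one-spare-per-array assumption come from the scenario above, not from any particular SAN product.

# Toy model of the free-space example above. Illustrative only.

def largest_local_volume(free_per_server):
    # With local disks, a new volume can't be bigger than any one server's free space.
    return max(free_per_server)

def largest_pooled_volume(free_per_server):
    # With a SAN, all the free space sits in one pool and can be carved up freely.
    return sum(free_per_server)

free_gb = [10, 30]   # free space on the two servers from the example
needed_gb = 40       # the volume we want to create

print("Biggest volume using local storage:", largest_local_volume(free_gb), "GB")   # 30 GB
print("Biggest volume using the SAN pool: ", largest_pooled_volume(free_gb), "GB")  # 40 GB
print("Fits locally?", needed_gb <= largest_local_volume(free_gb))    # False
print("Fits on the SAN?", needed_gb <= largest_pooled_volume(free_gb))  # True

# The hot-spare math works the same way: one spare per server-local array
# versus one spare protecting the whole shared array.
servers = 40
print("Hot spares with local RAID on every server:", servers)
print("Hot spares with one shared SAN array:", 1)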

Finally, as far as spare parts go, a company ought to keep a couple of spare hard drives around for the times when a disk in a RAID array or SAN fails. But you put the servers on a maintenance agreement with someone like HP, IBM, or EDS, so that when anything else fails, that company comes out and replaces parts with its inventory. Outsource your server organ donor bank. You’ll save money, not just on the parts themselves, but also on physical storage space.

When I can get all of these features (except for the SAN) in an HP Proliant server that costs about $3,000, there’s no point in my employer wasting time building its own servers.

A RAID array of floppies?

Dan Bowman sent me a link to instructions on setting up FDD RAID on OS X. That’s FDD, not HDD. Floppies.

This most definitely falls into the because-you-can category, if not the for-when-you’re-really-bored category. But it looks like somebody’s finally found a way to make floppies reasonably fast and reliable. So now who’s going to try it with Zips?
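The appeal is the same as any RAID setup: striping multiplies capacity and throughput, mirroring buys redundancy. Here’s a back-of-the-envelope Python sketch; the ~30 KB/s sustained floppy transfer rate is a ballpark assumption, and real RAID overhead and seek times are ignored.

# Rough RAID arithmetic for slow media. Illustrative only; the floppy
# transfer rate is an assumed ballpark figure, not a measurement.

FLOPPY_RATE_KBPS = 30        # assumed sustained transfer rate of one floppy drive
FLOPPY_CAPACITY_KB = 1440    # 1.44 MB disk

def striped(drive_count):
    # RAID 0: capacity and sequential throughput scale with the number of drives.
    return {
        "capacity_kb": drive_count * FLOPPY_CAPACITY_KB,
        "throughput_kbps": drive_count * FLOPPY_RATE_KBPS,
    }

def mirrored(drive_count):
    # RAID 1: capacity stays at one disk, reads can be spread across the copies,
    # and the array survives a drive (or disk) failure.
    return {
        "capacity_kb": FLOPPY_CAPACITY_KB,
        "read_kbps": drive_count * FLOPPY_RATE_KBPS,
        "write_kbps": FLOPPY_RATE_KBPS,
    }

print("Four floppies, striped: ", striped(4))
print("Four floppies, mirrored:", mirrored(4))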