RAID, in case you are not aware, is an acronym for redundant array of inexpensive disks (or, in later usage, independent disks). It combines multiple disk drives into a single logical drive for the purpose of improving performance, improving reliability, or both. There are different flavors of RAID, depending on the capability you need. The most common types are RAID 0, 1, and 5. You can also nest them for extremely high performance applications; these combinations go by names like RAID 10 or RAID 5+1. These days, this is frequently handled in software. So what is the purpose of RAID adapters?
The purpose of RAID adapters
Even today, though they are far less common than they used to be, people still sometimes use RAID adapters. The usual reason is to run a more complex RAID configuration, such as RAID 10 or RAID 5+1. RAID 10, also known as a striped mirror, is a fairly complex operation, spreading reads and writes out over a minimum of 4 disks, frequently more. The whole reason for doing this is performance, and for this type of operation, dedicated hardware is faster than software, so you need a RAID adapter to accommodate it.
But for simple RAID, such as RAID 0 or RAID 1, software implementations are straightforward and typically perform well. Operating systems have had RAID 0 or RAID 1 capability built right in for a long time. So there’s not a lot of need for a dedicated hardware RAID adapter for those types of implementations. The increased performance doesn’t scale with the cost.
But there was a time when that wasn’t the case, so you would find hardware RAID adapters, with varying types of interfaces, depending on the audience. When you are looking at old computers, servers frequently had RAID adapters for reliability purposes. These typically had connectors for about five drives, and you could configure them for any number of RAID options, depending on your storage budget and the level of performance you needed.
The arrival of software RAID also means people can do silly things, like build RAID arrays of floppies.
Types of RAID arrays
The types of RAID arrays used to be a common job interview question, and I sure remember a ton of RAID questions on my CISSP practice exams. There’s a good reason for that. If you get RAID wrong, catastrophic things happen. And here’s a trick you won’t find on any test: use dissimilar drives in your RAID arrays. Identical drives from the same manufacturing batch tend to wear at about the same rate, so they’re more likely to fail close together, possibly before you can finish rebuilding the array.
- RAID 0 stripes writes across disks. There is no redundancy at all, so this is purely for performance, at the expense of reliability
- RAID 1 mirrors writes between two disks, giving a slight increase in performance but much greater reliability, at the expense of storage capacity
- RAID 5 requires three or more disks and stripes writes across all of them. It distributes parity among the disks, costing the equivalent of one disk’s capacity, so if any single drive fails, it can rebuild the array from the other disks
- RAID 10 mirrors two striped arrays, providing speed and redundancy. This was very popular for database drives before the dawn of SANs and, later, SSDs
- RAID 5+1 mirrors two RAID5 arrays, providing a little bit more redundancy than RAID 10, at the expense of overhead
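The parity trick behind RAID 5 is easy to demonstrate. The sketch below is plain Python, not tied to any real RAID implementation: it treats a few byte strings as blocks striped across simulated disks, computes XOR parity, then “fails” one disk and rebuilds its contents from the survivors.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together; this is how RAID 5 parity works."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data blocks, one per simulated disk, plus a parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing disk 1: XOR the surviving blocks with the parity block
# to reconstruct the missing data.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)

assert rebuilt == data[1]
print("rebuilt block:", rebuilt)  # prints: rebuilt block: b'BBBB'
```

The same XOR property is why losing a second drive before the rebuild finishes is fatal: with two blocks missing, the parity equation no longer has a unique solution.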
There are other types of RAID, but they are mostly academic. The trick is to remember which types provide redundancy, and to know, given two different types, which one has the more overhead and which the less in terms of usable storage space.
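That overhead comparison is easy to put in numbers. Here’s a quick sketch (the function and its assumptions are mine, not from any standard) of usable capacity for identical drives under each level discussed above:

```python
def usable_capacity(level, drives, size_tb):
    """Usable space for common RAID levels, assuming identical drives.
    Minimums: RAID 0/1 need 2 drives, RAID 5 needs 3,
    RAID 10 needs 4, RAID 5+1 needs 6."""
    if level == "0":
        return drives * size_tb        # no redundancy, all space usable
    if level == "1":
        return size_tb                 # a two-disk mirror: one drive's worth
    if level == "5":
        return (drives - 1) * size_tb  # one drive's worth lost to parity
    if level == "10":
        return (drives // 2) * size_tb # half the drives are mirrors
    if level == "5+1":
        half = drives // 2             # RAID 5 on half the drives, mirrored
        return (half - 1) * size_tb
    raise ValueError(f"unknown RAID level: {level}")

for level, n in [("0", 4), ("1", 2), ("5", 4), ("10", 4), ("5+1", 8)]:
    print(f"RAID {level:>3}, {n} x 1 TB drives: {usable_capacity(level, n, 1)} TB usable")
```

Running it shows the trade-off at a glance: four 1 TB drives give you 4 TB at RAID 0, 3 TB at RAID 5, and 2 TB at RAID 10, while RAID 5+1 on eight drives yields only 3 TB.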
Who used RAID?
High-end gaming rigs sometimes came with some type of RAID adapter, usually for the purpose of performance. At first these were dedicated PCI cards, but later they came integrated onto the motherboard. Most people couldn’t afford the four or more separate drives needed to do a striped mirror correctly, let alone have enough room in a case for that many hard drives, so these types of RAID typically were limited to RAID 0 or RAID 1.
Over time, RAID became less important, in the enterprise at least. When I started my career in the ’90s, all of our serious servers, especially file servers, had some form of RAID. Around 2002, a sales engineer from Compaq told us about something called a storage area network, or SAN. The place I was working at couldn’t begin to afford one of those at the time, but once I started a new role at a serious enterprise in 2005, we had at least one SAN at every data center.
The upfront cost was significant. But if you had enough volume, it was extremely cost effective, because you didn’t have to dedicate five full drives to a single server. You just loaded up the SAN, then presented whatever amount of space you needed to each server on the network. And all of the I/O was distributed across a much larger array of disks, so the performance was really good.
So that’s another reason people don’t talk about RAID nearly as much as they used to. It makes a lot more sense to buy a SAN, then buy smaller servers, and provision the storage out to those smaller servers as needed. It lets you pack a lot more computing into a smaller space.
RAID is still important. It’s just implemented by the SAN now, or on a smaller network by a NAS. RAID in client systems has become uncommon; if you’re storing enough data to call for it you’re probably better off moving it to a NAS. If you’re only storing a smaller amount of data, cloud backup meets the needs of many users.
My understanding is that hardware RAID adapters have fallen out of favor for disaster recovery reasons (hashtag irony). If your hardware RAID controller fails and you have to replace it with another controller that isn’t the exact same model or brand, you probably can’t migrate your existing disks over to it. Without a backup replacement sitting on the shelf just in case, you could lose all your data. Whereas if you’re using a Linux system with software RAID, as long as the kernel can see the disks, regardless of the disk controller situation, it can work with the on-disk RAID format.
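As an illustration of that portability, bringing a Linux software RAID array back up after swapping controllers is typically just a matter of letting mdadm find the metadata on the disks themselves (the device names below are examples):

```shell
# Scan attached disks for md superblocks and assemble any arrays found;
# the RAID metadata lives on the disks, not on a controller.
mdadm --assemble --scan

# Confirm the array came up and check its health.
cat /proc/mdstat
mdadm --detail /dev/md0   # /dev/md0 is an example array name
```

This is an ops sketch, not something to run blindly: it assumes the array was created with mdadm and that the new controller exposes the member disks as ordinary block devices.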