RAID 101

Last Updated on July 6, 2022 by Dave Farquhar

If you need an introductory course on RAID, here’s RAID 101 for you. RAID was once used exclusively in corporate servers and by performance enthusiasts, but hard drives and RAID chipsets are so cheap now that RAID is showing up in consumer PCs. That’s good–as long as you set it up carefully.

RAID is an acronym for redundant array of independent (or inexpensive) disks. In simpler terms, it takes two or more hard drives and makes them look like a single drive, to increase performance, improve reliability, or both.

There are tons of flavors of RAID, but most RAID adapters support the four most popular:

RAID 0: Writes data in “stripes” alternating across two or more disks with no redundancy, so it’s not true RAID. This gives outstanding performance, but if you lose a disk, you lose the whole array. I don’t recommend RAID 0 unless you’re vigilant about backups, or unless you’re using your RAID 0 setup for temporary data, like holding your source files for a large video or photo editing project.

With RAID 0, you get to use the full capacity of your drives. If you RAID 0 two 1 GB drives together, you get 2 GB of storage total.
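
To make “striping” concrete, here’s a toy Python sketch of the idea. It’s purely illustrative, and the details (a 4-byte stripe unit, in-memory “disks”) are made up for the demo; real controllers stripe in chunks more like 64 KB and do the work in hardware:

```python
# Toy RAID 0: deal fixed-size chunks of data out round-robin across the
# member disks. Illustrative only; the stripe size is made up for the demo.
STRIPE = 4  # bytes per stripe unit (real arrays use something like 64 KB)

def raid0_write(data: bytes, num_disks: int = 2) -> list[bytearray]:
    """Alternate stripe-sized chunks of data across the disks."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), STRIPE):
        disks[(i // STRIPE) % num_disks] += data[i:i + STRIPE]
    return disks

def raid0_read(disks: list[bytearray]) -> bytes:
    """Interleave the stripes back into the original byte stream."""
    out, offsets, d = bytearray(), [0] * len(disks), 0
    while offsets[d] < len(disks[d]):
        out += disks[d][offsets[d]:offsets[d] + STRIPE]
        offsets[d] += STRIPE
        d = (d + 1) % len(disks)
    return bytes(out)

disks = raid0_write(b"The quick brown fox jumps over the lazy dog")
assert raid0_read(disks) == b"The quick brown fox jumps over the lazy dog"
print([bytes(d) for d in disks])  # each disk holds only every other chunk
```

Notice that either “disk” by itself holds only every other chunk, which is why losing one drive loses everything.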

RAID 1: Mirrors data between two drives. If one drive dies, the system just uses the survivor until you replace the failed drive. Write performance degrades slightly. Read performance isn’t quite as good as RAID 0, but if you’re doing something important and your antivirus scan kicks in, one drive can handle your work while the other services the virus scan. So a RAID 1 array is still nicer for everyday use than a single drive.

If you RAID 1 two 1 GB drives, you get 1 GB of storage.
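
Mirroring is even simpler to sketch. Here’s a minimal toy version, again just an illustration (the class and its names are mine, not any real API): every write lands on each healthy disk, and a read can be served by whichever disk survives.

```python
# Toy RAID 1 mirror: writes go to every healthy member, reads come from
# any survivor. Illustrative only.
class Mirror:
    def __init__(self, members: int = 2):
        self.disks = [bytearray() for _ in range(members)]
        self.alive = [True] * members

    def write(self, data: bytes) -> None:
        for disk, ok in zip(self.disks, self.alive):
            if ok:
                disk += data  # identical copy lands on each healthy disk

    def read(self) -> bytes:
        for disk, ok in zip(self.disks, self.alive):
            if ok:
                return bytes(disk)  # any surviving copy is complete
        raise IOError("all mirror members have failed")

m = Mirror()
m.write(b"important data")
m.alive[0] = False                    # simulate one drive dying
assert m.read() == b"important data"  # the survivor carries on
```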

RAID 5: Stripes data across three or more drives, devoting one drive’s worth of capacity to parity information (the parity is actually distributed across all the drives, not parked on a single disk). Think of it like RAID 0 with redundancy: the array survives the loss of any one drive. It’s not quite as fast as RAID 0 due to the need to calculate parity, but in practice you rarely if ever notice.

If you RAID 5 three 1 GB drives, you get 2 GB of storage. If you use four drives, you get 3 GB. And so on.
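
The parity calculation is just a byte-wise XOR, and it’s easy to demonstrate. Here’s a minimal sketch; note that it keeps all the parity in one place, which is strictly RAID 4, while real RAID 5 rotates parity across the drives, but the arithmetic is identical:

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length stripes."""
    return bytes(x ^ y for x, y in zip(a, b))

data_disks = [b"AAAA", b"BBBB", b"CCCC"]  # three data stripes
parity = reduce(xor, data_disks)          # the parity stripe

# Lose any one stripe, and XORing the survivors with the parity
# reproduces it exactly:
lost = data_disks.pop(1)                  # the second drive dies
rebuilt = reduce(xor, data_disks + [parity])
assert rebuilt == lost == b"BBBB"
```

That reconstruction is also why a degraded RAID 5 array gets slow: every read that touches the dead drive means reading all the survivors and XORing them together.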

RAID 10: This is a combination of RAID 1 and RAID 0, sometimes called a “stripe of mirrors”: drives are mirrored in pairs, and data is striped across the pairs. In some cases, RAID 10 can be faster than RAID 5. RAID 10 requires a minimum of four drives.

If you RAID 10 four 1 GB drives, you get 2 GB of storage, due to mirroring. Just like RAID 1, you end up with half the total capacity.

JBOD: An acronym for “just a bunch of disks.” Most RAID controllers support this as well. It just takes multiple disks and mashes their capacity together, with no redundancy. It’s useful in a pinch, but just like RAID 0, you have to be vigilant with backups, since the loss of one drive in the array means you’ll lose the array.
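
All of the capacity rules above boil down to simple arithmetic. Here’s a small helper that reproduces the numbers from the examples (the function name and the units are just for this sketch):

```python
def usable_capacity(level: str, sizes: list[float]) -> float:
    """Usable space for the levels above, in whatever unit sizes uses.
    Except for JBOD, every member counts as the smallest drive."""
    n, smallest = len(sizes), min(sizes)
    if level == "JBOD":
        return sum(sizes)           # plain concatenation, no redundancy
    if level == "RAID0":
        return n * smallest         # full capacity, no redundancy
    if level == "RAID1":
        return smallest             # everything is duplicated
    if level == "RAID5":
        return (n - 1) * smallest   # one drive's worth goes to parity
    if level == "RAID10":
        return (n // 2) * smallest  # half is lost to mirroring
    raise ValueError(f"unknown level: {level}")

assert usable_capacity("RAID0", [1, 1]) == 2        # the examples above
assert usable_capacity("RAID5", [1, 1, 1]) == 2
assert usable_capacity("RAID10", [1, 1, 1, 1]) == 2
```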

In practice, RAID 1 usually makes the most sense for home use. It gets you redundancy, and it only requires two drives to do it, so even if your computer case only has two 3.5″ drive bays, you can still do it. It gives you a nifty performance boost, and your computer doesn’t come to a screeching halt if a drive fails, so what’s not to like?

Let’s talk about life with RAID 1.

Breaking the mirror. RAID 1’s redundancy can be nice for those instances where you’re making a change that might affect the system. Power down, pull the plug on one of the drives or remove it entirely, then power back on. Ignore the warnings about a failed drive and degraded array, and go about your risky business. If it works, power down, plug the second drive back in, and let the array rebuild.

If the change fails and you want to revert, power down, unplug the drive you changed, and plug the untouched drive back in. Power on and make sure everything works. Then power down, reconnect the other drive, and let the array rebuild from the good copy.

Dissimilar drives. I’ve said this before, but it’s worth repeating: if you’re building your own system, mismatching the drives slightly is a good idea. Identical drives are more likely to fail at about the same time, and it defeats the purpose of RAID if the second drive fails before you get a chance to replace the first. If you want to be really safe, you have two options. The one I prefer is to use two similar drives (same RPM, same buffer size) made by different manufacturers.

The other option, if you’re afraid the RAID will get confused by one drive finishing a few nanoseconds faster than the other–a problem I hear about in theory all the time, but have never seen in practice–is to buy two of the same drive, manufactured at different times. If you’re buying the drives at retail, check serial numbers and buy two as far apart as possible. Or buy one at retail and one via mail order. Or buy from two different mail order outlets.

Mixed sizes. You can mix drive sizes in a RAID array, but you’ll only use up to the capacity of the smaller drive. RAID together a 1 GB drive and a 2 GB drive, and it will treat the 2 GB drive as another 1 GB drive.
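
To put numbers on that example (a hypothetical 1 GB and 2 GB pair, units in GB):

```python
# Mixed sizes: every non-JBOD level treats each member as the smallest drive.
sizes = [1, 2]
raid1_usable = min(sizes)               # 1 GB: a mirror at the smaller size
raid0_usable = len(sizes) * min(sizes)  # 2 GB: both striped at 1 GB each
idle = sum(sizes) - 2 * min(sizes)      # 1 GB of the big drive goes unused
assert (raid1_usable, raid0_usable, idle) == (1, 2, 1)
```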

Upgrading an array. The above raises the question of whether you can upgrade an array by replacing one drive with a larger drive, letting the array rebuild, then replacing the other drive. Some controllers allow you to do this, but the procedure varies. Unfortunately, finding out how to do it with whatever hardware you have will probably take some digging.

Initialization and rebuilding. The last thing to keep in mind is that initializing an array, particularly a large array, takes some time. Several hours, in most cases.

You can use the system while an array is initializing or rebuilding, but every time I’ve asked a field technician whether they would do that on their own system, they stammer and eventually say no.

Need more on RAID? Here’s a Q&A with readers.


4 thoughts on “RAID 101”

  • November 30, 2010 at 9:50 am

    Back in the Very Old Days, “JBOD” was really a short way of saying “use my RAID controller as a SCSI controller.” On the Alpha I still (doh!) have to maintain, you can’t combine JBOD drives into anything; they just show up as logical RAID 0 drives made of one physical device per logical drive.

    Linux software RAID has a mode like what you’re describing as JBOD here; not striped, just capacity dumped together. But usually you have to have some tight integration between the byte-level layer and the block layer and (sometimes) the filesystem driver to pull that sort of thing off.

    • November 30, 2010 at 8:10 pm

      I don’t know if all contemporary enterprise-grade RAID adapters have that mode, but you see it in the consumer space. I can’t think of a good reason to use true, modern JBOD mode, which probably has something to do with why Linux makes it difficult. Why spend time making something easy that doesn’t make sense to use?

      I think Windows server variants still have JBOD capability (it’s been 10 years since I cared enough to look).

      It’s there because it’s easy to implement, but the only good reason to use it is as a temporary solution when you need a bunch of storage right now for a content creation session, and all you have is a handful of disks that collectively have adequate space. So it makes some sense for people who live in Adobe products, but definitely not in the server room.

      And my condolences on that Alpha still hanging around. Great machine for its day. But now it’s just a cantankerous teenager, only there’s no growing out of it. We had a couple of those still lingering at my most recent job. Getting parts when they failed wasn’t easy.

  • December 1, 2010 at 5:25 pm

    Don’t know about Windows 7, but XP Pro and server variants have a software RAID solution called Dynamic Disks. On XP Pro, you can stripe, but not mirror. I think you can do both on server variants. (If you want to hack a couple of lines of code in three files, I think you can mirror in XP, too.)

    It’s probably worth noting that one needs to install WinXP (I don’t know about Win 7) with the hard disks set to RAID, not AHCI or Native IDE mode, even if you’re using only one disk for the moment. XP will Blue Screen if you change that setting after Windows is installed.

    Having redundancy is great, but I’m still going to make backups!

    Thanks for the help, Dave.

    • December 12, 2010 at 11:44 am

      Jim wanted to add a clarification. Once you install Windows with the drive set to RAID in the BIOS, there’s no turning back. Changing the BIOS to a different setting afterward will cause Windows to malfunction.

      Safest bet is to use the RAID function from the get-go, even if you’re only using one drive.

      And Jim adds the following (7/11/2011):

      8 months ago I built a new system based on the GIGABYTE GA-P55A-UD4P.

      My main data store was a RAID01 volume of 2 Seagate ST310000528AS drives.

      Today, the RAID BIOS configurator flashed an error message on reboot prior to OS load, saying one of the drives had gone ferklempt.

      Rather than mess around in the RAID BIOS, I continued to boot into Windows XP SP3 and opened the Intel RAID application.

      Since I’d recently made a backup to DVD, I told it to do its thing. I wasn’t sure if the drive had failed, or if somehow the RAID had been messed up, but since I had a backup, I figured “what the hell”.

      A few or three hours later, the RAID had been rebuilt.

      All now appears to be working normally, if not faster than before. I got to wondering why it seemed faster, so I opened the Intel RAID application, and it tells me both drives are now transferring data at 3 Gb/s.

      I’m certain it was at 1 (or 1.5?) Gb/s before the rebuild.

      A wee datum for anyone interested….
