Disk cache vs memory cache

Last Updated on October 3, 2021 by Dave Farquhar

It’s understandable if the concepts of disk cache and memory cache are confusing. The two are similar, but they are not interchangeable. Let’s talk about the concept and how it works.

What caching is

On this old 486 motherboard, the memory cache is the series of chips in the lower right. The main system memory plugs into the white sockets on the top right. This is different from a disk cache but the concept is the same: accelerate a lot of slow memory with a smaller amount of a faster kind of memory.

Caching is the concept of using a small amount of fast and expensive memory to hold the most important contents of something slower, so you can get access to it more quickly.
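The idea can be shown in a few lines of code. Here is a minimal sketch, in Python, of a least-recently-used (LRU) cache sitting in front of a slow store. The names `slow_read`, `cached_read`, and `CACHE_SIZE` are illustrative, not from any real API; real disk and memory caches work at the hardware or operating system level, but the principle is the same.

```python
from collections import OrderedDict

CACHE_SIZE = 4  # the small amount of fast, expensive memory

cache = OrderedDict()  # ordered oldest-to-newest use

def slow_read(block):
    """Stand-in for a slow device read (a disk sector, a line of main memory)."""
    return f"data-{block}"

def cached_read(block):
    if block in cache:
        cache.move_to_end(block)   # fast path: cache hit, mark as recently used
        return cache[block]
    value = slow_read(block)       # slow path: cache miss, go to the slow store
    cache[block] = value
    if len(cache) > CACHE_SIZE:
        cache.popitem(last=False)  # evict the least recently used entry
    return value
```

As long as the blocks you touch most often fit in the small fast store, most reads take the fast path and never touch the slow device.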

This makes it easier to balance price and performance.

In the case of a disk cache, you use RAM to temporarily store the most accessed parts of the disk. You may have terabytes of storage, but RAM is too expensive to use for storage. At the time I’m writing this, a terabyte of SSD storage has a similar cost to just 32 GB of RAM. When you’re talking crazy slow spinning hard drives, the price difference is even greater. The only reason spinning hard drives are tolerable to use is because the drives themselves contain a fair bit of RAM on their PCB to cache their storage. And then the operating system will usually do its own caching on top of that. It’s still orders of magnitude slower than an SSD, but tolerable.

Even in the case of an SSD, a disk cache is helpful. The major difference between a cheap SSD and a premium SSD is the presence of onboard cache. The cache makes the drive faster. You will still do some caching at the operating system level, but no amount of caching from the operating system is enough to make up the difference.

Memory cache

Memory cache works in a similar way. You use a small amount of high-speed RAM to speed up a much larger quantity of slower and cheaper RAM. A quarter century ago, motherboards had sockets for two types of RAM. The main system memory went into SIMM sockets, and the cache went into DIP sockets. Depending on the motherboard, you could populate anywhere from 64 to 512 kilobytes of cache. You would then place anywhere from 4 to 32 megabytes of RAM in the SIMM sockets. And you would pay about the same amount for the main system memory as you paid for the cache. The cache was optional, but the system would run a good 10 to 20% faster if you had the cache installed.

Today, the cache is on the CPU itself. It’s much faster to integrate it into the CPU die. And the biggest difference between a cheap CPU and an expensive one is often how much cache it has. That’s been true for a long time now. But it’s still the case that the memory on the CPU die is much faster and much more expensive than the main system memory.

Why not use more fast memory?

The obvious question about cache is like the old joke about airplane black boxes: if black boxes are indestructible, why don’t they make the whole plane out of that stuff? The answer is cost. The material the black box enclosure is made of is too expensive to build the whole airplane from.

The generally accepted price range for a personal computer is somewhere between $199 and $5,000. Most people think the $5,000 computer is over the top. But as you get higher on the price curve, the major differences are the amount of memory and storage, of course, and the amount of caching the system is able to do. The $199 computer probably doesn’t have any built-in disk cache and minimal memory cache.

The reason the $5,000 computer is over the top for most people is that all this comes at a point of diminishing returns. There is a huge difference in performance between the $199 computer and a $1,000 computer. For general use, the difference between a $1,000 computer and a $2,000 computer is not as great.

With caching, there’s always a point of diminishing returns. The first small amount that you do makes the biggest difference. Doubling it gives some improvement, but it isn’t linear. You will reach a point where doubling the amount of cache yields an almost imperceptible improvement. So it makes sense to stop adding more at the level below that.
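You can see the diminishing returns in a quick simulation. This is an illustrative sketch, not a measurement from any real system: it replays a skewed access pattern, where a few blocks are touched far more often than the rest, against an LRU cache, and measures the hit rate as the cache size doubles.

```python
import random
from collections import OrderedDict

def hit_rate(cache_size, accesses):
    """Fraction of accesses served from an LRU cache of the given size."""
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # hit: mark as recently used
        else:
            cache[block] = True            # miss: bring block into the cache
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(accesses)

random.seed(42)
blocks = list(range(1, 1001))
weights = [1 / b**1.5 for b in blocks]  # skewed popularity: a few hot blocks
trace = random.choices(blocks, weights=weights, k=20000)

for size in (16, 32, 64, 128, 256):
    print(size, round(hit_rate(size, trace), 3))
```

Each doubling of the cache adds less to the hit rate than the doubling before it, because each new chunk of cache only captures progressively colder blocks. That curve is why designers stop adding cache well before the theoretical maximum.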

Technology marches on, so as faster types of memory appear, designers revisit the amount to see what makes sense. Of course the top-end machine will have the maximum amount before you hit diminishing returns. Mid-range and budget models may have smaller amounts: enough to help, but without breaking the bank.

If you found this post informative or helpful, please share it!