Here’s a good question I heard the other day: What’s the difference between a CPU and a core, or between the number of CPUs and the number of cores in a system? The distinction between CPU and core, or core and processor, turns out to be subtle.
As far as the operating system is concerned, there is no difference. For you, there might be, and I’ll explain why.
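You can see the operating system’s flat view from a couple of lines of Python (a minimal sketch; the number you get depends on your machine):

```python
import os

# The scheduler sees one flat pool of logical processors. Separate
# packages, cores on a single die, and hyperthreaded siblings all
# land in this same count -- the OS doesn't distinguish them.
logical_cpus = os.cpu_count()
print(f"Logical processors visible to the OS: {logical_cpus}")
```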
Multiple processors in the old days
Back in the bad old days, when you wanted a multiprocessor system, you had to buy a motherboard with multiple sockets, because every CPU chip you bought had one and only one processor core in it. I was pretty hot stuff in 1999 with my Abit BP6 motherboard with a pair of 500 MHz Celeron CPUs on it, let me tell you. No, really, for its time it was a very nice system.
And hardware enthusiasts knew they could buy two slower CPUs and get about 75% of the performance they would from a single faster CPU. My dual 500 MHz CPUs couldn’t keep up with a single 1 GHz CPU, but the pair cost a lot less. In fact, it cost less than a single 750 MHz CPU.
But Intel didn’t want to sell multi-CPU systems. They wanted to sell clock speed. Ever-increasing clock speeds were how they stomped Cyrix to within an inch of its life and kept AMD on the ropes. Intel tolerated enthusiasts building multi-CPU systems, but the target audience really was servers and workstations costing thousands of dollars.
It wasn’t really a problem for Intel though. Back in the days of the Abit BP6, you had to run Windows NT or Windows 2000 or Linux to use more than one CPU. Most people still ran Windows 98 or some variant of it.
The shift to cores
But around 2005 Intel hit the wall and realized they weren’t going to be able to scale clock rates into the 5-6 GHz range, so instead they started putting multiple CPU cores on a single chip to get higher performance. AMD followed suit. By then Windows XP was the mainstream operating system, and it, like nearly everything else people were running by then, scaled happily to multiple cores, so the change worked out well for everyone. Having more than one CPU went from being something weird people did to something everyone did.
Today, single-core CPUs are extinct, at least as far as mainstream CPUs go. Even the budget laptops that sell for $149 around the holidays sport two CPU cores. When Intel and AMD introduce a line of processors, they just vary the clock speed, the CPU core count, and the amount of cache memory on the chip. The cheapest CPU gets the least of each that’s worth selling. The fastest CPU gets the most of each that Intel or AMD can make reliably.
Processor vs core: The distinction
Today I can buy a 2-core CPU for $50 and it will blow the doors off my old dual Celeron setup even though it costs 1/3 as much as one of those chips cost me back in 1999. It’s still possible to buy a motherboard with multiple sockets, but there’s little reason to do so for a desktop PC. Someone with a multi-socket system might say their system has two CPUs, but they really mean they have two processor packages. They’re more likely to tell you how many cores they have total, since the main reason to have multiple CPU sockets is to get more cores.
There’s a caveat with systems with multiple CPU sockets. The CPUs have to be capable of working together in a multi-socket configuration, and not all can. The CPUs also have to match exactly. This is why you frequently find people listing Xeon processors on eBay in matched pairs. Running two very similar CPUs isn’t supported. It may appear to work under some circumstances, but the risk of malfunction is much higher.
The licensing conundrum
Packing multiple cores into a CPU die caused a problem with servers. Some applications, notably Oracle, are licensed by the CPU. This is how Oracle charges more for a big 6U production server than for a cruddy little test server sitting under a developer’s desk.
Since large servers can and often do sport multiple sockets, Oracle relented and started licensing per CPU package, rather than per CPU core.
Hyperthreading: When a core isn’t a core
Intel CPUs also have a feature called hyperthreading. When enabled, hyperthreading doubles the number of cores in the CPU package, at least as far as the operating system is concerned. But these virtual cores don’t perform quite as fast as true physical cores do. In some cases they can even slow performance down, so hyperthreading has a bit of a stigma. But hyperthreading has been around long enough that most applications are aware of it now and are coded to at least avoid the slowdown penalty, if not take advantage of the extra threads.
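You can observe the doubling yourself by comparing the logical count the OS reports against a best-effort physical count. This sketch is Linux-only; it parses /proc/cpuinfo, which doesn’t exist on macOS or Windows, and it assumes the file’s usual x86 layout with "physical id" and "core id" fields:

```python
import os

def physical_core_count():
    """Best-effort physical core count from /proc/cpuinfo (Linux only).

    Counts unique (physical id, core id) pairs; returns None on
    platforms without /proc/cpuinfo or when those fields are absent.
    """
    try:
        with open("/proc/cpuinfo") as f:
            text = f.read()
    except OSError:
        return None
    cores = set()
    # Each blank-line-separated block describes one logical processor.
    for block in text.split("\n\n"):
        phys_id = core_id = None
        for line in block.splitlines():
            key, _, value = line.partition(":")
            if key.strip() == "physical id":
                phys_id = value.strip()
            elif key.strip() == "core id":
                core_id = value.strip()
        if phys_id is not None and core_id is not None:
            cores.add((phys_id, core_id))
    return len(cores) or None

logical = os.cpu_count()          # what the OS schedules against
physical = physical_core_count()  # the hardware underneath, if detectable
# With hyperthreading enabled, logical is typically 2 x physical.
```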
Intel’s budget CPUs don’t get hyperthreading, but Intel includes it in its high-end CPUs to give them an additional performance boost.
CPU vs core: The future
Intel CPUs can run at 4 GHz for short periods of time, but that clock speed really represents something of a wall. And even though clock speeds are increasing, they aren’t increasing at the rate they once did. There were a couple of periods in history where clock speed tripled every five years or so, and in the late 1990s and early 2000s it increased at a faster rate than that.
Today, clock speeds creep up slowly, rather than multiplying. Software developers have adjusted their algorithms to scale across CPU cores since they can no longer count on the speed of an individual CPU core increasing at historical rates. When multi-core CPUs first appeared, analysts groused that the third and fourth cores spent a lot of time sitting idle, looking for work to do. That’s changing, especially with games. This makes sense, since gamers historically bought costlier CPUs than people who just wanted to run Microsoft Office.
It used to be that when you wanted a faster computer, you bought a higher clock speed. Today, the fastest i5 might be clocked higher than the slowest i7, but the i7 will be the faster chip because it has more cores and probably more cache.
Today, Intel and AMD give chips model numbers to help you figure out which of two chips is faster.
Multiple CPU cores make virtualization, or turning one PC into more than one PC, much nicer. Each virtual PC can get its own dedicated core if you have enough of them. This bodes well for the future, since virtualization isn’t just a datacenter thing anymore. Virtualization allows you to isolate programs from one another, or even different tasks within the same program. This is a real boon for security: it makes it much harder for someone to use a virus-laden Word document to steal a password from your web browser, for example. As security becomes more important, we’ll see more and more virtualization. And that will make multiple cores more important.
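Under the hood, "dedicating" a core is CPU affinity. Here’s a hedged, Linux-only sketch using os.sched_setaffinity (the call doesn’t exist on macOS or Windows, so the helper degrades to a no-op there; a real hypervisor does far more than this per virtual CPU):

```python
import os

def pin_to_core(pid, core):
    """Restrict a process to a single CPU core (Linux-only sketch).

    Returns True on success, False where the platform exposes no
    affinity call (macOS, Windows).
    """
    if not hasattr(os, "sched_setaffinity"):
        return False
    os.sched_setaffinity(pid, {core})
    return True

if hasattr(os, "sched_getaffinity"):
    # Pick a core this process is actually allowed to run on, then
    # pin ourselves to it -- roughly what a hypervisor does per vCPU.
    core = min(os.sched_getaffinity(0))  # pid 0 means "this process"
    pinned = pin_to_core(0, core)
else:
    pinned = False
```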
For now, it’s mostly governments and corporations dealing with extremely sensitive data who use this kind of technology. But advanced technology like this usually goes mainstream after a while. We don’t normally think of multiple CPU cores as a security feature, but multiple cores make security and its associated performance hits much more tolerable, at the very least. All this virtualization comes with some overhead, after all.