Disadvantages of the 8086 microprocessor

Last Updated on August 11, 2017 by Dave Farquhar

An 8086-series microprocessor, the 8088, powered the original IBM PC. Its direct descendants power PCs to this day. Not only that, they power modern Macs too. That has always been controversial, especially the decision to run Mac OS on Intel chips. Why? What are the disadvantages of the 8086 microprocessor?

Segmented memory

This M1810VM86 is a Soviet clone of the Intel 8086 CPU.

The biggest disadvantage of the 8086 microprocessor is its memory model. Intel set up the 8086 to address memory in 64K segments rather than one big, flat address space. This made the 8086 much more difficult to program than it needed to be. And @kuiash, an 8086 programmer, mentioned on Twitter that memory was aliased multiple ways, meaning the same physical address could be reached through many different segment:offset combinations, which made things even more difficult.
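To make the segment arithmetic and the aliasing concrete, here's a minimal sketch in C of how a real-mode 8086 turns a segment:offset pair into a 20-bit physical address. The phys_addr helper is purely illustrative, not part of any real API.

```c
#include <stdio.h>
#include <stdint.h>

/* 8086 real-mode address translation: the 16-bit segment is shifted left
 * four bits (multiplied by 16) and added to the 16-bit offset, producing
 * a 20-bit physical address. phys_addr() is an illustrative helper only. */
static uint32_t phys_addr(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;
}

int main(void)
{
    /* Two different segment:offset pairs alias the same physical byte. */
    printf("1234:0005 -> %05X\n", (unsigned)phys_addr(0x1234, 0x0005)); /* 12345 */
    printf("1000:2345 -> %05X\n", (unsigned)phys_addr(0x1000, 0x2345)); /* 12345 */
    return 0;
}
```

Both lines print 12345, which is exactly the aliasing problem: there is no single canonical name for a given byte of memory.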

And yet, in the context of 1981, the decision made sense. Intel's earlier 8-bit 8080 and 8085 CPUs powered a popular operating system called CP/M. The 8086 couldn't run CP/M directly, but its segmented design made it fairly easy to port existing CP/M software to 8086-based machines, including the then-new IBM PC. Making it easy to move existing software to the IBM PC increased its chances of success. The tradeoff of being able to move old software over, even if it made writing new software harder, seemed worth it.

In retrospect, what IBM really needed was a killer app. But the availability of hastily ported software bought time until Lotus 1-2-3 came along. That’s why IBM made a decision that seems shortsighted today, 36 years later. But IBM wasn’t thinking about 2017 when it made the decision to use the 8088. It was thinking about getting through a couple of years in a brutal new competitive computer market. The best technical decision isn’t always the best business decision. It’s possible that if IBM hadn’t made this mistake, it wouldn’t have survived in the market long enough to make its bigger mistakes.

Modern operating systems running on x86 CPUs use a flat memory model, so this is no longer an issue.

No virtual memory

The other common criticism of the 8086 was its lack of virtual memory support. But again, let’s look at this in context. In 1980, a half megabyte of RAM cost $3,200. The IBM PC originally shipped in configurations with 16K and 64K of RAM to keep it somewhat affordable. Nobody was thinking about virtual memory at a time when 128K was a lot of memory. Besides, few people had hard drives in 1980 either. It wasn’t like they were going to use floppy disks for virtual memory.

Lotus, Intel, and Microsoft devised a creative way to get above the 8086's 1-megabyte limit, called expanded memory or EMS. It worked by bank-switching 16K pages of extra memory into a 64K window below the 1-megabyte mark. It was a hack job, but it was reasonably fast and helped work around one of the 8086's biggest weaknesses.
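The bank-switching idea is easier to see with a sketch. The following is a conceptual simulation in plain C, not real EMS driver code: the names (ems_pool, page_frame, ems_map) are mine, and a pointer swap stands in for the hardware remapping that the memory board and EMS driver actually performed.

```c
#include <stdio.h>
#include <stdint.h>

/* A rough, simulated model of EMS-style bank switching in plain C.
 * Real EMS mapped 16 KB pages of expanded memory into a 64 KB "page
 * frame" below the 1 MB mark; here a pointer swap stands in for the
 * remapping. All names below are illustrative only. */

#define PAGE_SIZE   (16 * 1024)   /* EMS page size: 16 KB                 */
#define FRAME_SLOTS 4             /* 64 KB page frame = 4 slots           */
#define POOL_PAGES  64            /* pretend 1 MB of expanded memory      */

static uint8_t  ems_pool[POOL_PAGES][PAGE_SIZE]; /* memory "beyond" 1 MB  */
static uint8_t *page_frame[FRAME_SLOTS];         /* what the program sees */

/* Map logical page 'page' into slot 'slot' of the page frame. */
static void ems_map(int slot, int page)
{
    page_frame[slot] = ems_pool[page];
}

int main(void)
{
    ems_map(0, 42);             /* bring page 42 into view               */
    page_frame[0][0] = 0xAB;    /* write through the window              */

    ems_map(0, 7);              /* switch the window to page 7           */
    ems_map(1, 42);             /* page 42 is still there, via slot 1    */
    printf("byte saved in page 42: 0x%02X\n", page_frame[1][0]);
    return 0;
}
```

The program only ever "sees" the small window, but by remapping it on the fly, it can work with far more memory than the window itself holds. That's the whole trick.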

Lack of virtual memory on 8086- and 8088-based machines became a problem by around 1990. But if you'd asked anyone involved with the IBM PC what they envisioned the standard computer of 1990 would be, they wouldn't have expected the IBM PC to be it. And had they designed the IBM PC with 1990 in mind, they would have ended up designing something like the Apple Lisa: too expensive and too far ahead of its time to be commercially successful.

Intel fixed this shortcoming in 1985 with its 80386 CPU. Truthfully, it wasn't until 1990, with the release of Windows 3.0, that you started to see mainstream software that actually made use of the 386's capabilities. So it's hard to say Intel was late to the party on this one. The market punishes you for being too late but doesn't reward you for being early. Intel's timing, whether planned and deliberate or accidental, is one reason it survived.

Registers

CPUs have a small amount of built-in memory called registers. Access to registers is very fast and convenient, and having lots of registers makes a chip easy to program. The 8086 had 14 of them, but only four were general purpose. The others were dedicated to specific uses. The 68000 had eight general purpose registers and 19 total. And our buddy @kuiash chimed in on Twitter that some registers had faster access than others. Figures.

The 6502 had only four, and three of those were general purpose. That lack of registers was what drove Microsoft's dislike of the 6502. The 8086 wasn't the worst chip on the market in this regard, but it could have been better.

Lack of registers isn’t a showstopper. A clever programmer can get around the issue, but at the expense of the code being harder to understand. Most development today happens in higher-level languages, which makes this less of an issue today. But modern x86 CPUs have more registers than the original 8086 did anyway.

Efficiency

The 8086 CPU wasn’t very efficient. The cheap MOS 6502 CPU could do basic operations about twice as fast as the 8086, so a 6502-series CPU running at 4 MHz could keep pace with an 8086 running at 8 MHz.

Intel took two approaches to solving this problem. First, it cranked up the clock speed as much as it could. Second, over time Intel and its competitors found ways to make the chips more efficient. Better efficiency is one reason AMD and Cyrix CPUs were as fast as comparable Intel CPUs in their day despite lower clock speeds, and Intel had to respond in kind as it ran up against its own clock speed limits. But the only reason AMD survived is that its chips are compatible with Intel's x86 architecture and can run Windows and its software.

Why the 8086 survived in spite of its disadvantages

Being used in the original IBM PC and its compatibles allowed Intel to overcome the disadvantages of the 8086 microprocessor.

Free markets rarely choose the best solution. Free markets usually pick the good-enough solution, especially when the technically superior solution costs more. Intel did a better job than most of its competitors at driving costs down. So when the Intel CPU was faster and cheaper, consumers didn't care that it was less elegant under the hood. Even when the Intel CPU wasn't as good, competing technologies that fell by the wayside, like the Motorola 68000 series and the DEC Alpha, had an uphill battle because the software most people wanted to run was designed for Intel CPUs.

The disadvantages of the 8086 microprocessor caused Microsoft to hedge its bets more than once during its history. But in the end, they didn't outweigh Intel's ability to flood the market with inexpensive-enough, good-enough CPUs. Every surviving chip that isn't 8086-compatible survived for a small number of very specific reasons, sometimes as few as one. With the exception of the ARM architecture popular in smartphones, where a consortium managed to out-good-enough Intel, most of them exist in very narrow niche markets. Intel can't and won't quit x86.
