Last Updated on February 21, 2022 by Dave Farquhar
What is MS-DOS? I guess the question dates itself, or rather, the person who asked it. DOS is obsolete today, but was so important in the 80s and 90s that people assume you know what it was.
MS-DOS was the operating system PCs used before Windows. It was the first popular 16-bit operating system for microcomputers, and it was the dominant operating system of the 1980s and early 1990s. It did not multitask.
The MS-DOS backstory is legendary. The short version is that in the 1970s, the dominant computer operating system was CP/M. CP/M was so popular that IBM wanted to use it on its IBM PC. The problem was, CP/M ran on Intel 8080 CPUs and IBM planned to use the newer Intel 8088 CPU. The two chips were similar, but CP/M wouldn’t run on the newer chip without modification.
IBM approached Microsoft, who, ironically, produced a hardware board that allowed CP/M to run on the Apple II computer and bundled CP/M with it under license. Microsoft referred IBM to Digital Research, the makers of CP/M. For reasons lost to history, IBM and Digital Research couldn’t come to an agreement. The story that Gary Kildall, the original author of CP/M, blew IBM off while flying in his airplane is an urban legend.
Meanwhile, back in Seattle, a small company named Seattle Computer Products was tired of waiting for Kildall to produce an 8088-compatible version of CP/M. One of its employees, Tim Paterson, wrote his own workalike for the 8088, initially called QDOS and later 86-DOS. Gates knew Paterson, and when IBM came back to Microsoft to talk options, Gates arranged to license Paterson’s work. Eventually Microsoft bought it outright, for less than $100,000.
Microsoft in turn licensed DOS to IBM, which sold it under the name PC DOS. Microsoft also licensed it to other companies like Compaq under the name MS-DOS, though the two systems were functionally identical. Within 10 years, Microsoft was a billion-dollar company and Bill Gates was a billionaire, largely thanks to DOS. Saying Microsoft fleeced SCP is an understatement.
Digital Research’s reaction
Digital Research regarded MS-DOS as a ripoff of its intellectual property. DOS and CP/M were extremely similar, and intentionally so. IBM wanted it to be easy to port existing CP/M software to its new system. Whether MS-DOS contained stolen CP/M code is controversial.
Gary Kildall always said it did, and some other industry pioneers insisted they’d seen proof, though none of them ever produced the evidence. Digital Research considered suing IBM and Microsoft but feared the litigation would squeeze it out of business. Recent forensic analysis has claimed to clear Microsoft, though I have problems with the methodology: it only tests the hypothesis that DOS contained copied CP/M source code. If DOS had been derived from CP/M directly, it would have been from disassembled memory dumps, not source code.
But since DOS and CP/M were so similar, it didn’t take much for Digital Research to produce its own compatible DOS, which they did. It was called DR DOS.
Kildall died in 1994 under mysterious circumstances. Of course there have been conspiracy theories about that ever since, but it appears he had an undiagnosed medical condition. There’s no reason to believe someone had him killed to silence him.
What was MS-DOS like?
MS-DOS had a command-line interface, similar to what you get when you open a command prompt in Windows today. It referred to disk drives through drive letters like CP/M did. The first floppy drive was A:, the second floppy drive was B:, and if you had a hard drive, it was drive C:. Windows inherited this naming convention, which is why Windows installs on your C drive and your modern PC probably doesn’t even have an A or B drive.
To change between drives, you typed the name of the drive, followed by a colon, and hit the enter key. To launch a program, you inserted the disk containing the program, changed to that drive, then typed the program’s name.
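A session on a two-floppy machine might look like this (GAME is a hypothetical program name; early versions of DOS showed just the drive letter as the prompt):

```
A>B:
B>GAME
```

The first command switches from drive A to drive B; the second loads and runs GAME.COM or GAME.EXE from the disk in drive B.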
DOS had a number of included commands to perform various functions. The commands date and time allowed you to set the date and time. Most of the command names and syntax were lifted from CP/M, though Microsoft famously replaced CP/M’s cryptic PIP command with a much more intuitive command called copy.
Once DOS gained the capability to use subdirectories, it used backslashes as the separator rather than forward slashes like Unix. DOS had already appropriated the slash character for passing parameters to programs.
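So a backslash separated directories while a slash introduced a switch. Both of these commands use real DOS switches (/W for a wide directory listing, /V to verify a copy), though the directory and file names are hypothetical:

```
C>DIR C:\DOS /W
C>COPY C:\DOCS\LETTER.TXT A:\ /V
```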
To a casual observer, DOS and Unix certainly appear similar. But Unix was much more powerful, and while there’s a bit of Unix influence in DOS, it was directly influenced by CP/M, which itself was influenced by minicomputer operating systems, especially those from Digital Equipment Corporation (DEC).
The Intel 8088 CPU could address up to a megabyte of memory. But you couldn’t just load a system up with a full megabyte of RAM and have a functioning system; IBM had to reserve some of that address space for ROM and I/O chips. IBM chose to reserve 384K for ROM and I/O and leave 640K open for RAM. This led to the infamous statement attributed to Bill Gates, “640K ought to be enough for anybody,” a quote he denies ever making. Whoever said it didn’t mean for all time. In 1981, when 128K was a lot of memory, IBM planned to ship the PC with as little as 16K of RAM in its most basic configuration.
The IBM PC was far more successful and had a much longer market life than anyone expected in 1981. So various companies came up with different schemes to get more memory. This included expanded and extended memory, and later on, tricks to get more conventional memory available.
Speaking of memory, the 8088 had a weird way of addressing it. Rather than giving you a flat megabyte of address space, it combined a 16-bit segment with a 16-bit offset. This scheme made porting software from the 8080 and CP/M much easier, but made the PC harder to program from scratch. It made sense in 1981, but people trying to learn to program DOS machines in the late 80s cursed that decision.
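The CPU formed a 20-bit physical address by shifting the 16-bit segment value left four bits (multiplying by 16) and adding a 16-bit offset, which meant many different segment:offset pairs pointed at the same byte. A worked example:

```
physical address = (segment x 16) + offset

1234:0010  ->  12340h + 0010h = 12350h
1235:0000  ->  12350h + 0000h = 12350h   (a different pair, the same byte)
```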
A different world from today
While a megabyte of address space was enormous in 1981, today it’s very limiting. DOS was a much simpler system than what we use now. There was no multitasking, no hardware abstraction, device drivers were very limited, and the system just did a lot less for you than modern operating systems do.
Hardware makers had to walk a very fine line. They needed to make their hardware as similar to IBM’s as possible without infringing on IBM’s patents or copyrights and getting sued.
Today, we can just throw memory and CPU power at compatibility issues to make them go away. MS-DOS didn’t have that luxury.
DOS stored its configuration largely in two files: config.sys and autoexec.bat. Generally config.sys was for device drivers, and autoexec.bat was a script that DOS ran at startup that handled everything else. This script could further fine-tune configuration, loading things that couldn’t load in config.sys, and it could even run programs automatically for you, much like putting shortcuts in the Startup folder in Windows.
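Here’s a minimal, hypothetical pair for a later version of DOS. HIMEM.SYS, DEVICE, FILES, BUFFERS, PATH, and PROMPT are all real directives, but the exact paths and values varied from machine to machine:

```
REM CONFIG.SYS -- load drivers and set kernel limits
DEVICE=C:\DOS\HIMEM.SYS
FILES=30
BUFFERS=20

REM AUTOEXEC.BAT -- runs after CONFIG.SYS is processed
PATH C:\DOS;C:\UTILS
PROMPT $P$G
```

PROMPT $P$G is what gave DOS its familiar C:\> prompt, showing the current directory instead of just the drive letter.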
Although DOS couldn’t multitask, it did have the facility for terminate-and-stay-resident programs. These programs would load, then terminate, and if you hit a hotkey, you could activate them. The program you were working on would freeze, but this provided a way to, say, launch a calculator or a calendar while you were working in a word processor without exiting the word processor.
Power users might very well load a couple of TSRs in autoexec.bat, even if they left the computer to boot to a C prompt.
DOS addressed peripherals in much the same way as disk drives. The parallel printer port was LPT1:. The RS-232 serial port was COM1:.
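Because the printer port was just another device name, you could print a plain text file with the same copy command you used for disk files:

```
C>COPY AUTOEXEC.BAT LPT1
```

This copied the file byte-for-byte to the parallel port, and the printer printed whatever came through.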
Software had to provide its own drivers for peripherals like printers and modems, as DOS only provided the most basic functionality. The quality of the included printer drivers could make or break DOS word processors in the days before Windows.
DOS lacked hardware abstraction like modern operating systems, so DOS software generally talked directly to the hardware. This meant you couldn’t have the kind of hardware variance we have today. Today, AMD and Nvidia can do what they want with their video cards and let the drivers ensure software runs on both. In the DOS era, they had to either make their cards compatible, or get developers to support both.
This is why hardware was so critical for compatibility. After IBM lost control of the PC standard in the 386 era, Intel had little choice but to start making motherboards to stabilize the market. Intel was more concerned about preserving quality than compatibility, but their offerings had the side effect of creating a standard to fill the void.
Why DOS caught on
How DOS came to dominate the industry is its own story, but the short version is, IBM was a safe choice in the 1980s. The saying then went that no one got fired for buying IBM. That ensured some degree of success, and the rise of inexpensive clone PCs increased it. On one hand, you had the safe choice, IBM. On the other hand, you had a cheap-enough, good-enough choice. They just happened to run the same software.

Over time that ecosystem built up enough momentum to corner 90 percent of the computer market. After IBM fell out of favor, the cheap-enough, good-enough choice also became the safe one. You could buy a computer for $1,000 (or even a bit less) at any office supply or consumer electronics store. You could run the same software on it you ran on the PC on your desk at work. And there were interesting games available for it too.
There certainly were other options, and most of them were better in at least one way. But none of them had the combination of a large software library, expandability, and low cost of entry that cheap IBM-compatible PCs running MS-DOS had. By 1994, only Apple remained as a mainstream alternative, and Apple almost didn’t survive the decade either.
DOS turned into the OS that wouldn’t die. Even by 1984, DOS was showing its age, and IBM’s PC/AT showed its limits. Microsoft and IBM started planning for a future beyond DOS. Would PCs run some form of Unix, or would they run something else? Unix was on the table, but Microsoft and IBM worked on a successor to DOS, called OS/2. Meanwhile, Microsoft was also developing Windows. Prior to Windows 95, Windows was just an application that ran on top of DOS, providing a GUI while DOS provided disk handling and most other I/O. Eventually Microsoft and IBM parted ways on OS/2, and the work Microsoft had done gave rise to Windows NT.
OS/2 and Windows NT replaced the DOS underpinnings with a more modern kernel that could multitask and provide much better stability and security, along with facilities for real device drivers. But it took time to catch on. Microsoft mostly stopped marketing DOS after 1995, but that DOS core remained in Windows through Windows ME. It wasn’t until Windows XP, released in 2001, that DOS truly went away.
In the meantime, developers found clever ways to milk more life out of DOS. But DOS certainly had some disadvantages. DOS survives today in some embedded applications, certain legacy applications, and retro computing. I guess it has regained some retro charm over the years. But in 2001, I have to say, I couldn’t wait to see it go. And I wasn’t alone in that.
David Farquhar is a computer security professional, entrepreneur, and author. He started his career as a part-time computer technician in 1994, worked his way up to system administrator by 1997, and has specialized in vulnerability management since 2013. He invests in real estate on the side and his hobbies include O gauge trains, baseball cards, and retro computers and video games. A University of Missouri graduate, he holds CISSP and Security+ certifications. He lives in St. Louis with his family.