Why did the Atari ST fail? It is hard for me to be objective about the Atari ST, because I was a dyed-in-the-wool Amiga fanboy in the early ’90s. I am not supposed to like the Atari ST. And I’m not sure if that makes it easier or harder for me to see it as a failure, but I don’t see the Atari ST as a failure.
Then again, since you probably are not reading this on an Atari ST, there is an argument that it did fail. So it could be that I am grading on a curve. I expected the machine to be a nothing burger, and it ended up selling 2.1 million units at a time when 2.1 million units shipped still was a pretty impressive number.
So I guess you can say it’s the 1969 Chicago Cubs of retro computers. It didn’t win a championship of any kind, but it hung in there and it’s easy to see how it could have won if a couple of things had gone differently, so people remember it fondly.
Why the Atari ST should have done better
It is pretty easy to see why 2 million people thought highly enough of the Atari ST to buy one. It gave a Macintosh-like experience for the price of a PC clone. And it came on the market in mid-1985, barely a year after the Mac.
It cost 1/3 as much, and there is absolutely no way you can argue it was only 1/3 as good. It had the same CPU, it had more memory, it had color, it had a better sound chip, and it had built-in MIDI.
The Atari ST was almost everything the standard 1991 386SX PC clone running Windows 3.0 was, but in 1985.
It looked like the future. And unlike the Amiga, which failed because it was too far ahead of its time, the Atari ST was a better match for its time without looking stodgy and old-fashioned.
Like I said before, I’m not an Atari fan. I look at the ST as something that could have done much worse, but in a fair and just world, it probably deserved to do better than it did.
The Atari ST failed because Atari failed
Ultimately, the problem with the ST was that Atari bet its very existence on the system being a raging success. And since the ST was not a success along the lines of the Atari 2600 games console or the Commodore 64 computer, Atari failed, and the ST went along with it. Atari historians would say it is more complex than that, and they have a point. But you’re not here to read a 200-page book.
And in all fairness, the Atari 2600 was still fresh in everyone’s memory at the time. The spectacular rise and fall of that console was what led to Jack Tramiel and his sons buying Atari in the first place. Trying to recapture the 2600’s success while avoiding what went wrong was a very relatable narrative to the Atari employees who remained. Plus, Tramiel was coming off a similar success at his previous company, Commodore.
It was the obvious path forward in what was still a very young industry that had very few precedents to learn from. It seemed like the right thing to try, and it seemed doable at the time.
The comeback under Tramiel
Regardless of what you think of Tramiel, it was a pretty spectacular comeback story. Tramiel left his previous employer because he disagreed with his jetsetting boss who was mostly interested in plundering the company, so he and a ragtag team of former Commodore engineers who followed him teamed up with Atari engineers to build the product Tramiel wasn’t able to build at Commodore. In six months they went from an idea to having a working prototype. Five months later, it was in production.
It was a total rush job. And yet, the machine worked.
Arguably he didn’t know what he was in for, but he’d seen worse. I don’t mean to make light of his life situation, but he was an Auschwitz survivor. He was keeping the company afloat by selling surplus office furniture, but he had the perspective to be completely willing to do that. He had been in a situation in his life when he had fewer options than that to survive.
Up to that point, everything that needed to go right had gone right. The ST beat its main competitor to market by five months. The DOA rates and reliability, always a danger with rush jobs, were within acceptable standards. And it had an acceptable number of launch titles.
But the Atari ST, and Atari, failed in the end because it wasn’t enough to compensate for what happened after launch.
The Atari ST wasn’t easy to buy
Some people would argue with me that this is also an oversimplification, but I don’t think so. Rule number one of marketing is to make your product easy to buy. I’ve seen this myself working at software companies. When someone has time to change their mind, they will.
And this is a U.S.-heavy perspective, but the United States is a huge market. The ST did well in Europe, but it was easier to buy an ST in Europe than it was in the States.
The computers that did well in the 1980s were very easy to buy. They had extensive dealer networks, frequently a combination of national distributors and retail stores that had a lot of locations, as well as some kind of network of locally owned independent dealers.
The other big names of the 1980s had that, and it grew over time. Atari didn’t really. And there were two reasons for that. A lot of national retailers got burned pretty badly in the video game crash of 1983. And while Atari was under new management, those retailers remembered Jack Tramiel from his days at Commodore, and they hadn’t liked working with him then either.
At the very least, every major city had one or two independent computer shops that sold Atari computers because the owner liked Atari computers. There were also some mail order and phone order discounters who advertised in computer magazines and sold the Atari ST. But that was a different situation from most of the competition, where a significant part of the population could drive 5 minutes and buy a computer in person. Radio Shack was a problem for the ST. Arguably its biggest problem in the United States.
What does Radio Shack have to do with Atari? Right around the same time the Atari ST came out, Radio Shack started selling the Tandy 1000. At the time, Radio Shack had nearly as many locations as McDonald’s.
The Tandy 1000 was an inexpensive IBM PC compatible with a nice sound chip and nice color graphics, in exactly the same price range as the Atari ST. The Atari ST was a better computer overall. The Tandy 1000 was nothing more than an IBM PCjr clone with a nicer keyboard and fixes to improve its compatibility with the IBM PC. It sounds like yesterday’s leftovers, but it was a known quantity. The Atari ST was an unknown. You might have to buy one sight unseen, too.
Or you could drive 5 minutes and see a Tandy 1000 in person. They had a wall of software that was available for it right there in the store. It would be running a demo that showed off its capabilities. And if you asked nicely enough, you could probably try the machine out. If you liked it, you could bring one home with you that day.
That’s why it took about 6 years for Atari to sell 2.1 million STs while Radio Shack sold about that many every year from 1986 to 1989.
From yesterday’s leftovers to viable competitor
Today it is popular to say the IBM PC won because it was an open standard, and open standards always win. I think it’s more complicated than that. I think the reason it won was because the Tandy 1000 was an affordable and easy to buy computer that turned the architecture into a viable home computer. Including having some nice video games to play.
Without that option, in the mid-1980s it would have been very easy for Atari and its direct competitors to position the entire IBM PC clone market as boring beige business computers, while their computers could do a lot more interesting things, both at home and in the office. Instead, the Tandy 1000 with its adequate graphics and sound positioned itself as the machine that could do both. Being able to run both King’s Quest and Lotus 1-2-3 made it look like the sensible choice.
Both the Atari ST and Amiga were better computers than the Tandy 1000. But neither of them was easy to buy. That, and their companies’ many other problems, doomed them to also-ran status.
The 68000 generation
In 1985, it appeared for all the world that the future belonged to the Motorola 68000 CPU that powered the Atari ST, Macintosh, and Amiga. It was the first successful and affordable 32-bit microprocessor, and it could address larger amounts of memory than an IBM PC could, without any weird, hacky programming or hardware tricks. And in the case of the ST and Amiga, the machines built around it had really nice color graphics and sound.
The difference was rather like the difference between a Sega Genesis and the most popular 8-bit game consoles.
But while the Sega Genesis transformed the video game industry, its counterparts in the computer industry had a more difficult time.
Failing to redefine the market
I have heard the argument that Apple, Atari, and Commodore spent too much time competing against each other and not enough time trying to redefine the market. The contrarian point of view is that they should have positioned themselves as the next generation of computers and defined the IBM PC as backward and old fashioned.
It is very much a contrarian point of view, and would have been completely out of character for all three of those companies. Maybe Motorola should have stepped in and tried to do it, and they would have benefited because it was their CPU powering the revolution. But it is also easy to see why they wanted to stay out of it. They didn’t want to seem like they were favoring one of the three companies over the others, and they were selling chips to IBM at the time as well. IBM didn’t use a Motorola CPU, but they did use a Motorola video chip in some of their models.
The result was everybody talked about these new Motorola powered computers, but people talked about them while they bought something else. For the most part, the Atari ST, Macintosh, and Amiga were what everyone planned to upgrade to someday, and in many cases, that day just never came. In other cases, that day came, but it was sometime after both Commodore and Atari were no longer viable corporations.
It sounds like a cop-out, but software piracy played a role in the fate of the Atari ST, and for that matter, the Amiga as well. Both systems enjoyed a great deal of software support early on, and then it tapered off. The ST ramped up earlier, by virtue of hitting the market about 5 months sooner, and it also tailed off sooner. But both systems saw the number of new software releases tail off as they aged.
No system was immune to software piracy. I knew PC owners in the 1987 and 1988 time frame who had large collections of pirated software, but the average PC owner during that time frame was more likely to buy software than the average ST or Amiga owner. Or at least that was the perception.
Ultimately it wasn’t who was pirating more that mattered as much as the perception. Whatever the truth was, the software publishers believed it. Game publishers who had no interest in producing games for the PC in 1985 were very interested in doing so by 1987 or 1988.
The Tandy 1000 had kept the aging PC architecture in the game, and bought the PC ecosystem time for the graphics, CPU, and sound to catch up with the ST.
Lack of productivity software
And when it came to serious productivity software, the ST had a hard time. It had a nice high resolution monochrome display with a high refresh rate that gave a clear, Mac-like display, only larger, and that helped it gain a following as a desktop publishing platform, especially in Europe. But attracting AAA business titles was a struggle. WordPerfect ported its popular word processor to the ST, but it’s hard to come up with another example.
And with few well known business titles available, it was easy to dismiss the Atari ST as a games machine.
The Amiga 500
I also don’t think it is a coincidence that the Atari ST outsold the Amiga in 1985 and 1986, but that changed in 1987. I say this as an Amiga fan, but the Amiga 1000 was an awkward middle child of a computer. It was a desktop form factor machine, but without the expandability. The ST had a nice all-in-one design that took up less space, undercut it in price, and had more of what you wanted built in. The ST was a more practical home computer, and frankly, it was probably a more practical business computer too, even if the Amiga 1000 looked more like one.
The Amiga 500, released in 1987, had a similar all-in-one design and a lower price. The ST could still be less expensive, but the difference was $100, not several hundred dollars.
The Commodore Amiga 500 pulled a Goldilocks on the ST. It had the same amount of memory as the Atari 520ST, but it had the all-in-one design including a floppy disk drive of the Atari 1040ST. And it had a trap door expansion slot to add memory without having to open the case.
So it cost more than a 520ST, and by the time you added an additional half megabyte of memory to make it competitive with a 1040ST, it cost more than the 1040ST. But you didn’t have to buy it all at once. You could buy the base model with a monitor, then expand the memory very easily the next year.
The Commodore 64 mindset
The other problem with the Atari ST was Jack Tramiel’s mindset of treating it like a bigger Commodore 64. By that I mean Commodore released the Commodore 64 in 1982 under Tramiel, and it was still selling the exact same machine 3 years later.
That model mostly worked in the 8-bit world, but not so much in the world the ST competed in. By the late 1980s, consumers were starting to expect occasional operating system upgrades. Ironically, it was Atari’s partner, Digital Research, who got that ball rolling in the PC market by releasing DR-DOS. Apple routinely shipped upgrades to the Macintosh operating system. Commodore updated the Amiga operating system every couple of years.
Atari upgraded the operating system for the ST occasionally, but generally did not keep pace with Commodore, let alone Apple. And while early versions of Windows were a non-contender, by 1990, Windows 3.0 was a different story.
That caused two problems. First, Atari missed out on a potential revenue stream, selling operating system upgrades. Second, it meant the ST looked increasingly dated over time.
Arguably, Atari did a better job of keeping the hardware up to date than it did the operating system, but it still didn’t keep pace with the PC market. 1987’s Mega ST added memory and a blitter to speed up graphics. But the STE, with an increased color palette of 4,096 colors, didn’t appear until 1989. And the Mega STE was the only ST model with a faster processor, at 16 MHz.
The Atari TT and Falcon made use of the excellent Motorola 68030 processor, but they hit the market in 1990 and 1992, respectively. On paper, they looked thoroughly outclassed by PCs running on Intel 486 processors with VGA graphics and a Sound Blaster. In reality Atari held its own, but you had to see the machines side by side to know that.
In markets where you could easily see the machines side by side, the ST did pretty well. That’s why the ST fared better in Europe than in, say, the United States. As the ’80s and ’90s wore on, opportunities to see a PC and Mac side by side grew more common, while the ST remained a specialty shop item.
If you’re not first, you’re last
There was a line in the Will Ferrell movie Talladega Nights that said, “If you’re not first, you’re last.” That’s not how it works in the computer and gaming industry. You can finish second and do just fine. The game console industry has three major players. Microsoft and Nintendo alternate finishing third, but they sell enough units even in third place that even if one of them finished third all the time, it would probably be okay.
Conventional wisdom says that the market can only support two computer platforms. But there is no reason that has to be the case. It’s not that the number three and number four computer platforms were automatically doomed.
There is a threshold a new product needs to meet in order to ensure its continued survival. Reaching 16% of your potential market will do the trick. IBM and the clones got there first. Apple got there and lost it, then Microsoft figured out it is better to be the dominant part of a duopoly than to be a monopoly. You have fewer legal problems that way.
In Atari’s case, selling 2.1 million units wasn’t enough. It is fun to speculate that if Atari had given the ST family one last iteration in the early 90s, with Commodore out of the way, the ST could have had a resurgence. But I think it would have been too little, too late. By the time Commodore was out of the way, Apple was starting to have problems as well.
But it’s easy to see, and fun to speculate, how one or two things going differently in the early stages could have radically changed the story.
3 thoughts on “Why did the Atari ST fail?”
Very nice text as always 🙂
I have some questions/observations:
– “2.1 millions”
where did you get this number from?
– “It cost 1/3 as much, and there is absolutely no way you can argue it was only 1/3 as good. It had the same CPU, it had more memory, it had color, it had a better sound chip, and it had built-in MIDI.”
Just to add: it had 1/3 higher resolution and a 1/3 bigger monitor than the Mac.
It was also 1/3 faster than the Mac, because the CPU was not stalled by display memory access.
So: 1/3 of the price but 1/3 better 😉
– “The Atari ST was almost everything the standard 1991 386SX PC clone running Windows 3.0 was, but in 1985.”
A testament to this is that software from the Atari ST was gradually ported to Windows as soon as Windows became “good enough”, especially after Windows 95:
Apple Logic, Cubase, Calamus, PageStream, 3D Max (although this one was ported to DOS), Autodesk Animator, and some more obscure ones like FeFlow… were originally written for the ST and later ported to other platforms.
On the ST you can find very advanced software that still does not have a counterpart on PC or Mac: like the German ProText, which is a unique blend of Excel and Word!
Please note that WordPerfect was one of the “low-end” programs on the ST (but better than Microsoft Write). The ST had much more advanced text processors than WordPerfect! Dozens of text processors whose quality and options surpassed anything on the PC of that era: Signum, Calligrapher, CyPress, Script, Tempus… were far, far more advanced than top-notch DOS software like WordPerfect or WordStar.
Key point of ST failure:
“Nobody ever got fired for buying IBM”
is so true and clear now! The evidence is that the ST had far more advanced and superior software compared to the PC, but too few people saw it (and the ST and ST-specific software). Today, lurking on ST forums, I am astonished how few people are aware of the best ST software – everybody talks about WordPerfect on the ST (!) but, like I said, it was really bad compared to the ST’s unique text processors.
– “And with few well known business titles available, it was easy to dismiss the Atari ST as a games machine.”
Yes, since everybody was looking for Lotus 1-2-3 or dBase (clones of both exist on the ST, but unique database software for the ST looks like today’s database software: complete drag and drop solutions with a GUI form editor…).
Many recognized the ST’s strength and used it to write specialized software like FeFlow (today: https://www.mikepoweredbydhi.com/products/feflow ).
So there was more than enough software, much more advanced than its PC counterparts, but the main problem was, like you said: too few people knew about it.
Here you can find software for the ST and see how the software looks on the ST: http://milan.kovac.cc/atari/software/?folder=/DTP or http://atariuptodate.de/en/office/wordpro
– “Rule number one of marketing is to make your product easy to buy.”
Jack Tramiel made a grave mistake trying to fix this problem: he bought the Federated Group in the hope of making the ST more available. Federated was a death knell for Atari Corp., since it brought 80 million in debt (not sure how Jack made such a mistake and did not see the real value or true papers of Federated…).
Thanks! The 2.1 million figure came from Jeremy Reimer’s research, published in Ars Technica. https://arstechnica.com/features/2005/12/total-share/
And I agree with Brian, if the Amiga hadn’t been available, I probably would have gone for an ST. I would have given it a very long look, that’s for sure.
Had the Amiga 1000 not been available for sale in 1986, there’s no doubt in my mind that I would have bought an Atari ST, instead.