AI’s parallels with the dotcom bubble

Last Updated on January 23, 2026 by Dave Farquhar

As someone who lived through the dotcom bubble, experienced the breakthrough of the Internet in the early 1990s, and worked in technology during the boom later in the decade, I’ve been asked what I think of the AI phenomenon going on in the mid-2020s. Yes, I chose that wording for a reason. Time will tell whether it’s best called an AI boom, an AI bubble, or something worse, like an AI scam.

Parallels with the dotcom boom

A modern AI neural network
This is a graphical representation of a modern neural network for AI. Computers have been running neural networks for decades, but they’re far more practical today.

Neither AI nor dotcoms happened overnight. In both cases, they involved technology that had existed for decades. But in both cases, over the course of a few short years, a few things arrived at the right place at the right time to make commercializing it seem practical, creating the illusion of an overnight sensation.

In the case of the dotcom bubble, the Internet was the technology that had existed for decades. Its predecessor, ARPANET, dated to 1969, and the Internet itself came into being about a decade later. But it was the arrival of the World Wide Web, along with wide availability of relatively inexpensive computers powerful enough to run a graphical user interface, the TCP/IP protocol suite, and a web browser simultaneously, that made something resembling the Internet of today possible in the mid-1990s.

Artificial intelligence isn’t new

Artificial Intelligence also isn’t especially new. I first read about artificial intelligence in the 1980s when I was a kid. I didn’t understand much of it, but I understood enough to know that the artificial intelligence in the TV show Knight Rider wasn’t going to be reality anytime soon. In 2012, I bought part of the estate of a software developer who lived about five minutes away from me. In her estate, I found books and disks indicating she had experimented with AI, specifically neural networks, in the mid-1980s on an 8088-based IBM PC. A 4.77 MHz CPU and half a megabyte of RAM aren’t enough to do anything that would impress you today, but they could prove fundamental concepts.

The problem with AI was that we massively underestimated the complexity of the human brain. Even though CPUs are ridiculously powerful today compared to what we had in the 1980s, they still aren’t powerful enough to make AI practical. But it turns out that GPUs, the chips we use for playing 3D video games, are good at massively parallel operations that CPUs still struggle with, including the matrix math at the heart of neural networks. Not only that, GPU power has increased even more in the last decade than CPU power has.
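To make that concrete, here’s a minimal sketch, in plain Python with no GPU required, of the kind of operation GPUs accelerate: a matrix multiplication, the workhorse of neural networks. The toy layer sizes and values below are made up for illustration.

```python
# A neural network layer is essentially one big matrix multiplication:
# outputs = activation(inputs @ weights + bias). GPUs excel here because
# every output cell is independent and can be computed in parallel.

def matmul(a, b):
    """Naive matrix multiply. Each result cell is independent work,
    exactly what a GPU spreads across thousands of cores at once."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

# A tiny "layer": 2 inputs feeding 3 outputs
inputs = [[1.0, 2.0]]
weights = [[0.5, -1.0, 0.25],
           [1.5,  0.0, -0.5]]
print(matmul(inputs, weights))  # [[3.5, -1.0, -0.75]]
```

A CPU walks through those cells mostly one at a time; a GPU computes huge batches of them simultaneously, which is why the same hardware that renders game graphics turned out to be the key to practical neural networks.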

By 2022, it was possible to pull enough computing power together to create a generative chatbot that could read typed questions and respond to them. The first iteration of ChatGPT still was no match for the sentient computer in Knight Rider. But it sure felt like we’d gotten a lot closer. And if you don’t live and breathe technology, it probably felt like it came out of the blue.

The problem with AI

If it had just been something that could answer simple questions, the novelty probably would have worn off pretty quickly. But since it was capable of not only answering questions but completing simple tasks, it looked like something that might have legs. Competing AI technologies started appearing. Even if your primary business isn’t AI, if you’re in the technology field, you’re getting questions about how you use AI and what your future plans are for AI.

ChatGPT soon had no shortage of imitators. The largest technology companies started releasing AI-related products, and some survivors of the dotcom era got into the game too, some trying to keep a high profile and others content to linger in the background.

How AI feels like the dotcom bubble

I’ve heard more than one person at more than one company say it feels like the dawn of the Internet again. I agree with them; this is the closest thing I’ve felt to the early days of the Internet, right down to the promise of a new Industrial Revolution. In the early days of the Internet, technologists promised it would start an information revolution that would transform the world at least as much as the Industrial Revolution had. Today, I’m hearing the same things about AI: that AI will usher in a transformation we haven’t seen since the Industrial Revolution. This implies that the Internet-powered information revolution didn’t quite pan out.

Around the year 2000, I could make people bust out laughing almost at will if I deadpanned, “If it’s on the Internet, it must be true.”

It seemed more like a misinformation revolution in 2000, and in 2025, it’s only gotten worse. In 1987, an adult authority figure lied to a room full of 20 students, including a pre-teen me, about a threat against two 14-year-old girls to manipulate us. Thanks to social media, that same person can–and does–lie to hundreds of people with less effort today.

The cracks in AI

As impressive as AI is, its models aren’t perfect. When they don’t know an answer, they’re prone to making something up. That’s a design decision: some people are less likely to continue interacting with a chatbot that admits it doesn’t know. And when they do have an answer, it’s only as good as the data they were trained on. Scrape the entire Internet to train them, and they’re going to regurgitate some misinformation along the way. They don’t know what’s true and what isn’t. And if they go by majority rule, truth may not emerge the victor.

John Milton asked in 1644 who ever knew truth to lose in a fair fight with falsehood. Maybe Milton was wrong, or maybe the Internet plus AI isn’t a fair fight.

How to stump your favorite AI

I can stump AI with simple questions. When working on a blog post, I needed to know what year Micron became a Fortune 500 company. I typed a query into DuckDuckGo, as I often do when starting to research. The AI summary said it happened after 1996. But as I read the actual search results, I saw several inconclusive snippets, many of them suggesting it was before 1996. So I tried another search. That AI summary claimed nobody knew. At the very least, Micron and Fortune know, so I knew that was garbage. So I started reading the search results. A snippet from some random site said it was in 1994. That was the most precise answer I’d seen yet, and it seemed close. A third search, including the year 1994 as a keyword, yielded results from both Fortune and Micron confirming it was 1994. The AI summary agreed that time.

I started to say you can’t trust AI summaries from search engines, but I have to backtrack on that. They’re great at giving succinct answers to a question like where the three wires go on an electrical outlet. But they struggle to tell you random obscure historical facts. It means you still need a fair bit of critical thinking to use AI, or eventually you’re going to get something wrong. You can’t treat AI like some kind of sentient, all-knowing being, because it isn’t that.

The other big thing dotcoms and AI have in common

But here’s the biggest problem both of them have. Dotcoms couldn’t figure out how to make money. And in spite of massive demand and companies spending billions on AI initiatives, AI isn’t making money either.

AI’s business model ought to be enormously profitable. Thanks to AI startups scraping the entire Internet without permission, sites like mine now get 1/3 the traffic and 1/3 the revenue we once did. Much of that traffic now goes to companies like OpenAI instead, which still have to find a way to monetize it. In spite of that, OpenAI doesn’t expect to be profitable until 2029. At the end of October 2025, The Register estimated OpenAI lost $11.5 billion in just that quarter.

This is a little off topic but if you’re a regular here, and wonder why I write very differently now than I did in 2022, that’s why. The formula I use now helps me protect what search engine traffic I have left while letting me get by on less effort since I now get 1/3 the return I once did. Small web sites like mine have to either adapt or close up shop, which is why large numbers of small sites have disappeared since 2022.

Why doesn’t AI make money?

I can’t prove it, but my suspicion is the huge power requirements of AI keep it from being profitable. A human brain uses 20 watts of power, while the GPUs that power AI use 150 to 750 watts each. So a single GPU can draw up to 37.5 times as much power as a brain, yet the human brain remains several orders of magnitude more capable. AI is impressive because it’s much faster than we are, but it’s not doing everything we do. AI doesn’t do any critical thinking. What AI is actually doing is much closer to autocomplete than it is to critical thinking.
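To illustrate the autocomplete comparison, here’s a toy sketch: a bigram model that predicts the next word purely from counts. A real LLM is vastly more sophisticated, with billions of learned parameters instead of raw counts, but the core task, predicting the next token from what came before, has the same shape.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    successors = defaultdict(Counter)
    for word, nxt in zip(words, words[1:]):
        successors[word][nxt] += 1
    return successors

def predict_next(successors, word):
    """Predict the most frequent successor, like autocomplete does.
    Note: no notion of truth, just whatever appeared most often."""
    counts = successors.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # 'cat' -- it followed "the" twice, "mat" once
```

Notice the model answers by frequency, not by reasoning. Scale that idea up enormously and you get something that sounds fluent, which is exactly why majority-rule training data can produce confident misinformation.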

The industry tries to compensate by building enormous GPU farms, financing them with low-interest loans, and chasing cheap subsidized electrical power. But interest rates fluctuate, and someone has to pay those subsidies. So far, the industry has been buying time, counting on Moore’s Law to deliver a breakthrough every 2-3 years. Doubling GPU power every 2-3 years is impressive, but it’s a slow way to close a gap with the human brain that is still orders of magnitude wide. It’s like trying to pay your mortgage with the proceeds from running a lemonade stand in the driveway.
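A quick back-of-envelope calculation shows why doubling is such a slow way to close an orders-of-magnitude gap. The gap sizes below are illustrative assumptions, not measurements of the brain-versus-GPU difference:

```python
import math

def years_to_close(gap, years_per_doubling=2.5):
    """How long doubling every N years takes to close a capability gap.
    The 2.5-year doubling cadence matches the 2-3 year figure above."""
    doublings = math.ceil(math.log2(gap))
    return doublings * years_per_doubling

# Illustrative gap sizes only: even a modest orders-of-magnitude gap
# takes decades to close at a doubling-per-generation pace.
for gap in (100, 1_000, 100_000):
    print(f"{gap:>7,}x gap: ~{years_to_close(gap):.1f} years")
```

At one doubling every 2.5 years, a 1,000x gap takes ten doublings, about 25 years, and that assumes the doublings keep arriving on schedule.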

Can Nvidia save the day?

Finding a way to profitability is theoretically an easier problem than closing the gap that remains between AI and the human brain. Nvidia delivered a new generation of GPUs at the beginning of 2025, which suggests subsequent generations will arrive in 2027 and 2029 if all goes well. One option would be to do what we do today, but with 1/4 the GPUs and 1/4 the electricity. The more tempting option is to build even bigger datacenters than we build now, getting 16 times the computing power we get today while consuming merely four times as much electricity.
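Both options fall out of the same underlying assumption: a new GPU generation delivering roughly 4x the performance per watt. That figure is illustrative for the arithmetic, not an Nvidia specification:

```python
# Assumed (illustrative) generational gain: 4x performance per watt.
PERF_PER_WATT_GAIN = 4

# Option 1: hold computing power constant and shrink the electric bill.
same_compute_electricity = 1 / PERF_PER_WATT_GAIN   # 0.25x today's electricity

# Option 2: quadruple the electric bill and bank the efficiency gain as compute.
bigger_dc_electricity = 4
bigger_dc_compute = bigger_dc_electricity * PERF_PER_WATT_GAIN  # 16x today's compute

print(same_compute_electricity, bigger_dc_compute)  # 0.25 16
```

The same efficiency gain can be spent on cutting costs or on raw capability, which is exactly why the bigger-datacenter option is so tempting to an industry chasing a breakthrough.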

There’s historical precedent for either option. Desktop computers have stayed roughly the same size for the last 35 years; they just keep getting more powerful. A smartphone is much smaller than a desktop computer, and a smartphone from today isn’t as powerful as a desktop computer of today, but it’s more powerful than a desktop computer from 15 years ago, and that’s powerful enough to be very useful.

Projecting profitability vs reaching profitability

But projecting profitability by 2029 and remaining in business until 2029 are two different things. I will admit one thing does seem different this time around: more AI companies at least appear aware that they have to start making money someday. During the dotcom era, some companies went into their IPOs without any plan to make money.

But I still don’t like that AI companies committed the greatest instance of mass copyright violation in history and completely got away with it, using the flimsiest of excuses: that they needed the training data to survive. I need things too, and I pay for them. And in spite of selling access to stolen property, they can’t turn a profit.

If AI triggers a new industrial revolution, I’ll adapt, and so will pretty much everyone else. Most everyone still in the workforce has spent their whole career adapting to changing conditions, so we can do it again. But if this generation of AI ultimately fails for whatever reason, I won’t shed a tear for it.

I’m also not convinced it’s going to be one extreme or the other. I think a more likely outcome is that we’ll have a shakeout at some point, followed by a reset of expectations, and a few years later there’ll be AI 2.0 similar to the Web 2.0 renaissance we saw beginning around 2004.

The coming AI shakeout

And I do think that shakeout is inevitable. We have a toxic mix of AI hitting a wall, not being profitable, and higher interest rates than we had at the start of the AI rush. It’s possible–not probable, but possible–that a new generation of GPUs could make AI more cost-efficient or provide an increase in processing power that unlocks new AI capability. Realistically, that help won’t arrive until early 2027. And then it has to deliver. But that’s rationality talking. We can argue about whether markets are rational, but I think we can agree markets aren’t known for patience.

And impatient markets cause recessions.

If I haven’t painted a gloomy enough picture for you yet, the dotcom billionaires came out of the dotcom recession just fine, and they’re positioned to do just fine whether AI causes a recession or not. I’m not saying they want a recession. But they’ll profit in the event of an AI revolution or in the event of an AI recession. So they don’t have any reason to care all that much which way things go.

If you found this post informative or helpful, please share it!

3 thoughts on “AI’s parallels with the dotcom bubble”

  • October 8, 2025 at 7:55 am

    Great read, Dave. And thanks for the inside-baseball blurb about why the blog is different since 2022. It makes total sense.

  • October 8, 2025 at 10:41 am

    “where the three wires go on an electrical outlet”
    And you checked, that the answer was not a merging of regulations from 5 different countries, even if you asked for US?

    • October 8, 2025 at 8:32 pm

      “To wire an outlet, connect the hot wire (usually black) to the brass screws, the neutral wire (usually white) to the silver screws, and the ground wire (green or bare) to the green screw. Ensure all connections are secure and follow local electrical codes for safety.”

      Your results may vary but that’s what I got.
