What comes after cloud?

It’s probably not news to you that the hot trend in computing in 2021 is moving from on-premise computing to cloud computing, and it has been for several years. But what comes after cloud? What will replace it? It’s possible to predict what it will look like, just not the specifics.

Why we can’t predict specifics

What comes after cloud is decentralization, the same way armies of servers like this Compaq ProLiant mostly displaced leased mainframes.

Disruptive technology is by nature unpredictable. We knew in the 1980s that some kind of laser-based disc would replace the VCR, but we didn’t see streaming services coming. And that’s why a bank or urgent care now stands where your nearest video rental store used to be.

Amazon’s idea to lease out surplus computing space is obvious to anyone who knows a little bit about information services like GEnie and CompuServe. But it’s really General Electric, not H&R Block, that provides the closest 1980s analogy to modern cloud computing. GEnie, after all, was GE’s way of selling the spare off-peak capacity of its commercial timesharing business. And calling it obvious is revisionist history to a degree. Amazon reused GE’s idea, but with much greater success than GE had. Then again, that’s good disruptive technology. Take someone’s good idea that didn’t quite work and make it work this time.

But few, if any, saw this coming. For years, Amazon was just a dotcom that had managed to survive the dotcom bust but hadn’t figured out how to make money yet. Then it started making money and everyone disagreed about what had changed. Then cloud computing hit and Amazon became a juggernaut. It snuck up on us.

But history is cyclical, so we can predict what the next disruptive technology will look like.

Why we know decentralization will replace cloud

Before businesses bought x86-based PCs, they leased mainframes from companies like IBM and Unisys. Note I didn’t say bought. They leased them. It saved upfront costs, but proved hugely profitable for the mainframe vendors. The more you used the computer, the more you paid. And I’m not just talking electricity. You got a bill from the vendor based on how much capacity you used.

It was crazy expensive in the long run, but the low upfront cost attracted new customers, and it’s not like they had much in the way of alternatives. In the mid-1980s, IBM looked unstoppable. In fact, there was pressure to break up IBM the way the government broke up AT&T.

And then came a chip called the Intel 80386. It could run software for small computers, but it had features that previously were only available in larger computers. Over time, large companies figured out they could save money by strategically buying fast PCs and farming computing work out to them. The upfront cost was high, but then you owned it. You just paid for the computer, not for your usage of it. It took a few years, but companies like Compaq and Dell toppled the IBM juggernaut, aided by the ever-faster successors to the 386 and operating systems like Windows NT and, later, Linux.

But I remember that in 1994 and 1995, Microsoft’s decision to build its online network, MSN, on Compaq computers running Windows NT instead of a mainframe was controversial. Would it scale?

MSN arguably was more valuable to Microsoft as a proof of concept for Windows NT than it was as an online service.

Why we flipped from on-premise computing to cloud

In a way, on-premise computing traded one problem for another. Upgrading a mainframe was IBM’s problem. You just ordered more capacity, and if IBM couldn’t scale your mainframe bigger, they trucked in a new one and moved all your software and data to the new one. IBM figured it out and sent you a bigger bill.

Upgrading PC-based technology is harder. That’s why Windows Server 2008 is still one of the most popular server operating systems in 2021 in spite of being end of life. Every 10 years, large companies spend millions of dollars on projects to migrate from an obsolete Windows Server operating system to a supported one. And there are other hidden costs. The hardware costs $3,000. The Windows Server license costs $1,000. You’ll spend money on maintenance for the hardware. When the server software goes end of life, you spend $100 a year on extended support. You also need an IT staff to keep the systems up and running.

Being able to lease a server for $5 a month sounds pretty good in the face of that.

Cracks in the cloud

If you do the math, is that $5 a month actually cheaper? Especially when that’s the introductory price and you’ll pay more for faster, more powerful slices of the cloud?
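Here’s a rough sketch of that math in Python. The $3,000 hardware cost, $1,000 Windows Server license, and $100-a-year extended support figures are the ones from above; the maintenance cost and the rate the cloud bill grows at are numbers I made up for illustration, and staffing is left out of both sides, so treat the output as a way to frame the question rather than as an answer.

```python
# Back-of-envelope cost comparison: buying a server vs. leasing a slice of the cloud.
# The $3,000 hardware, $1,000 license, and $100/year extended-support figures come
# from this post; the maintenance estimate and the cloud price growth are assumptions.

def on_prem_total(years, maintenance_per_year=300):
    """Cumulative cost of buying and running one server for `years` years."""
    hardware = 3_000          # one-time purchase (figure from the post)
    license_cost = 1_000      # Windows Server license (figure from the post)
    # Assume extended support ($100/year, from the post) kicks in once the
    # OS goes end of life, here taken to be after year 5.
    extended_support = 100 * max(0, years - 5)
    return hardware + license_cost + maintenance_per_year * years + extended_support

def cloud_total(years, intro_monthly=5, yearly_growth=1.5):
    """Cumulative cost of leasing, assuming you outgrow the $5 introductory tier
    and the monthly bill grows by `yearly_growth` each year (an assumption)."""
    monthly, total = intro_monthly, 0
    for _ in range(years):
        total += monthly * 12
        monthly *= yearly_growth
    return round(total)

for years in (1, 3, 5, 10):
    print(f"{years:>2} years:  on-premise ${on_prem_total(years):>6,}   "
          f"cloud ${cloud_total(years):>6,}")
```

With those made-up numbers, the cloud server wins easily in the early years, but the gap narrows once the monthly bill starts growing. Change the growth rate, keep the hardware longer, or charge staffing to one side or the other, and the answer can flip.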

It may not be. I know people at two competing cloud security vendors, Qualys and Tenable. Qualys owns its own cloud, while Tenable leases capacity from Amazon, which it resells in turn. Qualys finds it easier to turn a profit.

When someone figures out how to deliver on-premise computing that’s cheaper than cloud computing, or at least that appears to be cheaper, we’ll see the cycle shift back over to on-premise computing.

It could take decades. But eventually it will happen. Computing is cyclical. And note that these models will probably coexist for a very long time, if not forever. When Windows NT caught on, predicting the imminent death of mainframes became very popular. While some companies did indeed retire their mainframes, most older companies found they had some tasks that couldn’t migrate to other technologies. I’ve seen many CIOs bet their careers on being able to migrate that last thing off a mainframe to save millions, and not win the bet.

If you found this post informative or helpful, please share it!