Hang around enough people like me who’ve been in IT for decades and eventually the Y2K problem comes up. But what was the Y2K problem? What was the solution? And was the problem overblown?
I was in an odd position. I argued in 1999 and 2000 that any problems we had would be relatively minor. But I don’t think the efforts to fix Y2K were overblown. I may be in the minority on that, but I’ll explain.
What was the Y2K problem?
But first, let’s understand the problem. In the late 1990s, lots of computer systems and programs were still running that were never expected to survive into the year 2000. They represented dates with two-digit years to save precious memory; well into the 1980s, saving two bytes per date was significant on many systems. The problem was that no one knew what would happen when those two-digit dates rolled over from 99 to 00. Would the computer think it was 2000? 1900? Some other goofy date?
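A hypothetical sketch of the kind of bug this caused: if a record stores only the last two digits of a year, naive subtraction works fine right up until the century rolls over. The function name and values here are illustrative, not from any real system.

```python
# Illustrative only: a record stores just the last two digits of a year,
# as many pre-2000 systems did to save storage.
def age_from_two_digit_year(birth_yy, current_yy):
    # Naive subtraction, the way old code often did it -- no century handling.
    return current_yy - birth_yy

# In 1999 this looks fine: someone born in '65 is 34.
print(age_from_two_digit_year(65, 99))  # 34

# After the rollover, "00" minus "65" yields a nonsense negative age.
print(age_from_two_digit_year(65, 0))   # -65
```

Whether a real system produced a negative number, a 1900 date, or a crash depended entirely on what the surrounding code did with that result.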
So we spent lots of time, effort, and money in the late 1990s finding these old systems and putting fixes in place. We also spent time trying to convince people it wasn’t going to be the apocalypse. But there were a lot of Chicken Littles running around who amassed hoards of food, water, batteries, and other supplies they thought they would need when disaster hit.
As I recall, I withdrew about $300 from the bank, made sure I had about a week’s supply of extra groceries on hand, and filled my bathtub with water. And I stayed home that night, waiting for phone calls that never happened. I stayed sober and played video games, probably Civilization.
What was the solution?
The solution to the Y2K problem depended on the system and the software. If the software was still maintained, you just applied a patch, the same way we apply security patches every month today. The difference in 1999 was that centralized deployment systems were relatively rare, and the early versions of SCCM (it was called SMS back then) were terrible. So we did a lot of running around, installing patches manually.
There were lots of off-the-shelf Y2K “shims” for PCs that fixed various Y2K-related problems. When you didn’t have anything specific to load, you loaded one of those on all of your PCs.
But a lot of places had custom software written on mainframes or minicomputers. If you couldn’t replace the software with something off the shelf, you found a programmer who could understand that old code and revise it. They had to hunt through the code, find any variables that held years, and make sure they used four digits. They also had to find any code that did date-related math and revise it to handle four-digit years. I have a relative by marriage who spent several years in the late 1990s doing nothing but rewriting old computer code to handle four-digit dates.
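Besides widening every stored field to four digits, one common remediation technique was “windowing”: leave the two-digit data alone and map it to four digits around a pivot year. This is a sketch of the idea, not any particular project’s code; the pivot value of 30 is an assumption, since real projects chose it per application.

```python
# "Windowing": map two-digit years to four digits around a pivot,
# so stored data doesn't have to change. The pivot (30) is illustrative.
PIVOT = 30

def expand_year(yy):
    # Years below the pivot are assumed to be 20xx, the rest 19xx.
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(expand_year(99))  # 1999
print(expand_year(5))   # 2005
```

Windowing was cheaper than rewriting storage formats, but it only deferred the problem: once real dates cross the pivot, the same ambiguity comes back.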
If you’ve seen the movie Office Space, the main character was supposed to be fixing Y2K code.
There were some systems that didn’t handle the transition properly, and the date would switch to some random date in the past. Which date depended on any number of things. Many of these systems worked fine once you reset the date to January 1, 2000 or later, however.
In some cases, we just had to discard old systems because there was nothing we could do to make them Y2K compliant. We had the same problem then that we have now: 20-year-old systems lying around that nobody understood, except that there was this one business-critical function that used them. There were fewer of them than today. A lot of them got replaced as part of Y2K projects, just in case.
Was Y2K overblown?
When I read about Y2K today, people seem to think it was overblown. They look at it as a financial disaster, because every company in the world spent large sums of money, and nothing happened.
Here’s a similar situation. About two years ago, my 14-year-old Honda Civic didn’t pass its safety inspection, so I bought a Toyota Camry. But I haven’t been in a car accident since then, so did I waste my money by buying that car?
No and no. Both cases are examples of taking precautions to keep something bad from happening. And then something bad didn’t happen. That means the precautions worked.
There was an added benefit. We replaced a lot of aged technology with new stuff that was faster and nicer. The problem was that too many people slashed their IT budgets right after Y2K and that made the dotcom bust worse than it otherwise would have been.
People forget there was a tremendous amount of pressure to fix it. People literally thought the world was going to end. There were survivalist magazines about it. It was never going to be as bad as the worst-case scenarios people were assuming. So, from that perspective, yes, it was overblown. The important lesson from Y2K was that nobody knew how big the problem would be, but the people who knew how to find and fix it did just that, and did such a good job that nothing bad happened.
The “other” Y2K problem
Of course, one reason Y2K wasn’t as big of a problem as it could have been was that there were (and still are) large numbers of systems that don’t have a Y2K problem. Unix systems never used two-digit dates by default and couldn’t care less about 1900. Unix systems measure time as a count of seconds elapsed since January 1, 1970. To a Unix system, there was nothing at all special about January 1, 2000. All you had to do was make sure that any custom software running under Unix wasn’t doing its own date work.
The problem with Unix systems is that where that count of seconds is stored as a signed 32-bit number, the clock is going to wrap around on January 19, 2038, and suddenly think it’s December 13, 1901.
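The wraparound is easy to demonstrate by reinterpreting the second count the way a signed 32-bit time_t would store it. This is a simulation in Python, not actual Unix clock code:

```python
from datetime import datetime, timedelta, timezone
import struct

# The Unix epoch: time is counted as seconds from this instant.
EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def as_32bit_time(seconds):
    # Reinterpret the second count as a signed 32-bit integer,
    # the way a 32-bit time_t stores it.
    wrapped, = struct.unpack('<i', struct.pack('<I', seconds & 0xFFFFFFFF))
    return EPOCH + timedelta(seconds=wrapped)

# The largest value a signed 32-bit counter can hold:
print(as_32bit_time(2**31 - 1))  # 2038-01-19 03:14:07+00:00

# One second later the sign bit flips and the clock lands in 1901:
print(as_32bit_time(2**31))      # 1901-12-13 20:45:52+00:00
```

The fix is the same in spirit as Y2K: widen the field. Modern 64-bit systems already use a 64-bit time_t, which pushes the wraparound billions of years out.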
We have a little less than 20 years now to find and patch or replace all of the old 32-bit Unix and Linux systems that are hanging around. I hope people don’t dismiss it as fearmongering. We’ll see.