Slate’s Josephine Wolff argues that you have a moral imperative to claim $125 from Equifax as part of its breach settlement. Preventing breaches like Equifax’s is what I’ve done for a living for most of my career. So here’s why I agree with her argument in favor of making an example of Equifax.
Most companies, in my experience, do patch management and vulnerability management on the cheap and write off the consequences as a cost of doing business. The cost of not doing it right needs to be high enough to get them to spend enough on tools and personnel to get the job done. And as the guy who pushed the patches for 9 years and then shifted in 2014 to being the guy who coaches the patch-pushers, I have a pretty good idea what it takes to do the job right.
What went wrong at Equifax
Equifax’s 2017 breach was caused by a known bug in Apache Struts, a common piece of software on web servers. Sometimes Struts is deployed on its own; sometimes it’s deployed as part of another product, like Oracle WebLogic. Researchers first noticed hackers exploiting the bug in March 2017, and the Apache developers quickly released a well-publicized update. I was an account manager at the security firm Qualys at the time, and although Qualys released a signature for the bug within 24 hours, I still fielded phone calls from large companies complaining that the check hadn’t come quickly enough.
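For the record, the bug was CVE-2017-5638, a flaw in Struts’ Jakarta multipart parser. Figuring out whether you’re exposed comes down to a version check. Here’s a simplified sketch of the kind of comparison a scanner signature performs; the affected ranges come from the Apache S2-045 advisory, but the code itself is just illustrative, not any vendor’s actual signature:

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '2.5.10.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# Affected ranges per the Apache S2-045 advisory for CVE-2017-5638.
# Fixed in 2.3.32 and 2.5.10.1.
AFFECTED = [
    (parse_version("2.3.5"), parse_version("2.3.31")),
    (parse_version("2.5"),   parse_version("2.5.10")),
]

def is_vulnerable(struts_version: str) -> bool:
    """True if this Struts version falls in a known-vulnerable range."""
    v = parse_version(struts_version)
    return any(lo <= v <= hi for lo, hi in AFFECTED)

print(is_vulnerable("2.3.31"))    # True -- last vulnerable 2.3.x release
print(is_vulnerable("2.5.10.1"))  # False -- the patched release
```

The real work, of course, isn’t the comparison; it’s finding every copy of Struts on your network, including the ones bundled inside other products.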
By May 2017, hackers were exploiting the bug to siphon data from an Equifax website. Equifax learned of the breach in July and engaged Mandiant, a respected security firm, to stop the bleeding and investigate.
Initially, Equifax tried to blame a low-level system administrator for failing to deploy the update. But that fall, Equifax’s CIO and CSO retired, and contemporary reports suggested the retirements weren’t 100% voluntary.
Additionally, as part of the fallout, Equifax owes you $125. Here’s the site where you can collect it. (The FTC is running the site, not Equifax.)
How the Equifax breach could have been prevented
As far as industry best practices are concerned, Equifax should have identified the vulnerability in their systems sometime in March and deployed the update to fix it soon after, no later than April. As someone who used to deploy these updates for a living, I can sympathize. It’s not an easy job. But neither is it impossible.
Realistically, most companies are slow to deploy updates to software like Apache Struts for a couple of reasons. Usually there is one team responsible for the servers themselves and another team responsible for the web application. Both of them think the other team should be responsible for updating Struts and making sure it doesn’t break something. Both of them are 100% confident it will break something they don’t know how to fix.
This causes a stalemate. Given my position at the time, I received lots of questions about this bug. The companies who asked me about it pushed the update anyway. Equifax did not.
Pushing patches for a living
I got started pushing patches in the late 1990s, working Y2K projects, and I was reasonably good at it. When monthly security updates came into being in October 2003, I was the most junior sysadmin on my team. I had a track record, and no one else wanted the responsibility, so it fell to me.
Pushing patches right
I spent the second half of the decade working on an Air Force contract, pushing updates to a system that tracked cargo planes and tankers. The system had to be up more than 99.999% of the time, and it was designed with massive amounts of failover to make that possible.
I patched the system live, during business hours. Security was paramount. To comply with the requirements of the contract, I pretty much had to have a clean scan of the entire system, with no missing updates, every 30 days. Every missing update required me to submit a plan to a Colonel or GS-15, who would sign off on the plan, certify that the risk was negligible, and accept responsibility for anything that happened in the meantime. Practically speaking, that meant I updated Java about once a year.
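That 30-day clean-scan requirement is really just a patching SLA, and it’s the kind of thing you can check mechanically. Here’s a minimal sketch of the logic; the function and parameter names are mine, not from any particular scanner or ticketing system:

```python
from datetime import date

SLA_DAYS = 30  # the contract required a clean scan within 30 days

def needs_waiver(patch_released: date, scan_date: date, patched: bool) -> bool:
    """True if a missing patch has aged past the SLA and needs sign-off."""
    if patched:
        return False
    return (scan_date - patch_released).days > SLA_DAYS

# Example: a fix released March 7 and still missing at an April 15 scan
# is 39 days old -- past the 30-day window, so a waiver would be required.
print(needs_waiver(date(2017, 3, 7), date(2017, 4, 15), patched=False))  # True
```

On that contract, every `True` meant paperwork in front of a Colonel or GS-15, which is exactly the kind of friction that keeps patch rates high.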
I was a senior-level system administrator, with an appropriate market-rate salary for the time. Patching was my primary responsibility. I shared responsibility with several other people for antivirus, and in emergencies they would pull me away to do general sysadmin work. But realistically, I spent at least 30 hours a week doing something directly related to patching about 500 servers. I estimate that in my sysadmin career, I closed 800,000 vulnerabilities.
I have helped other companies adopt my methods. Using best-of-breed tools and dedicating one person more or less full-time to patching, they’ve achieved pretty impressive results: they can take care of about 5,000 systems with an 80 percent success rate, closing over 800,000 vulnerabilities per year.
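The arithmetic behind that number is simple. The per-system figure below is my own rough assumption, just to show how the numbers hang together:

```python
systems = 5_000
success_rate = 0.80
# Rough assumption on my part: a typical system accumulates on the order
# of 200 patchable vulnerabilities per year (OS plus application updates).
vulns_per_system_per_year = 200

closed = systems * success_rate * vulns_per_system_per_year
print(f"{closed:,.0f} vulnerabilities closed per year")  # 800,000
```
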
A company the size of Equifax would need several people like that to do the job really well.
The more common approach to patching
In my experience working for and consulting with for-profit companies, patching is usually a part-time affair. They have a couple of security analysts whose job it is to scan their network and report the vulnerabilities. And then they dedicate the equivalent of 2-3 FTEs on the patching side, usually spread out across several individuals. None of them specialize in patching.
It’s not unusual for a company the size of Equifax to spend less than $50 per system per year keeping it up to date. And while I won’t name it, I know of one midwestern company that thinks spending $10 per system per year is too much. All I’ll say is that it isn’t Equifax.
How effective is it? It varies, but inevitably their success rate in deploying updates is closer to 20% than 80%.
And in my experience, if you can offer a company something that will cost $5 per system up front to improve effectiveness and might save more than that in reduced labor in the long run, they’re more likely to say no than yes.
Back to Equifax as an example
I would estimate that on my old Air Force contract, the government was spending around $500 per server per year to have me keep those systems up to date. That cost covered tools and labor. It’s a lot of money, but we did exceptionally well on our penetration tests.
Scaling that up to the size of a company like Equifax, it might cost $24 million a year to approach that level of security, as opposed to what they planned to spend in 2017, which may have been less than $2.4 million. Dedicating people full-time to patching and buying good tools on both the deployment and the security side is expensive.
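The scaling math is back-of-the-envelope. The fleet size below is my assumption, chosen to make the per-server figures line up, not anything Equifax has published:

```python
cost_done_right_per_server = 500  # roughly what my Air Force contract spent
cost_on_the_cheap_per_server = 50  # the typical industry ceiling I saw
servers = 48_000                   # assumed fleet for an Equifax-scale company

done_right = cost_done_right_per_server * servers
on_the_cheap = cost_on_the_cheap_per_server * servers
print(f"done right:   ${done_right:,}")    # $24,000,000
print(f"on the cheap: ${on_the_cheap:,}")  # $2,400,000
```
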
The problem is that the average cost of a data breach is $3.6 million, or $141 per person affected. It’s far cheaper to just spend a couple million bucks per year and write it off if something bad happens.
Equifax’s breach looks to cost the company $1.4 billion. That raises the stakes. If it costs $24 million a year to do the job right, there’s no incentive to do it when only $3.6 million is at stake. If more than a billion is at stake, throwing a couple million bucks at the problem sounds negligent.
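You can frame that shift in incentives as a crude expected-cost calculation. The breach probability here is purely illustrative, not an actuarial figure:

```python
annual_patching_done_right = 24_000_000
annual_patching_cheap = 2_400_000
breach_probability = 0.05  # illustrative: a 1-in-20 chance per year

# Compare total expected annual cost of the cheap approach against
# the cost of doing patching right, at two possible breach price tags.
for breach_cost in (3_600_000, 1_400_000_000):
    cheap_total = annual_patching_cheap + breach_probability * breach_cost
    worth_doing_right = cheap_total > annual_patching_done_right
    print(f"breach cost ${breach_cost:,}: do it right? {worth_doing_right}")
# At a $3.6M breach cost, skimping wins; at $1.4B, it looks negligent.
```
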
Is it enough?
I don’t know if $1.4 billion is enough. That’s 42% of Equifax’s revenue in 2018. It’s a stiff penalty but not ruinous. The penalty shouldn’t be ruinous, but it needs to be high enough to cause executives to shift their priorities. At my last job, working for a company comparable in size to Equifax, I had a VP try to cancel a security project of mine that would have cost $125,000 up front, over the objections of his own directors and senior directors. His own lieutenants thought the project would more than pay for itself. But he saw it as waste and tried to cancel it.
If he ever tries that again, the guy sitting at my old desk will inevitably bring up Equifax and its $1.4 billion loss. That might cast a dark enough shadow to make him act differently. But I don’t know for sure.