“This is basically what we were all worried about with Y2K, except it’s actually happened this time.”
What people were worried about with Y2K was nuclear weapons being launched and planes falling out of the sky. And it was nonsense, but bad things could have happened.
The good part is that the harm was mitigated for the most part through due diligence of IT workers.
This is similar to what actually would have happened if not for the diligence of IT workers fixing the Y2K code issues globally. Uninformed people were worried about missiles and apocalyptic violence, but IT workers just withdrew some cash and made sure not to have travel plans.
The difference here is that this was caused by massive and widespread negligence. Every company affected had poor IT infrastructure architecture. CrowdStrike's Falcon Sensor is just one product installed on their Windows servers; updates should go to test environments before being pushed to production environments. Dollars to donuts, all of the companies that were affected had incompetent management or cheap budgets.
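To put a picture on "test before production": even a dead-simple ring policy like the sketch below would keep a bad update off most machines. The ring names, hosts, and the `apply_update`/`healthy` callables are hypothetical stand-ins for illustration, not anything Falcon or any real patch-management tool actually exposes.

```python
# Minimal sketch of a "test before prod" rollout policy. Everything here
# (ring names, hosts, callables) is a made-up placeholder.

ROLLOUT_RINGS = [
    {"name": "test",       "hosts": ["test-vm-01", "test-vm-02"], "soak_hours": 24},
    {"name": "canary",     "hosts": ["branch-site-servers"],      "soak_hours": 24},
    {"name": "production", "hosts": ["everything-else"],          "soak_hours": 0},
]

def roll_out(update_id: str, apply_update, healthy) -> bool:
    """Push an update ring by ring, stopping at the first sign of trouble."""
    for ring in ROLLOUT_RINGS:
        for host in ring["hosts"]:
            apply_update(update_id, host)
        # Let the ring soak; only promote if it stayed healthy.
        if not healthy(ring["hosts"], soak_hours=ring["soak_hours"]):
            print(f"Halting rollout of {update_id}: ring '{ring['name']}' unhealthy")
            return False
    return True
```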
Millions of man-hours were spent making sure Y2K didn’t cause problems, and the only recognition those workers got was the movie Office Space.
Of the ones who were working at that time, there isn’t a single one I’ve spoken with who didn’t think Office Space was exactly the right tribute.
I’ll take it. I identified so hard with that movie. When I eventually die, I’ll do so knowing I’ve been seen.
I wonder if there would be any way to work it so that a dry concept like that could be made into a decent movie based on the actual events. They did it for Tetris.
Sure, but even the worst Y2K effects wouldn’t have had what lots of people were worried about, which was basically the apocalypse.
People who really should have known better were telling me that Y2K would launch the missiles in the silos.
We knew. However, we also knew there would be problems, so we emphasized the extremely unlikely scenarios to get the budgets to prevent the really annoying shit that might’ve happened.
We rarely disagree, but I’m gonna pull the “I work in the industry” card on you. A lot of hardworking people prevented bad things from happening whether big or small. We only look back at it as overblown because of them.
Are you really going to claim that we would have had a global thermonuclear armageddon if Y2K mitigation was a failure?
You’re focusing on the extreme unrealistic end of what people were worried about with Y2K, but the realistic range of concerns got really high up there too. There were realistic concerns about national power grids going offline and not being easily fixable, for example.
The huge amount of work and worry that went into Y2K was entirely justified, and trying to blow it off as “people were worried about nuclear armageddon, weren’t they silly” is misrepresenting the seriousness of the situation.
I literally said in my first comment:
The good part is that the harm was mitigated for the most part through due diligence of IT workers.
What more should I have said?
It’s not what more you should have said, but what less. It’s the “people were worried about nuclear armageddon” thing that’s the problem here. You’re making it look like the concerns about Y2K were overblown and silly.
No. I’m saying that something like today would have happened, only it would have been much worse, in that it couldn’t have been fixed in the space of hours or days.
Sure, but that’s not what people were worrying about at the time, which was my point.
Y2K wasn’t nonsense. It was unremarkable, ultimately, because of the efforts taken to avoid it for a decade.
20 Years Later, the Y2K Bug Seems Like a Joke—Because Those Behind the Scenes Took It Seriously
President Clinton had exhorted the government in mid-1998 to “put our own house in order,” and large businesses — spurred by their own testing — responded in kind, racking up an estimated expenditure of $100 billion in the United States alone. Their preparations encompassed extensive coordination on a national and local level, as well as on a global scale, with other digitally reliant nations examining their own systems.
“The Y2K crisis didn’t happen precisely because people started preparing for it over a decade in advance. And the general public who was busy stocking up on supplies and stuff just didn’t have a sense that the programmers were on the job,” says Paul Saffo, a futurist and adjunct professor at Stanford University.

What is worth noting about this event is how public concern grows and reacts out of ignorance. Just because a pending catastrophe results in something ‘less-than’ does not mean best efforts weren’t taken to avoid it. Just because something isn’t as bad as it could have been doesn’t mean it was a hoax (see: COVID-19). Additionally, just because something turns out to be a grave concern doesn’t mean best efforts didn’t mitigate what could have been far worse (see: inflation).
After the collective sigh of relief in the first few days of January 2000, however, Y2K morphed into a punch line, as relief gave way to derision — as is so often the case when warnings appear unnecessary after they are heeded. It was called a big hoax; the effort to fix it a waste of time.
That was written in 2019 about an event in 1999, and it’s apparent to me that not much has changed. We’re doomed to repeat history even when provided with the most advanced technology the world has ever known, able to pull up the full record of that history in the palm of our hands.
The inherent conundrum of the Y2K [insert current event here] debate is that those on both ends of the spectrum — from naysayers to doomsayers — can claim that the outcome proved their predictions correct.

I never said it was nonsense. I said what a lot of people were worried about was nonsense: stuff like it causing nuclear armageddon or crashing the global economy.
And this event today isn’t even what IT professionals were worried about. This is a big headache for them and a day off for a lot of other people. It’s not going to do the damage Y2K would have done had people not done enough.
One exception to that is the UK’s NHS. I feel like having IT outages for an entire country’s nationalized health service could probably lead to some preventable deaths. Though I imagine they hopefully have paper backups for the most important shit.
Real life Armageddon: Bruce Willis & crew return home and are greeted by boos and protestors with “waste of taxpayer money” signs. Can you imagine…
The United States would never send a crew up to stop an asteroid. If it’s a Dem president, SCOTUS would block it. If it’s Donald, he’d claim the asteroid is fake news and a Dem hoax, then the scoundrels in the House and Senate would obstruct any action via their little bunkers.
Work is borked so I get to paint Warhammer today.
Minis are for painting at unspecified times in the future, not now
My Mountain of Shame must be mined.
I love this phrase and I will use it.
I fully support your sacrifice o7
Be sure to post the results to the corresponding communities.
Meanwhile, friends at my old company run sites with CS and my current company doesn’t. I’m kicking back and having a great Friday.
So, hindsight is always 20/20, but were there warning signs or red flags that should have made it obvious this was going to happen, or did you just get lucky?
Red flags? Yeah: don’t use “security software” that just increases your attack surface. Why the fuck would you want to install a rootkit on your critical infrastructure?
The second one, as far as I can tell. But also, those calls are made above me and I have no insight into the decision-making. It could have been keen foresight by someone else.
Same. Had time for my trainees and used this for an extra learning session. :)
My office sent out this big message about people not being able to log in this morning. And I had absolutely no issues, and all of my tools are working. So I guess I’m stuck actually doing work.
Your work and their work, since they can’t log in.
Look at this “team” player hehe
Bro, why didn’t you lie 😭
Will this change how companies run their IT? Absofuckinglutely not!
It kinda sounds like this one’s more on the developers than the sysadmins…
Well, it’s really on the people who decided to use garbage software from some random, clearly incompetent company.
It’s literally one of the largest enterprise-grade endpoint protection packages. This isn’t an issue of a bad sysadmin, or even a bad developer, so much as an issue bigger than the industry itself. Up until now, as far as I knew, CrowdStrike had been recommended as a solid choice for endpoint protection.
Who else are you going to trust? Fucking Symantec? Ask VMware how being owned by Broadcom is, then get back to me.
No one gives a shit about their job anymore because they have no reason to. I hate to sit here and chalk everything in the world up to late stage capitalism, but jfc if it doesn’t seem like the recurring theme from hell. Something tells me the guys who work at Crowdstrike are no different.
Nothing like getting a full work day in before the office opens
FTFY: ‘Largest ~~IT~~ Windows outage in history’

I learned of the problems from the radio news on my way back home.
CrowdStrike, not Microsoft, is responsible. Let’s put blame where blame is due.
This could happen to any OS running cybersecurity software that needs permissions at deeper levels to protect the system.
So was this Crowdstrike’s fuck up and not Microsoft’s?
Probably, but the issue lies in the interface between Windows and the CrowdStrike software, which causes Windows to go into a crashing boot loop.
Closed source is great, I tell you. /s
It has nothing to do with closed source; this is entirely about a privileged application fucking around and not testing shit before pushing it globally. You can have the same issues with Linux too. My org has stated that our AV product is never to be installed on Linux machines because the vendor hosed tons of machines years back doing something similar.
High privilege security applications are always going to carry this risk due to how deeply they hook into the OS to do their job.
That is true. An obvious failure is that the update that broke everything was pushed everywhere simultaneously.
That’s what has me confused. I haven’t even stepped into an office in 20 years so I have no modern experience there, but I would have thought that a company with such a massive user base would release things at different times based on region. Is it because with security based applications they don’t want to risk someone having time to exploit a new vulnerability?
Is it because with security based applications they don’t want to risk someone having time to exploit a new vulnerability?
Pretty much. Given how fast the malware scene evolves and implements day 1 exploits, and how quickly they need to react to day 0 exploits, there’s kind of an unwritten assumption (and it might actually be advertised as a feature) that security software needs to react as fast as possible to malicious signatures. And given the state of malware and crypto shit, it’s hard to argue that it isn’t needed, considering how much damage you’ll suffer if they get through your defenses.
That being said, this kind of a fuck up is damned near unacceptable, and these updates should have been put through multiple automated testing layers to catch something like this before this got to the end user devices. I could see the scale of it taking them out of business, but I also wouldn’t be surprised if they managed to scrape by if they handle it correctly (though I don’t see the path forward on this scale, but I’m not a c-suite for many reasons). Like I said above, we had an incident years back that hosed a bunch of our Linux boxes, but the vendor is still around (was a much smaller scale issue) and we even still use them because of how they worked with us to resolve the situation and prevent it from happening again.
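For the sake of illustration, those “multiple automated testing layers” might look something like the sketch below: a boot-loop smoke test, then a small canary ring, then the global push. The function names and crash-rate threshold are made up; this is not a description of any vendor’s actual pipeline.

```python
# Rough sketch of an automated release gate between "update built" and
# "update pushed globally". All names and thresholds are hypothetical.

MAX_CRASH_RATE = 0.001  # assumed: abort if more than 0.1% of canary hosts crash

def gate_release(build, boot_test, deploy_to_canaries, crash_rate, push_globally):
    # Layer 1: does a machine with the new content even boot?
    if not boot_test(build):
        raise RuntimeError("build failed boot-loop smoke test")

    # Layer 2: small, mixed canary fleet soaks the update first.
    deploy_to_canaries(build)
    if crash_rate(build) > MAX_CRASH_RATE:
        raise RuntimeError("canary crash rate exceeded threshold; halting rollout")

    # Only then does the content go out to everyone.
    push_globally(build)
```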
Hard to tell; the news is running both of their names, so it looks like both?
I like how it’s the biggest IT issue and the best solution is to turn it off and on several times
They are saying “up to 15 times” now.
laughs in linux
Work was supposed to be slow today. D’: