Timekeeping, commonly, is done with a binary number that represents how many seconds have passed since midnight (UTC) on January 1st, 1970. Since the year 10,000 isn't 2^n seconds away from the epoch (1970-01-01T00:00:00Z) for any integer n (that is, it's not where a binary counter rolls over), any discrepancy between "year" as a 4-digit number vs. a 5-digit number is entirely a display issue (front end). The thing that does the actual processing, storing and evaluation of time gives absolutely no fucks about what "year" it is, because the current datetime is a binary number representing the seconds since the epoch.
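To make that concrete, here's a minimal sketch (Python, assuming a Unix-style system clock): the stored value is just an integer, and a "year" only appears when you format it.

```python
import time
from datetime import datetime, timezone

# The stored value is just a count of seconds since the epoch.
now = int(time.time())
print(now)  # e.g. 1735689600 -- no "year" anywhere in this number

# The year only exists once you *format* that number for display.
print(time.strftime("%Y", time.gmtime(now)))  # e.g. "2025"

# The last second of year 9999 is about 253402300799 seconds after the
# epoch: an unremarkable integer that fits easily in 64 bits.
last = datetime(9999, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
print(int(last.timestamp()))  # 253402300799
```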
Whether that's displayed to you correctly or not doesn't matter in the slightest. The machine will function even if you see some weird shit, like the year showing as "99100" because some lazy person hard-coded "99" as the first two digits and then displayed the current year minus 9900 after it. That scheme shows the year 9999 as "9999" (99 plus the leftover "99") and the year 10000 as "99100" (99 plus the leftover "100"): the date becomes 99 concatenated with the last two (now three) remaining digits.
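A toy version of that hypothetical hack (the function name is mine):

```python
def lazy_display(year):
    # Hypothetical hack: hard-code "99" and append the years past 9900.
    return "99" + str(year - 9900)

print(lazy_display(9999))   # "9999"  -- happens to look right
print(lazy_display(10000))  # "99100" -- the weirdness described above
```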
I get that it's a joke, but the joke isn't based on any technical understanding of how computers actually keep time.
The whole Y2K thing was a bunch of fear-mongering horse shit. For most systems, the year would have shown as "19100", "1900", or simply "00" (or some variant thereof).
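For the curious, here's roughly where those variants came from (a sketch; the first mimics C code that concatenated a hard-coded "19" with `struct tm`'s years-since-1900 field, the second mimics two-digit year storage):

```python
# Variant 1: "19" + years-since-1900, which is where "19100" came from
# in the year 2000 (tm_year == 100).
def c_style(tm_year):
    return "19" + str(tm_year)

print(c_style(99))   # "1999"
print(c_style(100))  # "19100"

# Variants 2 and 3: keeping only the last two digits of the year.
y = 2000 % 100
print(f"{y:02d}")    # "00"
print(1900 + y)      # 1900 -- if the code assumed the 1900s
```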
Edit: the image in the OP is also a depiction of me reading replies. I just can’t even.
You need to qualify your statement about Y2K being fear mongering. People saying all technology would stop (think planes crashing out of the sky) were clearly fear mongering or conspiracy theorists. People saying certain financial systems needed to be updated so loans didn't suddenly jump from the year 1999 to 19100, or back to 1900, were not fear mongering. It's only because of a significant amount of work done by IT folks that we have the luxury of looking back and saying it was fear mongering.
Look at this Wikipedia page for documented errors. One in particular was at a nuclear power plant: they were testing their fix but accidentally applied the new date to the actual equipment, which caused the system to crash. It took seven hours to get back up, and they had to use obsolete equipment to monitor conditions until then. Presumably, if the patch hadn't been applied, the same thing would have happened at midnight on January 1st, 2000.
Y2K was a real problem that needed real fixes. It just wasn’t an apocalyptic scenario.
You’re spot on. The vast majority of news coverage and “hype” from the general public relating to Y2K was all horse shit, but there were critical systems that did have issues and needed some work.
For the most part, the whole 19100 issue was a display bug and likely wouldn't have caused problems, and the same goes for 1900… Those are the examples people generally saw at banks and whatnot; it would look weird, but it mostly wouldn't create any actual problems, just confusion for a while until the system caught up.
I think there are a few examples of companies missing the January 1st deadline and ending up with stuff marked January 1900 for a bit. Otherwise they didn't have any significant issues.
Anything that involves a legally binding agreement would be critical, though. Since the date is part of the agreement's terms, it would need to be correct, and shown correctly.
Unless the "bug" literally crashed the system (which it really shouldn't have in most cases), like in your example, or it was connected to a legal contract, it really wasn't that big of a problem.
The media, and people in general, kept going on about it as if they knew what the technical problem was, when it was mostly conjecture and banter that made people worry unnecessarily.
What I'm trying to say is that Y2K was something that needed to be fixed, but the likelihood of it affecting any given person in society was very small. Those who were going to be affected generally knew who they were, and they were taking the steps required to fix the problem.
Planes crashing out of the sky wouldn't have been inconceivable. Say you have two air traffic control systems synchronizing: one handles dates modulo 100 (00-99, i.e. 1900-1999), the other handles them as epoch time. All of a sudden, the reported time-plus-position data for two different planes doesn't match up by a century, and the collision-projection software doesn't work right. I've seen nastier bugs than that, in terms of conceptual failure.
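A contrived sketch of that mismatch (both "systems" here are hypothetical):

```python
from datetime import datetime, timezone

def system_a_year(two_digit_year):
    # Legacy system: keeps years modulo 100 and assumes the 1900s.
    return 1900 + two_digit_year

def system_b_year(epoch_seconds):
    # Modern system: derives the year from seconds since the Unix epoch.
    return datetime.fromtimestamp(epoch_seconds, tz=timezone.utc).year

# At midnight on 2000-01-01 the two systems disagree by a century:
print(system_a_year(0))          # 1900
print(system_b_year(946684800))  # 2000 (946684800 = 2000-01-01T00:00:00Z)
# Timestamped track data from the two now fails to line up.
```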
At no point is that a theory about a “conspiracy” either, IDK why you’re bandying that term around.
Conspiracy is probably the wrong term. What I mean is that some (keyword: some) predictions were quite extreme and apocalyptic. See the fringe group response section for examples of what I was trying to convey.
The New York Times reported in late 1999, “The Rev. Jerry Falwell suggested that Y2K would be the confirmation of Christian prophecy – God’s instrument to shake this nation, to humble this nation. The Y2K crisis might incite a worldwide revival that would lead to the rapture of the church. Along with many survivalists, Mr. Falwell advised stocking up on food and guns”.
That’s what I meant by the sort of “conspiratorial” response. Maybe I should reword my post to make it more clear?
Y2K was not fear mongering. There were a great many systems, in industrial, finance, and infrastructure applications, that definitely needed to be addressed. You know, the things that keep modern infrastructure running. Of course there were consumer-facing companies that took advantage of it, but that was small in comparison.
It ended up not being a disaster, because it was taken seriously.
Y2K was definitely not only fear-mongering. Windows systems did not use Unix timestamps, many embedded systems didn't either, and neither did COBOL programs. So your explanation isn't relevant to this problem specifically, and those systems were absolutely affected by Y2K because they stored time differently. The reason we didn't have a catastrophic event was the preventative action taken.
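One concrete example of "stored time differently": Windows FILETIME counts 100-nanosecond ticks since 1601-01-01 UTC rather than seconds since 1970. A quick sketch of the offset between the two epochs:

```python
from datetime import datetime, timezone

filetime_epoch = datetime(1601, 1, 1, tzinfo=timezone.utc)
unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# FILETIME ticks are 100 ns, so 10 million of them per second.
offset = int((unix_epoch - filetime_epoch).total_seconds()) * 10_000_000
print(offset)  # 116444736000000000 -- the well-known FILETIME/Unix offset
```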
Nowadays you're right, there will be no Y10K problem, mainly because storage is not at the premium it was in the 60s and 70s when the affected systems were designed. Back then every bit of storage was precious and therefore omitted when not strictly necessary. Nowadays there's no issue even for embedded systems to set aside 64 bits for timekeeping, which moves the problem to 292277026596-12-04 15:30:08 UTC (with one-second precision), and by then we can just add another bit to double the range, or we're dead because the sun exploded.
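You can sanity-check that overflow date yourself (a sketch; Python's datetime can't represent a year that large, so this just does the arithmetic):

```python
max_seconds = 2**63 - 1                 # largest signed 64-bit value
print(max_seconds)                      # 9223372036854775807

seconds_per_year = 365.2425 * 86400     # mean Gregorian year
print(max_seconds / seconds_per_year)   # ~2.92e11 years past 1970
```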
The Microsoft Zune had a "Y2K9" bug: a lingering leap-year issue in its clock code, tripped up by the extra day in 2008, caused them to crash HARD on December 31, 2008 (day 366 of the leap year). I remember it being a pretty big PITA getting it back up and running.
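The widely reported root cause was an infinite loop in the clock driver's days-to-date conversion. A paraphrase in Python (an assumption on my part: adapted from the driver code quoted in postmortems, not the exact source):

```python
def is_leap_year(y):
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

def days_to_year(days, year=1980):
    # Convert a count of days since 1980 into a year.
    while days > 365:
        if is_leap_year(year):
            if days > 366:
                days -= 366
                year += 1
            # BUG: when days == 366 (Dec 31 of a leap year, like 2008),
            # nothing changes and the loop spins forever.
        else:
            days -= 365
            year += 1
    return year
```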
… any discrepancies in the use of “year” as a 4 digit number vs a 5 digit number, are entirely a display issue (front end).
That’s exactly how I read the meme. It would still require a change.
Whether that is displayed to you correctly or not, doesn’t matter in the slightest. The machine will function even if you see some weird shit,
I'm not sure if this is some nihilistic stuff or if you really think this. Of course nothing actually matters: the program will still work even if the time is a uint32 instead of a uint64. The machine, of course, will still work as well. Shit, your life will go on, the earth continues to spin, and this will for sure not cause the heat death of the universe. But aside from actual crashes and some functionality bugs, UI issues should be the ones you worry about the most. If your users are a bank and they need to date contracts, and you only offer 3 digits for the year? I think you'll agree with me that if users don't like using your program, it's a useless program.
My brother in Christ, there's more to time than just storing it. Every datetime library I've ever used only documents formatting/parsing support up to four year digits. If they suddenly also supported five digits, I guarantee it would lead to bugs in handling existing dates, as not all date formats could still be parsed unambiguously.
It won't help you that time is stored perfectly if none of your applications support it.
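A concrete example of that ceiling in CPython's own datetime, which documents support for years 1-9999 and whose strptime `%Y` matches exactly four digits:

```python
from datetime import datetime, MAXYEAR

print(MAXYEAR)  # 9999 -- the documented ceiling

print(datetime.strptime("9999-12-31", "%Y-%m-%d"))  # parses fine
try:
    datetime.strptime("10000-01-01", "%Y-%m-%d")    # five-digit year
except ValueError as e:
    print(e)  # the parse fails; and compact formats like YYYYMMDD
              # couldn't even be parsed unambiguously with 5-digit years
```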
Regarding Y2K, it wasn't horse shit: thousands upon thousands of developer hours were invested to prevent those issues before they occurred. Had that work not been done, a bunch of systems would have broken, because handling time isn't just about displaying 19 or 20.
The comment you're replying to is really frustrating to me. It annoys me when people are so arrogant and also wrong. Do they live in a perfect world where nobody stores dates as ISO 8601 strings? I've seen that tons of times. Sometimes it's even considered the appropriate format, for example in JSON-based formats.
I'm 100% with you - it's that dangerous level of knowledge where someone understands the technical background for the most part but is lacking real-world experience. Reminds me of the blog posts titled "Falsehoods programmers believe about X" - almost everything we touch in IT is complicated if you go deep enough.
But their style of commenting really jibes with Lemmy on technical topics. I can't count the number of posts where people proudly shout fundamentally wrong explanations of current AI models, yet any corrections are downvoted to oblivion. It's not as bad on non-AI topics, but I can't imagine anyone in the field reading the GP's comment and agreeing…
I would hope that these kinds of parsers aren't used in critical applications that could actually lead to catastrophic events; that's definitely different from Y2K. There would be bugs, yes, but quite fixable ones.
Regarding Y2K, it wasn’t horse shit - thousands upon thousands of developer hours were invested to prevent these issues before they occurred. Had they not done so, a bunch of systems would have broken, because parsing time isn’t just about displaying 19 or 20.
"There's no glory in prevention." I guess it's hard to grasp nowadays that mankind at some point actually tried to stop catastrophes from happening, and succeeded.
Even if such parsers aren’t used directly in critical systems, they’ll surely be used in the supply chains of critical systems. Your train won’t randomly derail, but disruptions in the supply chain can cause repair parts not to be delivered, that kind of thing.
And you can be certain such parsers are used in almost every application dealing with datetimes that hasn’t been specifically audited or secured. 99% of software is held together with duct tape.
True. But I wouldn't see this as dramatically more critical than the hundreds of other issues we encounter daily in software. Tbh, I'd be glad if some of the software I have to use daily had more duct tape on it…
I think you might be underestimating the potential impact.
Remember the CrowdStrike Windows BSOD? It caused billions in damages, and it's the absolute best-case scenario for this kind of issue. Our potential Y10K bug has a bunch of additional problems:
- You don't just have to patch one piece of software, but potentially all software ever written that's still in use, a bunch of which won't have active maintainers.
- Hitting the bug won't necessarily cause crashes (which are easy to recognize); it can also lead to wrong behavior, which takes time to identify. Now imagine hundreds of companies hitting the bug in different environments, each with its own wrong behavior. Can you imagine the amount of continuous supply-chain disruption?
- Fixes have to be thought about and implemented per application. There's no panacea, so it will be an incredible amount of work.
I really don’t see how this scenario is comparable to anything we’ve faced, beyond Y2K.
In this thread: mostly people that don’t know how timekeeping works on computers.
This is already something we're solving for. At this point it's like 90% or better ready to go.
See: https://en.m.wikipedia.org/wiki/Year_2038_problem
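For reference, a two-liner showing where the 32-bit limit actually lands:

```python
from datetime import datetime, timezone

# The largest value a signed 32-bit time_t can hold:
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 -- one second later, 32-bit counters wrap
```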