I don’t think the year 10,000 is a problem. There is a real “year 2038 problem” that affects systems storing Unix time in a signed int32, but it’s mostly solved already. The next problem will be in year 33000 or something like that.
There are so many of these problems that there’s an entire Wikipedia page dedicated to them.
In this thread: mostly people that don’t know how timekeeping works on computers.
This is already something that we’re solving for. At this point it’s like 90% or better ready to go.
See: https://en.m.wikipedia.org/wiki/Year_2038_problem
Timekeeping, commonly, is stored as a binary number that represents how many seconds have passed since midnight (UTC) on January 1st, 1970. The year 10,000 doesn’t land on any special boundary of that counter — it isn’t 2^n seconds after the epoch (1970-01-01T00:00:00Z) for some integer n, which is where an overflow would actually happen — so any discrepancy between treating “year” as a 4-digit number vs a 5-digit number is entirely a display issue (front end). The thing that does the actual processing, storing and evaluation of time gives absolutely no fucks about what “year” it is, because the current datetime is just a binary number counting the seconds since the epoch.
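A quick sanity check with Python’s standard library (just a sketch, nothing system-specific) shows that the first second of the year 10,000 is just another integer of epoch seconds, nowhere near the limit of a 64-bit counter:

    from datetime import datetime, timezone

    # Python's datetime only goes up to year 9999, so take the last second
    # of 9999 and add one to get the epoch timestamp for 10000-01-01T00:00:00Z.
    last_of_9999 = datetime(9999, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
    y10k = int(last_of_9999.timestamp()) + 1

    print(y10k)       # 253402300800 seconds since the epoch
    print(2**31 - 1)  # 2147483647 -> signed 32-bit limit (the 2038 problem)
    print(2**63 - 1)  # 9223372036854775807 -> signed 64-bit limit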
Whether that is displayed to you correctly or not doesn’t matter in the slightest. The machine will keep functioning even if you see some weird shit, like the year showing as “99100” because some lazy person hard-coded the display to print “99” as the first two digits, then take the current year, subtract 9900, and tack on whatever is left. That scheme shows the year 9999 as “9999” (99 + “99”), but the year 10000 becomes “99” concatenated with the three leftover digits: “99100”.
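Here’s a toy sketch of that hypothetical display hack — the “99” prefix and the 9900 subtraction are just the made-up example from above, not any real system:

    def lazy_year_display(year: int) -> str:
        # Hypothetical hack: hard-code "99" as the first two digits and
        # append whatever is left after subtracting 9900 from the year.
        return "99" + str(year - 9900)

    print(lazy_year_display(9999))   # "9999"  - happens to look right
    print(lazy_year_display(10000))  # "99100" - the clock is fine, only the display is wrong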
I get that it’s a joke, but the joke isn’t based on any technical understanding of how timekeeping works in technology.
The whole Y2K thing was a bunch of fear mongering horse shit. For most systems, the year would have shown as “19-100”, 1900, or simply “00” (or some variant thereof).
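For context, that “19-100” failure mode comes from APIs that report the year as a count of years since 1900 (C’s struct tm and JavaScript’s deprecated Date.getYear() both do this). A toy Python sketch of the naive display code, with the prefixing logic purely illustrative:

    def old_style_display(full_year: int) -> str:
        # Simulate an API that reports "years since 1900" and naive code
        # that glues "19" in front of whatever it gets back.
        years_since_1900 = full_year - 1900
        return "19" + str(years_since_1900)

    print(old_style_display(1999))  # "1999"  - works by coincidence
    print(old_style_display(2000))  # "19100" - the infamous Y2K display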
Edit: the image in the OP is also a depiction of me reading replies. I just can’t even.
Y2K was definitely not only fear-mongering. Windows systems did not use Unix timestamps, many embedded systems didn’t either, and neither did COBOL programs (which commonly stored the year as two decimal digits). So your explanation isn’t relevant to that problem specifically, and these systems were absolutely affected by Y2K because they stored time differently. The reason we didn’t have a catastrophic event was the preventative action taken.
Nowadays you’re right: there will be no Y10K problem, mainly because storage is not the constraint it was in the 60s and 70s when the affected systems were designed. Back then every bit of storage was precious and therefore omitted when not strictly necessary. Nowadays there’s no issue even for embedded systems to set aside 64 bits for timekeeping, which moves the problem to 292277026596-12-04 15:30:08 UTC (with one-second precision), and by then we just add another bit to double the range, or we’re dead because the sun exploded.
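A quick back-of-the-envelope in Python for both limits — the 64-bit figure is approximated with an average Gregorian year, since the standard library can’t represent dates that far out:

    from datetime import datetime, timezone

    # Signed 32-bit rollover: the classic 2038 problem.
    print(datetime.fromtimestamp(2**31 - 1, timezone.utc))
    # 2038-01-19 03:14:07+00:00

    # Signed 64-bit rollover, approximated with the average Gregorian year
    # (365.2425 days of 86400 seconds): roughly 292 billion years from now.
    SECONDS_PER_YEAR = 365.2425 * 86400
    print(1970 + (2**63 - 1) / SECONDS_PER_YEAR)
    # ~292277026596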
Not a storage problem, but still a possible problem in UIs and niche software that assume years have 4 digits or 4 characters. Realistically, though, if our civilization is even still around, AI will be doing all that for us and it won’t be an issue humans even notice.
This would be a great short story: for some reason the AI doesn’t realize there’s going to be a date issue and doesn’t properly update itself, causing it to crash. The problem then is that it was self-sufficient for so long that no humans know how to restart it or fix the issue, causing society to have a technology blackout for the first time in centuries.
Yes, it’s kind of a familiar sci-fi trope - a supercomputer that has no built-in recovery mechanism in spite of being vitally important. Like the Star Trek episode where they made smoke come out of a robot’s head by saying illogical things.