1111111111111111111111111111111 & Unix Epoch - Computerphile
- Published 31 Aug 2020
- The highest signed 32-bit integer is a ticking time bomb - sort of... Dr Tim Muller explains why it's his #MegaFavNumber
This re-upload features a slight repair to the audio where Dr Muller misspoke, saying 'unsigned' instead of 'signed'
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com
2020: I think most systems will be patched
292277026596: Still running legacy mainframes with COBOL
_If It Works Don't Touch It_
The COBOL systems will need to be patched again January 1, 10000. He's talking about Linux/C. Java was already 64 bits, but measured in milliseconds since 1970, so the year they run out is only 292278994 :-P
Implying humanity will still even exist by then haha
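The two cut-off years quoted in this thread can be sanity-checked with a few lines of arithmetic. This is a rough sketch using the mean Gregorian year of 365.2425 days; the exact overflow dates also depend on calendar details, but the years come out right:

```python
GREGORIAN_YEAR_S = 365.2425 * 86400   # 31,556,952 seconds in a mean Gregorian year
MAX_I64 = 2**63 - 1                   # largest signed 64-bit value

# A 64-bit time_t counts seconds since 1970...
year_seconds = 1970 + MAX_I64 / GREGORIAN_YEAR_S
# ...while Java's System.currentTimeMillis() counts milliseconds since 1970
year_millis = 1970 + MAX_I64 / 1000 / GREGORIAN_YEAR_S

print(int(year_seconds))  # 292277026596
print(int(year_millis))   # 292278994
```

Both figures match the ones quoted above, which is reassuring given how often they get mangled in comment threads.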
There's something satisfying about watching something you already know about.
and something unsatisfying knowing that there could have been another topic in its place :(
you schmuck ;)
I'm glad Tim pointed out all the work that was done on the millennium problem. I heard a politician say that there was no need to listen to the computer scientists who predicted disaster with the Y2K bug, because? "The year 2000 arrived and nothing happened!" That's actually what people think!
Politicians, politicians, the most reliable of all sources of information, doesn’t matter if it’s climate change or anything else...
Wait 13 years until it happens
The fallacy is called survivorship bias. I highly suggest reading up on it, it's very interesting.
Discounting a lot of patching and testing that occurred before 2000.
Five months ago, I heard epidemiologists saying, "If we do everything right, the future will say we overhyped Covid-19." Oh, if only.
To answer some misguided comments from the first version right off the bat:
No, the bit width of your processor or operating system has nothing to do with your ability to calculate time, no matter what you use to represent the epoch. 8-bit computers can calculate 64 bit numbers perfectly fine, they just have to do it in chunks.
Yes, 64 bit time can handle the entire history of the universe, 14 billion years (the age of the universe) is substantially less than 292 billion years (the farthest 64 bit time can count back).
Edit: 64 bit will also likely be the last time stamp ever needed. 292 billion years in the future is a large percentage of the expected lifespan of the universe. By that time, star formation will have ceased, and most stars will have entered dwarf phase. You only need 72 bits to get to the Degenerate Era, when there will be no more light in the universe.
_> 8-bit computers can calculate 64 bit numbers perfectly fine_
... just go grab a coffee, watch a movie or two, be back just in time to watch your processor compute the eighth and final byte as it's dripping sweat all over the external memory controller :)
@@unperrier5998 64 bits is 8 bytes. A 6502 (a very popular 8 bit cpu) can perform a 64 bit addition operation in 190 cycles. 1MHz is one million cycles per second, thus over 5000 of these 64 bit additions in one second. So unless you can grab your coffee within 0.00019 seconds, it will be long finished before you get back.
@@lorddissy I believe he is doing what humans call a "joke"
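The point about doing wide arithmetic in chunks can be shown directly. Here's a small Python sketch standing in for what an 8-bit CPU's add-with-carry instruction does byte by byte (the 6502 mentioned above runs exactly this kind of loop):

```python
def add64_8bit(a, b):
    """Add two 64-bit numbers one byte at a time, the way an 8-bit CPU
    with an add-with-carry instruction would."""
    result = 0
    carry = 0
    for i in range(8):                     # 8 bytes = 64 bits, least significant first
        byte_a = (a >> (8 * i)) & 0xFF
        byte_b = (b >> (8 * i)) & 0xFF
        s = byte_a + byte_b + carry
        carry = s >> 8                     # carry propagates into the next byte
        result |= (s & 0xFF) << (8 * i)
    return result                          # wraps modulo 2^64, like real hardware

print(add64_8bit(2**63, 2**63))  # 0 -- the final carry falls off the top
```

The width of the ALU only affects how many of these steps you need, never whether the computation is possible.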
They should set t=0 to our best estimate of the moment the universe began.
I wonder if we would also want to keep track of milliseconds when we use 64 bit, because you know might as well do that instead of being able to keep track of another 250 billion years, by then i assume we will be doing something else
“We’re swimming in bits.” Love it.
Actually... No. We are already having trouble dealing with DWORDs when doing communications, and QWORDs would be even worse. We can swim in bits, but the lanes aren't wide enough and people are constantly arguing over who gets to swim first.
@@FlameRat_YehLon I still love the expression but I take your point. And we haven't even talked about "endianness".
Friday 13th, 1901... What could possibly go wrong? 😁
Well something has obviously gone very wrong if there's no month.
December 13th has a lot in common with totalitarian regimes. 🙂
When I was a product tester at Hewlett Packard in 1998, I was tasked to do Y2K testing on the hardware and software my division was rolling out. Y2K testing didn't just check the rollover from 1999 to 2000, but also checked a few other conditions. Namely whether or not 2000 is a leap year (it was because it's divisible by 400, but not all century years are leap years), and we had to test the 2038 error. Our software ran on Windows 98, and the Epoch date in Windows 98 is Jan 1, 1980, so we also had to test roll over to 2048.
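The century rule described in this comment is easy to get wrong, which is exactly why it was on the Y2K test plan. A minimal sketch of the full Gregorian rule:

```python
def is_leap(year):
    """Gregorian leap-year rule: every 4th year is a leap year,
    except centuries, except every 400th year."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 2000 was a leap year only because of the 400-year exception;
# a naive 'divisible by 4' check happened to agree, but a
# 'centuries are never leap years' check got it wrong.
print(is_leap(2000), is_leap(1900))  # True False
```

Software that special-cased centuries but forgot the 400-year exception would have skipped 29 February 2000 entirely.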
Unix related Computerphile videos are my favorite. :- )
awk
Second this..
@@grainfrizz that was awk ward
AK
The problem is of course not strictly the processor or even the operating system. These can all take care of wider timestamps, even if that's a tiny bit slower on some of them.
The actual problem is existing file formats, network protocols, etc. - every place where we save or transfer a timestamp in a fixed format that was decided ages ago without room for wider timestamps, and where we'll still be writing those formats once timestamps pass 2038.
Every Dutch person can recognize another Dutch person speaking English
Haha I also heard it immediately
Hahaha yep!!
Jazeker
I wonder where he was from. It still makes it very weird that when there's a Dutch speaker, on a British channel, the graphics are American instead of European/British.
Sure, the graphics really don't matter. But we really need to stop putting America as the default. Especially in Europe.
@@Liggliluff it's called standardization...
I'm unreasonably proud of the fact that I was born within a week of the Unix Epoch, though every year it becomes a little less cool and a little more "damn, I'm getting old."
Were you born in positive or negative time?
Excellent explanation. Tim seems like a nice (and very clever) guy. His accent instantly gave away that he's Dutch though! Haha. Love it.
We don’t have to wait until 2038 for a special Unix time moment: 30 bits will flip on 2021-01-14 at 08:25:36 UTC, when it reaches 0x60000000. The last 30-bit flip was at 0x40000000 on 2004-01-10 at 13:37:04 UTC.
13:37
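Those bit-flip moments are easy to verify. A sketch using Python's `datetime`, with the hex constants from the comment above:

```python
from datetime import datetime, timezone

def epoch_to_utc(ts):
    # Convert a Unix timestamp to a timezone-aware UTC datetime
    return datetime.fromtimestamp(ts, tz=timezone.utc)

print(epoch_to_utc(0x60000000))  # 2021-01-14 08:25:36+00:00
print(epoch_to_utc(0x40000000))  # 2004-01-10 13:37:04+00:00 -- leet o'clock
```

The 2004 flip really did land at 13:37 UTC, which is presumably why the reply above is grinning.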
I know some legacy system is still going to be running the 64-bit dates when 292277026596 rolls around.
But I'll be retired by then, so not my problem.
"Never underestimate the lifespan of a line of code."
Tbh that was probably Ritchie and Thompson's line of thinking in 1972
Retired in 292277026596? I may have to work one or two more years in order to get my full retirement pension.
At least the sun is gone by then
It's not the biggest double mersenne prime. It's the second biggest. Biggest known one is 2^127-1.
In case you were also wondering: a double Mersenne prime is of the form 2^(2^prime(n) - 1) - 1. And yes, there are only finitely many of those.
Why are they finite?
we demand evidence!
My favorite mersenne prime is 170141183460469231731687303715884105727, which is 2^127 - 1, or 2^(2^(2^(2^(2) - 1) - 1) - 1) - 1. It’s the only known quad-mersenne prime, and one of only 2 known triple-mersenne primes (127).
According to doublemersennes.org, there are actually 5 known double mersenne primes. The 5th is 9223372036854775807
@@StylishHobo I'm not sure where you're seeing this but it's unlikely to be true as 9223372036854775807 = 7^2*73*127*337*92737*649657.
According to the OEIS there are four known double Mersenne primes and according to their Wikipedia entry, it's conjectured that that's all of them. And, as we know, there's only finitely many. I'm not aware of any proof that the four we know are all there are but that's where we are right now from what I gathered.
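For anyone who wants to check these claims themselves, the Lucas-Lehmer test settles primality of Mersenne numbers almost instantly. A sketch: the four known double Mersenne primes are 2^(2^p - 1) - 1 for p = 2, 3, 5, 7, i.e. Mersenne exponents 3, 7, 31 and 127, and the factorization quoted above rules out 2^63 - 1:

```python
from math import prod

def is_mersenne_prime(p):
    """Lucas-Lehmer test: for an odd prime p, 2^p - 1 is prime iff
    s_(p-2) == 0 (mod 2^p - 1), where s_0 = 4 and s_k = s_(k-1)^2 - 2."""
    if p == 2:
        return True  # 3 is prime; the loop below only covers odd p
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents of the four known double Mersenne primes:
print([is_mersenne_prime(p) for p in (3, 7, 31, 127)])  # [True, True, True, True]

# 2^63 - 1 isn't even a candidate (63 is composite), and indeed it factors:
print(prod([7, 7, 73, 127, 337, 92737, 649657]) == 2**63 - 1)  # True
```

Even the 127 case only takes 125 modular squarings, so this runs in a blink.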
5:19 actually, a timekeeping program at my dad's company did crash, for a single day.
most bugs were fixed, it was just one day that it didn't work.
Knowing how legacy systems manage to hang around forever, I'm sure there will still be something running with the 64 bit Unix time when it runs out.
Ok, this guy is knowledgeable because he can present such a specific topic with relatively easy to follow explanation. Props to my man.
Am I the only one who heard that number and immediately went “Signed int32-max?”
Oh, you mean 2 to the power of 31 minus 1?
Yes. You basically invented computer science. Nobody even knows what bits are.
I read the title and I thought about it
Casper S Well, only you... and about 48,253 others, as of this writing...
We are here mate. U not alone 😌
I just hope we get Half Life 3 before we have to patch yet another millennium bug
This was the first video I saw on this chanel simply loved it, and Subscribed.
Hey, what time is it?
-It's Pi o'clock
Pi, you weren't invited, but here you are... again.
Actually, it was invited by the friend of a friend of a friend of a friend of a friend of a friend of a...
yes, its like pi is going round in circles
What's your favorite number?
"Let me check my papers."
Victor Ekekrantz I’m sure he could have rattled off 0x7fffffff from memory if necessary. 🙂
Fascinating. Love this channel!
I watched Y2K roll over in three time zones: New Zealand (UTC+13, first place west of the International Date Line with lots of computers), UTC and local (Toronto, UTC-5). No issues noted. A couple of years earlier I had done a Y2K audit in my apartment. My VCR failed miserably.
The systems I work with now have GPS week rollover issues that we have to keep an eye on. I expect to be long retired by the next GPS rollover or the 2038 Unix time rollover.
People tend to forget that most of the reason we DIDN'T have many issues with Y2K was BECAUSE of the huge investment in fixing everything beforehand. Many systems absolutely would have failed, had we done nothing about it.
@FeatherDerg Ayup! I was doing contract sysadmin in the late 90s and we did an awful lot of testing to ensure that no one noticed!
The problem was real and the media hyped it as they always do. The problems were fixed in time, everything clicked over from 31 December 1999 to 1 January 2000 and people (other than the ones who actually did the work) concluded it was some sort of OMG WERE ALL GOING TO DIE conspiracy theory.
@@marsgal42 "hey, the people need a reason to party"
In UTC+13 the time will be 16:14:08 on Tuesday, 19 January 2038 when the Unix Epochalypse happens
Cute how he was momentarily geeking out about PI o'clock.
That's max cash stack on osrs :P
Haaang onnn, is it really pi o'clock?? I need to know all the digits!
If you're counting in seconds then you aren't going past seconds. Unfortunately, eight seconds past 3:14 isn't pi, but who's going to round past the minutes anyway just to celebrate pi o'clock? You get a full minute to celebrate it
@@ronaldmullins8221 8 seconds is 0.13 minutes however (3:14.13), which is pretty close... however, 3:14:09 (3:14.15) gets even closer, but that's when 2038-01-19 3:14:08 becomes 1901-12-13 03:14:09
Remember. To a fair accuracy, pi seconds is a nanocentury.
I was hired to deal with the Y2K issue.
When all was done and dusted we only had 2-3 standalone parallel port print servers become unusable. We got a letter of commendation from the mayor for it too.
I'm pretty sure no system is going to be patched by the end of the 64-bit Unix time, because everyone will have assumed someone else did it already, or that it's "so far away".
My hope is that by then computing will have moved on to arbitrary-precision arithmetic, with bignums and stuff
The human race will be long gone by the end of the 64 bit Unix
Well... the universe will be in excess of 20 times its current age... so I doubt any remnants of humans will be around to worry about it. Everyone will have died assuming someone else was going to do something about the sun running out of fuel.
I already knew the computerphile bits, but the uncanny mathematical connections were totally new for me! I wonder if (2^63)-1 also has some interesting mathematical properties?
There will be a lot of leap seconds before the 64-bit epoch ends.
UNIX doesn't count leap seconds - unlike Windows. Every UNIX year has the same number of seconds, but some seconds are a tiny tiny bit longer (or shorter) on the evening of December 31st in years with a leap second. After the turn of the year no one has to bother with past leap seconds anymore. This is far more elegant, because Windows has to keep track of all past leap seconds and can't know if and when they'll occur in the future. (They are inserted whenever needed.)
iirc most unix systems are no longer using a signed 32 bit integer to represent time. You can check for yourself on your own unix system (i'm pretty sure you can find it in some of the .h files included in the time standard library). My system (and probably yours) for example will "wrap around" in well after my lifetime, or even several hundred generations after my lifetime. I'm glad the video touched on this at the end.
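To see the 32-bit limit this comment is talking about, one can simply feed the largest signed 32-bit value into a timestamp conversion. A sketch (`fromtimestamp` with an explicit UTC zone keeps the result platform-independent):

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # 2147483647, the video's number

# The last second representable in signed 32-bit Unix time:
last_second = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(last_second)  # 2038-01-19 03:14:07+00:00 -- one second later, overflow
```

On a system with 64-bit `time_t` the same call happily accepts values far beyond 2038, which is the wrap-around-after-my-lifetime behaviour the comment describes.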
I remember in college the first time I wanted to calculate the US national deficit in BASIC. It was too high to count without doing a few tricks to get past the 32-bit number barrier!
There's always that pesky embedded systems code - some hardware in some circles gets used for a very long time, e.g., nuclear missile silos where 8-inch floppy drives were still in use until very recently, or the Boeing 747, which gets system software updates through 3 1/2-inch floppy drives. Now, those two examples don't have an internal timekeeping aspect to them, but might there be embedded systems lurking somewhere that do, that are still using 32-bit epoch time, and where the calendar date is relevant in some manner to their operation?
It makes you wonder. In a thousand years from now, people will still use 64 bit integers to calculate the time in their computers, and the reason will be so historical, so distant... yet, for us, it is happening right now. I am having kind of a "whoa dude" feeling here.
A similar problem existed for file sizes in bytes. An unsigned 32-bit integer is nowhere near enough space for expressing the size in bytes of modern persisted objects.
But at least file sizes ought not to be negative, so you get up to 4G instead of only 2G.
Menachem Salomon right. That is why the file size variable was always unsigned.... and a filesystem with only 4 gigabytes total storage would be very sad. My first UNIX box was a 32 bit alu, and a 10 megabyte disk....
If you run a unix system and still use a fat32 drive as the easiest means to share a drive with windows, it's still a problem lol
The camera shake causes the left side foreground grass to wiggle oddly with respect to the left side background grass (the grass past the water). Very weird.
Of course what people like to do these days with all those bits is count _nanoseconds_ since 1970 for added precision. Those will still last till the year 2262 though, so no immediate cause for concern.
This precision is useful in more scientific environments, for example timestamping the data from observations at different locations (earthquakes/shockwaves/gravitational waves/black hole imaging). In most cases it probably is too much precision to carry around, but on the other hand I think just counting seconds is outdated nonetheless. At least it should be milliseconds.
Well, since nanosecond accuracy may not be readily achievable right now, many systems provide microsecond or 100-nanosecond resolution for their timekeeping. For example, an internal structure of Windows called FILETIME uses a signed 64-bit integer to count the number of 100-nanosecond intervals since Jan 1, 1601; the overflow of that will happen on Sep 14, 30828. And yes, as the structure name suggests, this is the time format used in the NTFS file system.
@@pihungliu35 Well, Windows always had a knack for weird epochs. But yes, 1ns precision might not be actually achievable on most systems but it's often used as a unit for precise clocks regardless because... might as well, I guess.
@@danielroder830 milliseconds and microseconds are used in many APIs I've seen but on a more general level, nanosecond seem to establish themselves as the standard because you can always disregard the precision you _don't_ need.
Python for example used to primarily use floating-point seconds but they now added additional calls to get nanosecond bignums.
I think it should have been obvious back in 1970 that 32 bits for time wasn't enough. Integer second resolution has never been sufficient. IMO, any decent filesystem should guarantee that no two files get the same timestamp, unless the user explicitly copies a timestamp from one file to another. On the other hand, using nanoseconds, where it'll blow up again only 242 years from now, isn't a particularly wise use of 64 bits, either. Something a little less precise than nanoseconds would have been better, perhaps microseconds.
Nice to hear about the Windows FILETIME. That sounds like a well-chosen setup.
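The FILETIME layout described above differs from Unix time only in tick size (100 ns) and epoch (1601 vs 1970), so converting between them is one scale and one shift. A sketch; the 11,644,473,600-second constant is the gap between the two epochs:

```python
EPOCH_GAP_S = 11_644_473_600   # seconds from 1601-01-01 to 1970-01-01 (UTC)
TICKS_PER_S = 10_000_000       # FILETIME ticks are 100 ns

def filetime_to_unix(ft):
    # FILETIME ticks since 1601 -> Unix seconds since 1970
    return ft / TICKS_PER_S - EPOCH_GAP_S

def unix_to_filetime(ts):
    # Unix seconds since 1970 -> FILETIME ticks since 1601
    return int((ts + EPOCH_GAP_S) * TICKS_PER_S)

print(unix_to_filetime(0))     # 116444736000000000 ticks at the Unix epoch
```

The gap constant itself is just 369 years' worth of days (including 89 leap days) times 86,400 seconds.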
Oh man, weird coincidence. Last night I made a Simple Cellular Automata thing in JS and I represented each row as an integer and used binary operations to generate each new row and then I was distressed to discover that the program just started screwing up if I made the field more than 31 “pixels” wide. It was crazy! Of course what was happening was that the number was just going over the top.
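What bit this commenter is the same wraparound at the heart of the video: JavaScript's bitwise operators coerce their operands to signed 32-bit integers. A sketch emulating that coercion in Python (whose own integers never overflow):

```python
def to_int32(x):
    """Keep the low 32 bits of x and reinterpret them as a signed
    32-bit value -- what JS bitwise ops (and, in practice, C int
    overflow) do to out-of-range numbers."""
    x &= 0xFFFFFFFF
    return x - 0x1_0000_0000 if x & 0x8000_0000 else x

print(to_int32(2**31 - 1))   # 2147483647  -- the video's number
print(to_int32(2**31))       # -2147483648 -- one step further: welcome to 1901
```

A CA row wider than 31 cells pushes bits into the sign position, which is exactly the "going over the top" the comment observed.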
The year 292 Billion. Greater than the likely maximum age of the Universe. Yep, that 64 bit should probably future proof us.
"Nothing crashed on y2k" nope. I was told, at the time, that the Pakistani Stock Exchange crashed, at the very least, and several other systems around the world. Nothing particularly dramatic, and nothing like the end of the world it was hyped to be, but there were for sure some systems that went down. And there were a number of them that were turned off during, thereby avoiding the problem, as well.
Yeah a bunch of trains broke in Norway too
I don't know about the exchange, but there was supposed to be a control signal (every minute in fact) coming from a nuclear reactor which didn't arrive at the control station and it wasn't until 1 or 2 minutes later that they started coming again. Ironically, this could've been completely unrelated, but when you have a nuclear reactor several sphincters were probably clenched. :P
I think 2 vending machines in Australia stopped working. Source: internet historian
In other news, Pakistan has a stock market?
Interesting, that 292 billionth year is the same (or close to) the number of milliseconds that can fit into a 64 bit integer.
There is an error at 4:52: the first digit should be 0, not 1. And there might only be 31 digits (it is hard to count)
You're right, there are only 31 digits.
To be clear this is talking about the "new" date in red.
edit: this is wrong go read the following comments on why
@@davechen4979 Actually no, there should be 32 digits, and the one on the far left should start as a 0 and then become a 1 when it goes red
That's not an error, it's OK to leave out leading zeros (although in this case it would help the explanation)
@@bcnelson ah, ok. i really don't know what I'm doing lol
"I think will be solved by the time that comes around"
Where have I heard that before...? :)
0:28 this number is not the biggest known double Mersenne prime, as (2^127)-1 is also a double Mersenne prime
thankyou for this information
At 5:00, such a slap in the face that the date rollover ends up on a Friday the 13th...
I know of two instances of this issue:
1. When Gangnam Style reached half of that many views and broke YouTube for a while.
2. The 32-bit signed integer is the maximum value that can be stored in games like RuneScape, and therefore the most money you can have in the game's currency. Of course this only affected exceptionally rich players, although the way inflation is rampant in the RS3 economy, more and more players will be affected. The workaround is to use items with guaranteed values like shards, or even rare items.
March 1 2000 epoch with a 64 bit timestamp. Mathematically the best possible combination for the Gregorian calendar.
For more epoch hype, you can celebrate epoch 1600000000 coming up next Sunday (September 13, 2020 12:26:40 PM GMT+00:00)
do a video on fpgas/cplds, and verilog or another language.
imagine the people in the future when they encounter this bug and the whole society crumbles
Jan 1, 1970 is a very special date.... it is exactly 165 days after the first moon landing. :D
Brody: "You're gonna need a bigger bit."
So how do Unix systems represent dates prior to 1901 with only 32 bits?
So by the time we reach the limit of 64-bit time, we will already have fixed the problem with the 32-bit limit? How does that help?
We're never going to get to the limit of the 64-bit time. It's over 200 billion years from now.
There is still an error: at 4:52 the number should be
01111111111111111111111111111111
and not
11111111111111111111111111111111
Or we could start keeping track of time in milliseconds with 64bit integers :o
even though we may already be doing that.
I imagine there was some discussion around which channel to put this on lol. The beginning feels like a Numberphile video, but the topic is clearly computer related. Either way, I never realised that 2^31-1, or the top half of a signed 32 bit int, was a prime.
discussion? im sure it was a full on fist fight. this is THE number (over 1M) any computer scientist, hacker, or programmer is going to immediately pick as their favorite.
2^31 - 1 is prime. 2^32 certainly is not, for obvious reasons ;)
This is 100% Computerphile. The topic is about a Unix standard. Many standards are built on top of the Unix time standard, but are already ready to handle this change. Case in point: your browser's implementation of JavaScript, which maxes out at 8.64e15 milliseconds (roughly September 13, 275760)
Niosus Oops, that was quite a silly mistake lol.
I can already hear the screams of people worried about the Y292277026596 bug when their replicators stop working and the teleportation devices stop working and people will have to grow food and walk around like prehistoric morons
This guy is very funny, yet clever and informative. 👍
Speaking about time, yet unable to use 24h format. That's just sad.
Can anybody explain to me why it is the 4th number and not the third??? 0:22
Could we have a video on Boltzmann machines, thanks =)
Most unix implementations switched to 64bit time a long time ago in a universe we all live in.
When switching to 64 bits, why not represent time as milliseconds since a specific date and time?
It would be easy to do so - obviously the OS knows how many milliseconds have passed, as you can already query the same functions with a little extra and get it.
But most programs just need the date type widened at its definition and a recompile (and/or all references to 'get32bittime' changed to the 64-bit call throughout the code), and the code will build around it (assuming no cheaty hacks, which rarely happen with time calls).
Changing the result by a factor of 1000 means editing logic, introducing the human error of missing it or doing it twice (did you store it as milliseconds? How does it handle prior data? Do you factor before or after comparing two dates? And some methods are really long).
Over time there might be a change in convention, e.g. preferring a system-provided millisecond datetime call over a second-based one, but it's unlikely something defined as "get seconds" gets a hard break to milliseconds... Windows might do a "things aren't backwards compatible from this version on", but Unix tends to avoid that ;)
Hang on, at 0:25, are you sure 2^31-1 is the biggest? Pretty sure it's 2^127-1 instead
largest signed 32-bit integer, it's in the text a second later
@@HexPortal he says it's the largest double Mersenne prime, which isn't true; it's the second largest of the 4, with the largest being 2^127-1
Hey man I’m a fan of max cash too
6:49 I hope that’s not the correct date. By then we will need to have moved away from the Gregorian calendar as it, too, fails to exactly match the year length.
The year-length mismatch will get particularly bad when the Sun goes Red Giant and swallows the Earth. We'll probably switch to the Ow, Ow, It Burns! calendar.
If we do not become extinct by then, then our species will be using millions of different calendars on different planets.
@@csbruce Oops, yeah, I misread by a couple orders of magnitude and thought it was only hundreds of *millions* of years.
Why don't we call the turn over of 64-bit time the Mega-millenium bug.
because it's not related to mega or millenium?
So in 292 billion years when the entire universe has been converted to one big machine-substrate consciousness, the 64-bit legacy will finally come back to bite.
And that would be the downfall of our AI overlords.
Pff...
How many systems are using 4-byte integers for date storage?
I love genius..
my favorite number, hold on i dont know it
Anyone who ran the STOCK MACHINE GUN will recognize this.
I am a time traveller from the year 292 billion... we beg you, use 128 bits! The fate of the future depends on it.
5:12 JavaScript actually avoids this problem. The built-in Date object supports values up to just a little below 2^53. To be exact, JavaScript allows for 100,000,000 days before or after Jan 1 of 1970.
This works because JavaScript uses 64-bit floats (IEEE 754 doubles). 53 of those 64 bits form the significand, which can exactly represent integers up to around 9 quadrillion (9.007 quadrillion).
This is how your game console, smart phone, tablet, cable set-top box, etc. calculates accurate time and date.
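The 53-bit claim is easy to demonstrate: above 2^53, doubles can no longer distinguish consecutive integers, which is why JavaScript caps its Date range safely below that. A sketch:

```python
# IEEE 754 doubles have a 53-bit significand: integers are exact up to 2^53
exact_limit = 2.0**53                 # 9007199254740992.0

print(exact_limit == exact_limit + 1) # True: 2^53 + 1 rounds back down to 2^53
print(2.0**52 == 2.0**52 + 1)         # False: still exact below the limit

# JavaScript's Date range, +/- 8.64e15 ms, sits comfortably inside 2^53
print(8.64e15 < exact_limit)          # True
```

So every millisecond tick inside the allowed Date range is representable exactly, with no rounding drift.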
Ok 2 things:
1. Most databases can handle timestamps from way before 1970 or even 1901, so do they already use bigger numbers? Is this more of an OS-level problem?
2. even with 64 bits are we still counting from 1970? Or would starting from 0 make more sense? I guess it will still be 1970 to reduce the amount of refactoring needed?
Many databases count time down to the millisecond, or even nanosecond (I don't know of any that go down to the picosecond, but the CPUs do), so they would have run out of bits a long time ago had they only used 32 bits. (86.4B ns/day is already more than 32 bits, and even at 86.4M ms/day, 32 bits only covers about 50 days.) So they've been using 64 bits to represent a DATETIME for a very long time.
If the OS uses 32-bit timestamps for file times, every program on that OS will have a problem, but for internal processing, the time library each program uses may or may not come up against it. So databases have to be written carefully, but I'm guessing they were.
Anyway, we always need an epoch, or a start time. Set a timestamp to 0, and your program has to translate it into some sort of readable date. 1970 was convenient in that "current time" would always be positive, but whatever moment you pick will be somewhat arbitrary. (Note that any date before 1752 will have to deal with the
Gregorian calendar adjustment, and any date before 1901 will mess up simple leap year algorithms.)
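On the first question: Unix time itself is signed, so dates before 1970 are simply negative timestamps; databases that go back further just use wider or different representations. A sketch showing the negative half of the 32-bit range:

```python
from datetime import datetime, timezone

# -1 is the second before the epoch...
print(datetime.fromtimestamp(-1, tz=timezone.utc))
# 1969-12-31 23:59:59+00:00

# ...and the most negative signed 32-bit value is the video's 1901 date
print(datetime.fromtimestamp(-2**31, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00
```

Anything earlier than that simply doesn't fit in a signed 32-bit timestamp, which is why birthday-style calculations need a different representation.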
Friday the 13th, 1901... is it a coincidence?
Please turn on auto subtitles
Gogo more videos;) nice channel
Lekker man Tim
Ashton Kutcher...Kevin Malone....Dr Tim Muller - That's all I want to say. Thank you.
Who remembers the first version of this video from 2013?
Solution to epoch issues: whatever the exponent x in 2^x is, square it to get 2^(x^2).
How about using 512-bit counting from the beginning?
Breaking News! At Jan 19, 2038 3:14:08 AM, the 32-bit Unix clock dies, resetting 32-bit computer time to Dec 13, 1901.
My #MegaFavNumber too :)
According to OEIS there are 8 double Mersenne primes, of which 170141183460469231731687303715884105727 is the largest. 2147483647 is the fifth largest, sorry.
Not quite. It is the eighth Mersenne prime, which is a prime of the form 2^n - 1; here, 2^31 - 1. A double Mersenne prime is of the form 2^(2^p - 1) - 1, where 2^p - 1 is itself a Mersenne prime. There are only four known double Mersenne primes, and this number, 2^(2^5 - 1) - 1, is the second largest of them.
VMS will still rule, long after *nix is time-bombed.
Something that had always puzzled me is if there is a standard way unix uses to define dates before December 1901 using the epoch with 32 bits. Does it store a 64 bit number in those cases? Presumably birthdays for example would have extended beyond the 32bit range when the epoch was created.
Unix, not that I am aware of, but rather than create dedicated memory, addition and subtraction more than suffice to "tell the computer that it is before 1970". The only time this will be an issue is in boot or with code that rewrites itself. Boot can be solved easily by adding in another loop, bad science, effective result and code that rewrites itself is usually a virus.
This Unix time is your system clock. When are files changed, what's the uptime, when do scheduled tasks need to run, etc. It's used for times which are relevant to the operation of the computer. You will never encounter a file edited before 1901 or schedule something out at that range. While I'm sure at the time many programs used Unix time to represent time, these days that's not really the case by default. If you need high precision times, or work with huge ranges, you need to use some other representation. That becomes more and more relevant as Unix time does not know about leap seconds, and certainly doesn't know about historic changes to the calendar system. Things get really messy the further you go away from today in either direction. Luckily most programming languages have libraries available that solve these tricky problems for you.
I had to write a program to check people's birthdays against the academic year to see if they were mature students or not, and the easiest thing to do was convert them to Unix time and subtract. Well, not only did it not work for people born before 1901, it didn't even work for people born before 1959! The Perl date conversion function I had couldn't handle negative Unix times more than 10 years in size.
You would never directly use the Unix Epoch time to calculate time before 1901 pretty much.
There are data structures that utilises multiple variables to store and handle such things.
You just need a list of many 32-bit slots to store any date; it just requires some processing to get a complete date out of it. You will, for example, never get a representation of how many seconds the year 0 is from now out of a single 32-bit number.
You would need 64 bit to store that number.
While they're at it, why not measure the number of microseconds since Jan 1, 1970 at 12:00:00.000001 AM? We'll run that clock out a lot sooner... about the year 292,277.
Exactly my thought. We don't need all that range, but we could use some extra precision.
Operating systems already measure time at that and even higher precisions. The Unix timestamp in second is just used for data storage and transfer, and in many programs when they need to calculate stuff where "human clock precision" is more than enough. One famous example are file timestamps. We don't need to know the exact nanosecond a file was created, one second (or for DOS: 2 seconds) is just fine. And only rarely do we (need to) keep the clocks of different computers synced up to better than 1-2 seconds anyway.
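The trade-off this thread is circling (range vs resolution) fits in one small formula. A sketch computing the year a signed counter of a given width and tick rate overflows, assuming a 1970 epoch and mean Gregorian years:

```python
SECONDS_PER_YEAR = 365.2425 * 86400   # mean Gregorian year

def overflow_year(bits, ticks_per_second, epoch=1970):
    """Approximate year in which a signed `bits`-wide counter of
    `ticks_per_second` ticks since `epoch` runs out."""
    max_ticks = 2**(bits - 1) - 1
    return epoch + max_ticks / ticks_per_second / SECONDS_PER_YEAR

print(int(overflow_year(32, 1)))       # 2038   -- classic 32-bit time_t
print(int(overflow_year(64, 10**9)))   # 2262   -- 64-bit nanoseconds
print(int(overflow_year(64, 10**6)))   # 294247 -- 64-bit microseconds (~292,000 years of range)
```

Every extra factor of 1000 in resolution costs roughly 1000x in range, which is why second-resolution 64-bit time reaches the year 292 billion while nanoseconds only reach 2262.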
I had to check his name, he sounded so Dutch :)
he must be dutch
32 bit pie o'clock.
64 bit pie o'clock again?
Thing is, what if you want millisecond and microsecond data?