In the Overview of cross-architecture portability problems, I have dedicated a section to the problems resulting from use of 32-bit time_t type. This design decision, still affecting Gentoo systems…
This seems overblown; we've faced these things before.
The straightforward path is to add new calls and structs while leaving the old code in place, then have tests that return -1 for time32_t and see what breaks.
It's not pretty, but this is life in the new epoch. Gentoo doesn't have it any harder than anyone else, except when trying to rebuild while the transition is happening.
I know nobody wants two APIs with one deprecated, but this is an ancient design decision we have to live with, and this is how we live with it.
Ah, the joys of requiring non-standard library calls for apps to function.
The problem is that this approach breaks the C standard library API, which is one of the few things that are actually pretty universal and expected to work on any platform. You don't want to force app developers to support your snowflake OS that doesn't support C.
The current way forward accepted by every other distro is to just recompile everything against the new 64-bit libraries. Unless the compiled software makes weird hardcoded assumptions about sizes of structs (hand-coded assembly might be one somewhat legitimate reason for that, but other distros have been migrating to 64-bit time_t for long enough that this should have been caught already), this fixes the problem entirely for software that can be recompiled.
That leaves just the proprietary software, for which you can either have a separate library path with 32-bit time_t dependencies, or use containers to effectively do the same.
Sneaky edit: why not add new 64-bit APIs to C? Because the C standard never said anything about how to represent time_t. If the chosen implementation is insufficient, it's purely on the platform to fix it. The C17 standard:
"The range and precision of times representable in clock_t and time_t are implementation-defined."
Your argument is to have two subtly incompatible ABIs and let binaries magically break one day.
You're right that it breaks the C stdlib, but that's literally the point: libc is broken by design here, and this is the fix.
No program with time32_t will ever work after 2038, so anything compiled that way is broken from the moment of compilation.
You're right that the length isn't specified, though; the issue is that silently changing types for existing triplets has unfortunate side effects.
If you really want to be clever, mangle the symbols for the time-handling functions so they encode time64 where appropriate, but doing it silently is begging for trouble.
One thing people reading this should remember is that you cannot guarantee all packages on a Gentoo system will be updated simultaneously. It just can't be done. Because several of the arches affected by this are old, slow, and less-used (32-bit PowerPC, anyone?), it's also impossible to test all combinations of USE flags for all arches in advance, so sooner or later someone will have something break in mid-compile. For this change, that could result in an unbootable system, or a badly broken one that can't continue the upgrade because, for example, Python is broken and so portage can't run.
The situation really is much more complicated than it would be on a binary distro whose package updates are atomic. Not intractable, but complicated.
That being said, even a completely borked update would not make the system unrecoverable—you boot from live media, copy a known-good toolchain from the install media for that architecture over the borked install, chroot in, and try again (possibly with USE flag tweaks) until you can get at least emerge --emptytree system or similar to run to completion. It's a major, major pain in the ass, though, and I can understand why the developers want to reduce the number of systems that have to be handled in that way to as few as possible.
I'm not familiar with the specific install/upgrade process on Gentoo so maybe I'm missing something, but what's wrong with forcing new installations to use time64 and then forcing existing installs to do some kind of offline migration from a live disk a decade or so down the line? I feel like it's probably somewhat uncommon for an installation of any distro to be used continuously for that amount of time (at least in a desktop context), and if anyone could be expected to be able to handle a manual intervention like this, it's long-time Gentoo users.
The bonus of this would be that it wouldn't be necessary to introduce a new lib* folder - the entire system either uses time64 or it doesn't. Maybe this still wouldn't be possible though depending on how source packages are distributed; like I said, I don't really know Gentoo.
I imagine the "update from another system" path runs into trouble with more complex Gentoo installs than just the base system. A full update from the live disk would have to include lots and lots of (often exotic) tools that might be used in the build process (document generators like doxygen, lexers, testing frameworks, several build systems and make-likes, programming languages...), in addition to being able to build against already-updated packages while not accidentally building against packages that aren't updated yet.
Or you go the simpler way and only do a base update from the live system: update just the base build system and package management of the Gentoo install, then boot into a "broken" system in which only the basics work and rebuild it from there.
To me, both those options sound less desirable than what is suggested in the blog.