I fucking hateeee DNS domain name landlordism

DNS is the most neoliberal shit system that too many have just accepted as how computers work and have always worked, to the point where I have heard actual landlord arguments deployed to defend it

It's administered by ICANN, which is like the ideal neoliberal email factory NGO that justifies the de facto monopoly of a tiny few companies with responsible internet government-stewardship stakeholderism, etc type bureaucracy while upholding the right of domain landlords to charge hundreds or even thousands of dollars in rent for like 37 bytes on a server somewhere lol

Before this it was administered by the US military-industrial complex, you can thank Bill Clinton and the US Chamber of Commerce for this version of it along with Binky Moon for giving us cheap .shit TLDs for 3 dollars for the first year

Never forget the architects of the internet were some of the vilest US MIC and Silicon Valley ghouls who ever lived, and they are still fundamentally in control no matter how much ICANN and IANA claim to be non-partisan, neutral, non-political, accountable, democratic, international, stewardshipismists

"Nooooo we're running out of IPv4 addresses and we still can't get everyone to use the vastly better IPv6 cuz uhhh personal network responsibility. Whattttt?????? You want to take the US Department of Defense's multiple /8 blocks? That's uhhhh not possible for reasons :|" Internet is simultaneously a free-market hellscape where everyone with an ASN is free to administer it however they want while at the same time everyone is forced into contracts with massive (usually US-based) transit providers who actually run all the cables and stuff. Ohhh you wanna run traffic across MYYYYYYY NETWORK DOMAINNNNNNN????? That'll be...... 1 cent per packet please, money please now money now please money now money please now now nwoN OWOW

  • I agree about static linking but...... 100mb of code is absolutely massive, do Rust binaries actually get that large?? Idk how you do that even, must be wild amounts of automatically generated object oriented shit lol

    Because portability has only been practical for the majority of applications since 2005ish.

    Also wdym by this? Ppl have been writing portable programs for Unix since before we even had POSIX

    Also Plan 9 did without dynamic linking in the 90s. They actually found their approach was smaller in a lot of cases over having dynamic libraries around: https://groups.google.com/g/comp.os.plan9/c/0H3pPRIgw58/m/J3NhLtgRRsYJ

    • I agree about static linking but...... 100mb of code is absolutely massive, do Rust binaries actually get that large?? Idk how you do that even, must be wild amounts of automatically generated object oriented shit lol

      My brother in Christ if you have to put every lib in the stack into a GUI executable you're gonna have 100mb of libs regardless of what system you're using.

      Also Plan 9 did without dynamic linking in the 90s. They actually found their approach was smaller in a lot of cases over having dynamic libraries around: https://groups.google.com/g/comp.os.plan9/c/0H3pPRIgw58/m/J3NhLtgRRsYJ

      Plan 9 was a centrally managed system without the speed of development of a modern OS. Yes they did it better because it was less complex to manage. Plan 9 doesn't have to cope with the fact that the FlatPak for your app needs lib features that don't come with your distro.

      Also wdym by this? Ppl have been writing portable programs for Unix since before we even had POSIX

      It was literally not practical to have every app be portable because of space constraints.

      • My brother in Christ if you have to put every lib in the stack into a GUI executable you're gonna have 100mb of libs regardless of what system you're using.

        You just link against the symbols you use though :/ Lemme go statically link some GTK thing I have lying around and see what the binary size is cuz the entire GTK/GLib/GNOME thing is one of the worst examples of massive overcomplication on modern Unix lol
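
        A minimal sketch of the "link against the symbols you use" point (made-up file names, not the actual GTK test): a static archive only contributes the members your program actually references, which you can check with nm

        /* used.c -- one archive member */
        int used_helper(int x) { return x * 2; }

        /* unused.c -- another member the app never calls */
        int unused_helper(int x) { return x * 3; }

        /* app.c -- only references used_helper */
        int used_helper(int x);
        int main(void) { return used_helper(21); }

        /*
         * cc -c used.c unused.c
         * ar rcs libdemo.a used.o unused.o
         * cc -o app app.c libdemo.a
         * nm app | grep helper   -> only used_helper ends up in the binary,
         * because the linker only pulls in archive members that resolve a
         * referenced symbol (the granularity is the object file, so trimming
         * inside one object needs -ffunction-sections plus --gc-sections)
         */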

        There are also Linux distros around that don't have a dynamic linker but I couldn't find any stats when I did a quick search

        Also I'm not a brother :|

        Plan 9 was a centrally managed system without the speed of development of a modern OS. Yes they did it better because it was less complex to manage. Plan 9 doesn't have to cope with the fact that the FlatPak for your app needs lib features that don't come with your distro.

        It was less complex cuz they made it that way though, we can too. FlatPaks are like the worst example too cuz they're like dynamically linked things that bring along all the libraries they need to use anyway (unless they started keeping track of those?) so you get the worst of both static and dynamic linking. I just don't use them lol

        It was literally not practical to have every app be portable because of space constraints.

        You mean portable like being able to copy binaries between systems? Cuz back in the 90s you would usually just build whatever it was from source if it wasn't in your OS or buy a CD or smth from a vendor for your specific setup. Portable to me just means like that programs can be built from source and run on other operating systems and aren't too closely attached to wherever they were first created. Being able to copy binaries between systems isn't something worth pursuing imo (breaking userspace is actually cool and good :3, that stable ABI shit has meant Linux keeps around so much ancient legacy code or gets stuck with badddd APIs for the rest of time or until someone writes some awful emulation layer lol)

        • You just link against the symbols you use though :/ Lemme go statically link some GTK thing I have lying around and see what the binary size is cuz the entire GTK/GLib/GNOME thing is one of the worst examples of massive overcomplication on modern Unix lol

          If you link against symbols you are not creating something portable. In order for it to be portable the lib cannot ever change symbols. That's a constraint you can practically only work with if you have low code movement and you control the whole system. (see below for another way but it's more complex rather than less complex).

          Also I'm not a brother :|

          My bad. I apologize. I am being inconsiderate in my haste to reply.

          It was less complex cuz they made it that way though, we can too. FlatPaks are like the worst example too cuz they're like dynamically linked things that bring along all the libraries they need to use anyway (unless they started keeping track of those?) so you get the worst of both static and dynamic linking. I just don't use them lol

          But there's no other realistic way.

          You mean portable like being able to copy binaries between systems? Cuz back in the 90s you would usually just build whatever it was from source if it wasn't in your OS or buy a CD or smth from a vendor for your specific setup. Portable to me just means like that programs can be built from source and run on other operating systems and aren't too closely attached to wherever they were first created. Being able to copy binaries between systems isn't something worth pursuing imo (breaking userspace is actually cool and good :3, that stable ABI shit has meant Linux keeps around so much ancient legacy code or gets stuck with badddd APIs for the rest of time or until someone writes some awful emulation layer lol)

          That's a completely different usage of "portable" and is basically a non-problem in the modern era as long as you are within the same-ish compatibility time frame (and see my response to the symbols point).

          It's entirely impossible to do this over a distributed ecosystem over the long term. You need symbol migrations so that if I compile code from 1995 it can upgrade to the correct representation in modern symbols. I've built such dependency management systems for making evergreen data in DSLs. Mistakes, deprecation, and essentially everything you have ever written has to be permanent, it's not a simple way to program. It can only be realized in tightly and directly controlled environments like Plan 9 or if you're the architect of an org.

          Dependency management is an organization problem that is complex, temporal, and intricate. You cannot "technology" your way out of the need to manage the essential complexity here.

          • If you link against symbols you are not creating something portable. In order for it to be portable the lib cannot ever change symbols. That's a constraint you can practically only work with if you have low code movement and you control the whole system. (see below for another way but it's more complex rather than less complex).

            I'm not entirely sure what you mean tbh. Like if something changes in a library you linked against? I guess you would have to rebuild it but you would have to rebuild a shared library too and place it into the system. Actually, you don't necessarily have to rebuild anything, you can actually just relink it if you still have object files around (like OpenBSD does this to relink the kernel into a random order on every boot), just swap in a different object file for what you changed
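
            A tiny sketch of that relink-from-objects idea (made-up file names, and not how OpenBSD actually lays out its kernel objects):

            /* parser.c -- the only piece that changed */
            int parse_thing(const char *s) { return s && s[0] == 'x'; }

            /* main.c -- untouched, its object file gets reused as-is */
            int parse_thing(const char *s);
            int main(int argc, char **argv) { return argc > 1 ? parse_thing(argv[1]) : 0; }

            /*
             * First build, keeping the objects around:
             *   cc -c main.c parser.c
             *   cc -o app main.o parser.o
             *
             * parser.c changes later: recompile just that one object and
             * relink straight from what's on disk, no shared library involved
             *   cc -c parser.c
             *   cc -o app main.o parser.o
             */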

            My bad. I apologize. I am being inconsiderate in my haste to reply.

            It's okay :3

            But there's no other realistic way.

            This is just my experience ofc but I've never used Flatpaks or Snaps anywhere tbh, I just get binaries from my distribution or build them myself if I need something unusual. The issue with that is that it's not as easy as it should be, I legit should just be able to do "make" and have it work but ofc I have to fix stuff all the time. Plan 9 is a carefully tuned system ofc and I obviously have the Plan 9 brainworms but like..... it's never been a problem cuz the entire operating system builds in like... 7 minutes on a Core 2 Duo, not joking lol. And it was IO-bottlenecked during that on an SSD even! If you have fast compilers it's not so bad and you only ever need to build the whole system on an update (and mk, the build tool, will ofc not rebuild things that don't need rebuilding)

            Tbh.... I would be in favor of just having an interpreted or JIT-compiled language everywhere too (the line between static and dynamic linking gets blurrier but also simpler anyway here hehe). There are many different ways to approach this problem. Idk it's just easy to write stuff off like that as "not realistic", especially if you're an expert in a highly technical field who has done it one way for a long time, but it is realistic cuz it's been done even. We should do it cuz our methods and knowledge improving is good

            It's entirely impossible to do this over a distributed ecosystem over the long term. You need symbol migrations so that if I compile code from 1995 it can upgrade to the correct representation in modern symbols. I've built such dependency management systems for making evergreen data in DSLs. Mistakes, deprecation, and essentially everything you have ever written has to be permanent, it's not a simple way to program. It can only be realized in tightly and directly controlled environments like Plan 9 or if you're the architect of an org.

            I've never written any programs that were subject to such strict verification tbh. I had to look up what "DSL" means lol, Wikipedia says "definitive software library". I rly think it's not such a problem most of the time, code changes all the time and people update it, as they should imo, cuz it's impossible outside of formal verification (which is cool and good) to write perfect bug-free software. And that formal verification can only get you as far as verifying there are no bugs but it can't force you to write good systems or specifications and can't help you if there are things like cosmic rays striking your processor ofc hehe

            I'm not sure what kind of software you have experience with, like if it needs to not make planes fall out of skies or ppl's insulin pumps not shut off (you would def know more than me about writing that kind of software) but I think there are many ways to address software reliability regardless of how you link or how you distribute software. Make hashed symbols idk hehe, relink them all you like but they all have a hash in the "definitive" software library maybe. Personally, I love formal methods for stuff like this

            Dependency management is an organization problem that is complex, temporal, and intricate. You cannot "technology" your way out of the need to manage the essential complexity here.

            I agreee, this isn't just a technological problem to me but also a social one. Like ideally I would love to see way more money or resources for computer systems research and state-sponsored computer systems. Tbh I feel like most of the reason ppl focus so much on unchanging software, ABIs, APIs, instruction sets, operating systems, etc is cuz capitalists use them to make products and them never changing and just being updated forever is labor reducing lol. When software is designed badly or the world has changed and software no longer suits the world we live in (many such cases), we (the community of computer-touchers lol) should be able to change it. Ofc there will be a transition process for anything and this is quite vague but yeh

            Am rly tired, may respond later if you reply

            • I'm not entirely sure what you mean tbh. Like if something changes in a library you linked against? I guess you would have to rebuild it but you would have to rebuild a shared library too and place it into the system. Actually, you don't necessarily have to rebuild anything, you can actually just relink it if you still have object files around (like OpenBSD does this to relink the kernel into a random order on every boot), just swap in a different object file for what you changed

              Okay, let's say I am writing MyReallyCoolLibrary V1. I have a myReallyCoolFunction(). You want to use myReallyCoolFunction in your code. Regardless of whether your system works on API or ABI symbols, a symbol is a universal address for a specific piece of functionality. So when my library is compiled it produces an S_myReallyCoolFunction, and when your app is compiled it produces a call to S_myReallyCoolFunction, and this symbol needs to be resolved from somewhere.

              So static linking is when you compile the app with S_myReallyCoolFunction inside of it, so when it hits the call to S_myReallyCoolFunction it finds S_myReallyCoolFunction in the app's own data. Dynamic linking is when it finds S_myReallyCoolFunction in a library that's a separate file on your machine. Plan 9 uses static linking.
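
              To make that concrete (file and function names made up for illustration):

              /* my_really_cool_library.c */
              int myReallyCoolFunction(int x) { return x + 1; }

              /* app.c */
              int myReallyCoolFunction(int x);
              int main(void) { return myReallyCoolFunction(41); }

              /*
               * Static: the definition is copied into the app binary.
               *   cc -c my_really_cool_library.c
               *   ar rcs libcool.a my_really_cool_library.o
               *   cc -o app_static app.c libcool.a
               *   nm app_static | grep myReallyCoolFunction     -> defined ("T") inside the app itself
               *
               * Dynamic: the app only records an undefined symbol, and the
               * runtime linker resolves it from the .so file at startup.
               *   cc -shared -fPIC -o libcool.so my_really_cool_library.c
               *   cc -o app_dynamic app.c -L. -lcool
               *   nm -D app_dynamic | grep myReallyCoolFunction -> undefined ("U"), resolved at run time
               */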

              So let's talk about what this means for "code portability". Let's say I've shipped MyReallyCoolLibrary V1 and have to change a few things for V2; here are the alternate universes that can happen:

              • I don't change myReallyCoolFunction
              • I change myReallyCoolFunction but I do not change its behavior, I simply refactor the code to be more readable.
              • I change myReallyCoolFunction and I change its behavior.
              • I change myReallyCoolFunction and change its interface.
              • I remove myReallyCoolFunction.

              So let's compute what each of these should mean for encoding a symbol:

              • myReallyCoolFunction from V2 can stay declared as S_myReallyCoolFunction
              • myReallyCoolFunction from V2 can stay declared as S_myReallyCoolFunction
              • myReallyCoolFunction from V2 has to be declared as S_myReallyCoolFunctionNew
              • myReallyCoolFunction from V2 has to be declared as S_myReallyCoolFunctionNew
              • I technically no longer have an S_myReallyCoolFunction

              Now these are the practical consequences for your code:

              • none, everything stays the same and code written to V1 can use V2.
              • none, everything stays the same and code written to V1 can use V2.
              • app refactor - everything written for V1 has to change to use V2. The app may no longer be able to work with V2.
              • app refactor - everything written for V1 has to change to use V2. The app may no longer be able to work with V2.
              • app refactor - everything written for V1 has to change to use V2. The app may no longer be able to work with V2.

              So now, to make code truly portable, I must remove the app refactor pieces. I have 2 ways of doing that:

              1. Version resolution from inside the system by managing lib paths most likely.
              2. V2 must include all symbols from V1

              With #1 you have the problem everyone complains about today.

              With #2 you essentially carry forward all work ever done. Every mistake, every refactor, every public API that's ever been written, and all of its behaviors, must be frozen in amber and reshipped in the next version.
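
              For reference, #1 is roughly what SONAME versioning does today (libcool.so.1 and libcool.so.2 installed side by side), and #2 is roughly what glibc does with GNU symbol versioning. A rough sketch of the #2 style, assuming GCC/binutils and a linker version script (all names made up):

              /* my_really_cool_library.c -- V2 of the .so ships both behaviors */
              int myReallyCoolFunction_v1(int x) { return x + 1; }  /* old behavior, kept forever */
              int myReallyCoolFunction_v2(int x) { return x + 2; }  /* new behavior */

              /* GNU-specific: expose both under one name at different versions.
               * Apps linked against V1 keep resolving myReallyCoolFunction@V1;
               * newly linked apps get the default myReallyCoolFunction@@V2. */
              __asm__(".symver myReallyCoolFunction_v1,myReallyCoolFunction@V1");
              __asm__(".symver myReallyCoolFunction_v2,myReallyCoolFunction@@V2");

              /*
               * cool.map:
               *   V1 { };
               *   V2 { } V1;
               *
               * cc -shared -fPIC -o libcool.so my_really_cool_library.c -Wl,--version-script=cool.map
               */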

              There is no magic here; it's a simple but difficult-to-manage diagram.

              Plan 9 is a carefully tuned system ofc and I obviously have the Plan 9 brainworms but like.....

              I agree that Plan 9 is really cool, but in practice Linux is the height of actively developed OS complexity that our society is able to build right now. Windows in comparison is ossifying, and OSX is much simpler.

              I've never written any programs that were subject to such strict verification tbh. I had to look up what "DSL" means lol, Wikipedia says "definitive software library".

              DSL in this case means Domain Specific Language

              I rly think it's not such a problem most of the time, code changes all the time and people update it, as they should imo,

              But here's the problem with this statement: it unravels your definition of "code portability". The whole point of "code portability" is that I don't have to update my code. So I'm kind of confused about what we're arguing about: if it's not Flatpak-style portability and it's not code portability, what are we specifically talking about?

              And that formal verification can only get you as far as verifying there are no bugs but it can't force you to write good systems or specifications and can't help you if there are things like cosmic rays striking your processor ofc hehe

              The formal verification can only reify the fact that you need something called Foo and I can provide it. The more formal it is the more accurate we can make the description of what Foo is and the more accurately I can provide something that matches that. But I can't make it so that your Foo is actually a Bar because you meant a Bar but you didn't know you needed a Bar. We can match shapes to holes but we cannot imbue the shapes or the holes with meaning and match on that. We can only match geometrically, that is to say (discrete) mathematically.
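
              A toy version of the shape-vs-hole point (entirely made up, not from any real verifier): both functions below fit the same "shape", and nothing at the type level can tell the checker which meaning you wanted.

              #include <stdio.h>
              #include <stddef.h>

              /* The "hole": any function of this shape is accepted. */
              typedef double summary_fn(const double *xs, size_t n);

              /* Two fillers with identical shapes but different meanings. */
              double mean_of(const double *xs, size_t n) {
                  double s = 0;
                  for (size_t i = 0; i < n; i++) s += xs[i];
                  return n ? s / n : 0;
              }
              double max_of(const double *xs, size_t n) {
                  double m = xs[0];
                  for (size_t i = 1; i < n; i++) { if (xs[i] > m) m = xs[i]; }
                  return m;
              }

              int main(void) {
                  double xs[] = {1, 2, 6};
                  summary_fn *foo = max_of;   /* type-checks fine even if you actually meant mean_of */
                  printf("%g\n", foo(xs, 3));
                  return 0;
              }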

              I agreee, this isn't just a technological problem to me but also a social one. Like ideally I would love to see way more money or resources for computer systems research and state-sponsored computer systems. Tbh I feel like most of the reason ppl focus so much on unchanging software, ABIs, APIs, instruction sets, operating systems, etc is cuz capitalists use them to make products and them never changing and just being updated forever is labor reducing lol. When software is designed badly or the world has changed and software no longer suits the world we live in (many such cases), we (the community of computer-touchers lol) should be able to change it. Ofc there will be a transition process for anything and this is quite vague but yeh

              I generally agree with this sentiment but I think the capitalist thing defeating better computing standards, tooling, and functionality is the commodity form. The commodity form and its practical uses don't care about our nerd shit. The commodity form barely cares to fulfill the functional need its reified form (e.g. an apple) provides. That is to say, the commodity form doesn't care if you make Shitty Apples or Good Apples as long as you can sell Apples. That applies to software, and as software grows more complex, capitalism tends to produce shitty software simply because the purpose of the commodity form is to facilitate trade, not to be correct, reliable, or of any quality.
