
Stubsack: weekly thread for sneers not worth an entire post, week ending 16th February 2025

awful.systems/post/3436005

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Taking over for Gerard this time. Special thanks to him for starting this.)

226 comments
  • Saltman has a new blogpost out he calls 'Three Observations' that I feel too tired to sneer at properly, but I'm sure it will be featured in pivot-to-ai pretty soon.

    Of note: he seems to admit chatbot abilities have plateaued for the current technological paradigm, by way of offering the "observation" that model intelligence is logarithmically dependent on the resources used to train and run it (i = log(r)), so it's officially diminishing returns from now on (toy sketch of what that implies at the end of this comment).

    Second observation is that when a thing gets cheaper it's used more, i.e. they'll be pushing even harder to shove it into everything.

    Third observation is that

    The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

    which is hilarious.

    The rest of the blogpost appears to mostly be fanfiction about the efficiency of their agents that I didn't read too closely.
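
    (Toy sketch, since the "observation" is easier to laugh at with numbers attached: this is my own illustration, not anything from the blogpost, with made-up dollar figures. Take i = log(r) at face value and every extra order of magnitude of spend buys the same fixed bump.)

    ```python
    import math

    # Take the claimed scaling "law" i = log(r) at face value and see what
    # each additional 10x of resources buys. The dollar figures are made up;
    # the units of "intelligence" are undefined, which is rather the point.
    for spend in [1e6, 1e7, 1e8, 1e9, 1e10]:
        i = math.log10(spend)
        print(f"resources ${spend:>14,.0f} -> 'intelligence' {i:.1f}")

    # Prints 6.0, 7.0, 8.0, 9.0, 10.0: every extra order of magnitude of
    # money buys the same +1, i.e. textbook diminishing returns.
    ```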

    • My ability to guess the solution of Boolean SAT problems also scales roughly with the log of the number of tries you give me.

    • christ this is dumb as shit

    • It probably deserves its own post on techtakes, but let’s do a little here.

      People are tool-builders with an inherent drive to understand and create

      Diogenes’s corpse turns

      which leads to the world getting better for all of us.

      Of course Saltman means “all of my buddies” as he doesn’t consider 99% of the human population as human.

      Each new generation builds upon the discoveries of the generations before to create even more capable tools—electricity, the transistor, the computer, the internet, and soon AGI.

      Ugh. Amongst many things wrong here, people didn’t jerk each other off to scifi/spec fic fantasies about the other inventions.

      In some sense, AGI is just another tool in this ever-taller scaffolding of human progress we are building together. In another sense, it is the beginning of something for which it’s hard not to say “this time it’s different”; the economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential.

      AGI IS NOT EVEN FUCKING REAL YOU SHIT. YOU CAN’T CURE FUCK WITH DREAMS

      We continue to see rapid progress with AI development.

      I must be blind.

      1. The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude.

      “Intelligence” in no way has been quantified here, so this is a meaningless observation. “Data” is finite, which negates the idea of “continuous” gains. “Predictable” is a meaningless qualifier. This makes no fucking sense!

      2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.

      “Moore’s law” didn’t change shit! It was a fucking observation! Anyone who misuses “Moore’s law” oughta be mangione’d. Also, if this is true, just show a graph or something? Don’t just literally cherrypick one window?

      3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

      “Linearly increasing intelligence” is meaningless as intelligence has not been… wait, I’m repeating myself. Also, “super-exponential” only to the “socio” that Ol’ Salty cares about, which I have mentioned earlier.

      If these three observations continue to hold true, the impacts on society will be significant.

      Oh hm but none of them are true. What now???

      Stopping here for now, I can only take so much garbage in at once.

      1. My big robot is really expensive to build.
      2. If big robot parts become cheaper, I will declare that the big robot must be bigger, lest somebody poorer than me also build a big robot.
      3. My robot must be made or else I won't be able to show off the biggest, most expensive big robot.

      QED, I deserve more money to build the big robot.

      P.S. And for the naysayers, just remember that that robot will be so big that your critiques won't apply to it, as it is too big.

    • Second observation is that when a thing gets cheaper it's used more, i.e. they'll be pushing even harder to shove it into everything.

      Are they trying to imply that they will make it cheaper by shoving it everywhere? I honestly can't see how that logic holds together.

      • as I read it, it's an attempted reference to economies of scale under the thesis "AI silicon will keep getting cheaper because more and more people will produce it" as the main underpinning for how to improve their unit economics. which, y'know, great! that's exactly what people like to hear about manufacturing and such! lovely! it's only expensive because it's the start! oh, the woe of the inventor, the hard and expensive path of the start!

        except that doesn't hold up in any reasonable manner.

        they're not using J Random GPU, they're using top-end purpose-focused shit that's come into existence literally as co-evolution feedback from the fucking industry that is using it. even in some hypothetical path where we do just suddenly have a glut of cheap model-training silicon everywhere, imo it's far far far more likely to be an esp32 situation than a "yeah this gtx17900 cost me like 20 bucks" situation. even the "consumer high end" of "sure your phone has a gpu in it" is still very suboptimal for doing the kind of shit they're doing (even if you could probably make a great cursed project out of a cluster of phones doing model training or whatever)

        falls into the same vein of shit as "a few thousand days" imo - something that's a great soundbite, easily digestible market speak, but if you actually look at the substance it's comprehensive nonsense

      • The surface claim seems to be the opposite: he says that because of Moore's law AI rates will soon be at least 10x cheaper, and because of Mercury in retrograde this will cause usage to increase muchly. I read that as meaning we should expect to see chatbots pushed into even more places they shouldn't be, even though their capabilities have already stagnated as per observation one (quick arithmetic on those rates below).

        2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
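
        (Back-of-the-envelope arithmetic on those rates, mine rather than anything in the post: if you call the "early 2023 to mid-2024" window roughly 18 months and annualise everything, the one cherry-picked GPT-4 -> GPT-4o drop comes out nearly triple the headline "10x every 12 months", which is about what you'd expect from picking the steepest window instead of showing a graph.)

        ```python
        # Annualise the rates being thrown around so they're comparable.
        # The 150x figure is the cherry-picked GPT-4 (early 2023) ->
        # GPT-4o (mid-2024) token-price drop; I'm assuming an ~18-month window.
        moore_per_year = 2 ** (12 / 18)            # ~1.59x per year
        claimed_per_year = 10                      # "10x every 12 months"
        cherry_picked_per_year = 150 ** (12 / 18)  # ~28x per year over that one window

        print(f"Moore's law:          ~{moore_per_year:.2f}x per year")
        print(f"Headline claim:       ~{claimed_per_year}x per year")
        print(f"Cherry-picked window: ~{cherry_picked_per_year:.0f}x per year")
        ```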
  • An entertaining bit of pushback against the various bathroom bills being pushed at the moment. Bonus points for linking it with ai training. I feel like this is an idea that’s very adaptable…

    https://mefi.social/@MissConstrue/113983951020093710

    Signs which have been adhered to bathroom stall interiors at the Dallas Fort Worth airport.

    SECURITY NOTICE Electronic Genital Verification (EGV) Your genitalia may be photographed electronically during your use of this facility as part of the Electronic Genital Verification (EGV) pilot program at the direction of the Office of the Lieutenant Governor. In the future, EGV will help keep Texans safe while protecting your privacy by screening for potentially improper restroom access using machine vision and Artificial Intelligence (AI) in lieu of traditional genital inspections. At this time, images collected will be used solely for model training purposes and will not be used for law enforcement or shared with other entities except as pursuant to a subpoena, court order or as otherwise compelled by legal process. Your participation in this program is voluntary. You have the right to request removal of your data by calling the EGV program office at (512) 463-0001 during normal operating hours (Mon-Fri 8AM-5PM). DFW DALLAS FORT WORTH INTERNATIONAL AIRPORT

    The contact number appears to be for Dan Patrick, the lt. governor of Texas.

  • In a hilarious turn of events that no one could have foreseen, Anthropic is having problems with people sending llm generated job applications, and is asking potential candidates to please not use ai.

    While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.

    https://www.404media.co/anthropic-claude-job-application-ai-assistants/

  • US government tech hellscape roundup part the third (ugh):

    1. Elon Musk jokes(?) that the government doesn't use SQL ??? (source, note that his tweet has an ableist slur). I don't even know what to think about this. Is it supposed to be funny or something? Does he actually believe it?
    2. Article: Elon Musk’s A.I.-Fuelled War on Human Agency -- People here probably already knew all this; but one of the ways the admin thinks they can fire everyone is by replacing people with AI / automating everything. Some of the social media responses from federal workers are pretty great:

      Really excited to see AI put on some waders and unclog a beaver dam from a water structure for me.

      If I've learned anything from all this it's about how unfathomably based cool a lot of federal workers are.
    3. The less fascist / cowed parts of the infosec industry are currently raising the alarm about how insecure this all is. A representative social media post from Gossi The Dog

      I definitely recommend posting about what is happening in the US on LinkedIn as you will quickly learn many of the largest security vendors are staffed by people who have no interest in protecting people, while posting with their employers names.

    4. Some federal workers have been fired via emails calling them [EmployeeFirstName].

    Edit:

    5. The US State Department plans to buy $400m worth of armored Cybertrucks from Elon Musk (nytimes) (Edit: may have been ordered under Biden's administration)
    6. dogegov has been updated. Mostly just with more useless baby's first website materials; but they promise a "comprehensive, government-wide org chart" and are hiring "software engineers, InfoSec engineers, and other technology professionals". Aside: I already found two, make that three, minor website bugs despite not really looking for them and the website being tiny. But that can't be right... they're IT professionals while I'm DEI.
    7. Find-and-replace is so hard :( and that's why the government writes about "gay and rights" to avoid saying the... the... the forbidden t-word of which I dare not speak
    8. So about how I said dogegov gives baby's first website vibes; its database was left world writable lol
    9. dogegov shares classified information
    10. Classic Musk "humor": a "tech support" T-shirt to allude to all of this. The dude really likes custom T-shirts (which, to be fair, can be awesome when they're less bad)
    • Is it supposed to be funny or something? Does he actually believe it?

      Or: does he even know what it is?

    • Nr 7 would be amusing if the context were not so evil. It's also weird how they allow gay but not transgender (oh no I said the word!), but I guess the question is "for how long"...

      Nr 8 is just... wow. Very surprising that they don't care at all and/or are super incompetent. I wonder how much AI was involved in creating that site.

      • It is quite the feeling seeing the federal government do their best to erase and discriminate against me and other trans people so openly and flagrantly and suddenly. Can't really put it into words easily. I now feel like a stranger in my own country.

        re: incompetence, look at #9 that I just added :D (I guess I should cut it off there and start collecting stuff for a new comment next week)
