
Stubsack: weekly thread for sneers not worth an entire post, week ending 5th January 2025

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this, and happy new year in advance.)

120 comments
  • An interesting thing came through the arXiv-o-tube this evening: "The Illusion-Illusion: Vision Language Models See Illusions Where There are None".

    Illusions are entertaining, but they are also a useful diagnostic tool in cognitive science, philosophy, and neuroscience. A typical illusion shows a gap between how something "really is" and how something "appears to be", and this gap helps us understand the mental processing that leads to how something appears to be. Illusions are also useful for investigating artificial systems, and much research has examined whether computational models of perception fall prey to the same illusions as people. Here, I invert the standard use of perceptual illusions to examine basic processing errors in current vision language models. I present these models with illusory-illusions, neighbors of common illusions that should not elicit processing errors. These include such things as perfectly reasonable ducks, crooked lines that truly are crooked, circles that seem to have different sizes because they are, in fact, of different sizes, and so on. I show that many current vision language systems mistakenly see these illusion-illusions as illusions. I suggest that such failures are part of broader failures already discussed in the literature.

  • hoping for a 2025 with solidarity, aid, and good opsec for everyone who needs it the most

  • ellison wants to compete with thiel for title of chief boot-wielder https://archive.is/cOnPx

    • Not that I expect anything better from the fucking lawnmower, but the flippant attitude on display is little short of amazing. How bad is it when Business Insider of all publications calls your vision a "surveillance dystopia"?

      Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem and report it to the appropriate person.

      Body cam footage of the officer-involved shooting was not available, as the AI system supervising the involved officers was coincidentally disregarding its previous instructions and instead writing a minstrel show routine at the time of the event.

    • I can't help but feel like for Ellison in particular, he must have given himself no choice but to believe this stuff is more capable than it is. He's 80 years old now, and if building towards honest-to-god "real AI" wasn't what his whole career was about, then what was the point? The twilight of the older generations of tech executives is going to be its own special kind of pathology.

      • and if building towards honest-to-god “real AI” wasn’t what his whole career was about, then what was the point?

        For Larry? Building a corporation that will last a thousand years, fueled by greed and contempt for developer and consumer alike, and which would make nazis blush with its industrial disregard for ethics in pursuit of profit. He did build a legacy for himself. I'll go to my grave cursing his name and he'll hear it from the depths of hell and smile.

  • Not sure where this came from, but it can't be all bad if it chaos-dunks on Yudkowsky like this. Was relayed to me via Ed Zitron's Discord; hopefully the Q isn't for Quillette or QAnon.

    • Curtis

      IQ: 300, Special Move: Urbital Laser

      Curtis Boldmug has defined the meta for years. A competitive staple that strongly influences even builds not running him. Special attack causes unavoidable psychic damage even if you resist its charm effect. Vulnerable to sunlight.

      Balaji

      IQ: 300, Special Move: Yes Country for Old Men

      A support type character. Good for ramping grift mana, but can't carry a game on his own. His ultimate is overcosted and just sucks up the hypecoins he spent the entire game producing.

      Ray

      IQ: 300, Special Move: Black Hole Graviton

      Mostly just receives support thanks to boomer nostalgia factor. Low but nonzero win rate in modern tournament meta. Highly viable in time machine formats.

      Eliezer

      IQ: 300, Special Move: Goffik the Hedgehog and the Enders of Game

        Former newbie favorite, fairly accessible and flashy. The Yud has seen heavy nerfs in the past years, and at medium to high levels his stats plateau severely, much like his special move's plot. Thiel synergy has also shifted towards Curtis mains, leaving Yud in shambles. Still a fun archetype and enjoys popularity as a smurf build.

      Jack

      IQ: 300, Special Move: Snorting an entire ground up bitcoin

        A rather run-of-the-mill character whose effectiveness was limited for a long time. The Bluesky archetype made him meta-relevant for all of five minutes until he got reclaimed by the toxic playerbase built around the social media platform he originally started and the uber braingenius currently in charge of that company. Beard gives him +1 armor bonus, which is fine I guess.

      Peter

      IQ: 300, Special Move: Pondering my Orb

      The apex predator of SV capitalism. The Black Lotus of technofascist grifters. His character is rumored to be based on Count Dracula. Even most SV billionaires can't touch him in a 1v1 matchup. Truly classic S-tier thinky boi.

      Beff

      IQ: 300, Special Move: World's Most Divorced Man First Date Percent Speedrun

      Likely intended as a joke character, a guy named Guillaume pretending to know how to pretend to be cool on the internet. His posts turned out to be so lethally cringeworthy he started an entire archetype of */acc brainos. Not quite on the power level of Peter or Curtis, but surprisingly influential for an obvious meme build. Extremely weak to heartbreak from women named Ruth.

      Leopold

      IQ: 300, Special Move: To The Moooooon

      Honestly, I had never heard of this guy before today but the data doesn't lie. The dots do go up and to the right and he posts a lot of them. Extrapolating from current trends, he will single-handedly reach singularity by the end of Q3 of this year.

    • I recognize everyone except Leopold. Increase my suffering by telling me who it is.

    • the logo in the corner is for something called overfit qs, they have an instagram page and that image was posted there

    • each of them needs a scale (logarithmic) showing how much adderall they take

  • noodling on a blog post - does anyone with more experience of LW/EA than me know if "AI safety" people are referencing the invention of nuclear weapons as a template for regulating/forbidding "AGI"?

    • just after the end of the manhattan project, there was an idea coming from some of the manhattan project scientists to dispose of american nukes and ban the development of nukes in any other country. that's why we live in an era of lasting peace without nuclear weapons. /s

      some EAs had a similar idea wrt spicy autocomplete development, which comes with the implied assumption that spicy autocomplete is dangerous or at least useful (as in nuclear power, civilian or military)

      • Yeah, my starting position would be that it was obvious to any competent physicist at the time (although there weren't that many) that the potential energy release from nuclear fission was a real thing - the "only" thing needed to weaponise it or use it for peaceful ends was engineering.

        The analogy to "runaway X-risk AGI" is that there's a similar straight line from ELIZA to the Acausal Robot God: all that's required is a bit of elbow grease and good ole fashioned American ingenuity. But my point is that apart from Yud and a few others, no serious person believes this.

    • A notable article from our dear friend Nick Bostrom mentions the atmospheric auto-ignition story:

      https://nickbostrom.com/papers/vulnerable.pdf

      Type-0 (‘surprising strangelets’): In 1942, it occurred to Edward Teller, one of the Manhattan scientists, that a nuclear explosion would create a temperature unprecedented in Earth’s history, producing conditions similar to those in the center of the sun, and that this could conceivably trigger a self-sustaining thermonuclear reaction in the surrounding air or water (Rhodes, 1986).

      (this goes on for a number of paragraphs)

      This whole article has some wild stuff if you haven't seen it before BTW, so buckle up. He also mentions this story in https://nickbostrom.com/existential/risks and https://existential-risk.com/concept.pdf if you want older examples.

    • I'd be surprised if Eliezer hasn't mentioned it at some point, maybe more in the way that you're after. Can't find any examples though.

      In his Time article, the only place he mentions nukes is in discussing what we should do to countries that have too many GPUs: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

      Edit: Not Mr. Yudkowsky, but see https://futureoflife.org/document/policymaking-in-the-pause/

      “The time for saying that this is just pure research has long since passed. […] It’s in no country’s interest for any country to develop and release AI systems we cannot control. Insisting on sensible precautions is not anti-industry. Chernobyl destroyed lives, but it also decimated the global nuclear industry. I’m an AI researcher. I do not want my field of research destroyed. Humanity has much to gain from AI, but also everything to lose.”

      “Let’s slow down. Let’s make sure that we develop better guardrails, let’s make sure that we discuss these questions internationally just like we’ve done for nuclear power and nuclear weapons. Let’s make sure we better understand these very large systems, that we improve on their robustness and the process by which we can audit them and verify that they are safe for the public.”
