Stubsack: weekly thread for sneers not worth an entire post, week ending 17th November 2024
  • Oh wow, Dorsey is the exact reason I didn't want to join it. Now that he jumped ship maybe I'll make an account finally

    Honestly, what could he even be doing at Twitter in its current state? Besides I guess getting that bag before it goes up or down in flames

    e: oh god it's a lot worse than just crypto people and Dorsey. Back to procrastinating

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 10th November 2024
  • I know this shouldn't be surprising, but I still cannot believe people really bounce questions off LLMs like they're talking to a real person. https://ai.stackexchange.com/questions/47183/are-llms-unlikely-to-be-useful-to-generate-any-scientific-discovery

    I have just read this paper: Ziwei Xu, Sanjay Jain, Mohan Kankanhalli, "Hallucination is Inevitable: An Innate Limitation of Large Language Models", submitted on 22 Jan 2024.

    It says there is a ground truth ideal function that gives every possible true output/fact to any given input/question, and no matter how you train your model, there is always space for misapproximations coming from missing data to formulate, and the more complex the data, the larger the space for the model to hallucinate.

    Then he immediately follows up with:

    Then I started to discuss with o1. [ . . . ] It says yes.

    Then I asked o1 [ . . . ], to which o1 says yes [ . . . ]. Then it says [ . . . ].

    Then I asked o1 [ . . . ], to which it says yes too.

    I'm not a teacher, but I feel like my brain would explode if a student asked me to answer a question they arrived at after an LLM misled them on like 10 of their previous questions. (A rough sketch of what the paper actually claims is below.)
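    For what it's worth, the paper's core claim can be sketched roughly like this (my loose paraphrase of its setup, not the authors' exact notation): treat the ground truth as a total function f that maps every question to its true answer, and an LLM as a computable function h over the same inputs; "hallucination" is then any input where h disagrees with f, and the argument is that no training procedure makes that set empty for every possible ground truth.

        % Loose formal sketch (my notation, assuming the paper's "formal world" setup):
        % f is the ground-truth function, h_1, h_2, ... a computably enumerable family of LLMs.
        \[
            H(h) \;=\; \{\, s \in \mathcal{S} \;:\; h(s) \neq f(s) \,\}
            \quad \text{(inputs on which $h$ hallucinates)}
        \]
        \[
            \text{Claim: for any computably enumerable family } \{h_i\}_{i \in \mathbb{N}},
            \ \exists\, f \ \text{such that } H(h_i) \text{ is infinite for every } i.
        \]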

  • Elon’s double nothingburger: robotaxis any year now, bro. And robots, bro. Trust us, bro.
  • I think he might have adhd.

    Oh no, I don't think we're ready for him to start mythologizing autism + ADHD.

    Watching my therapist pull up Musk facts on his phone for 40 minutes going "bro check this out you're just like him frfr" the moment he learned I was autistic was enough for me. Please god don't let musk start talking about hyperfocusing.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 13 October 2024
  • I feel like the Internet Archive is a prime target for techfashy groups. Both for the amount of culture you can destroy, and because backed up webpages often make people with an ego the size of the sun look stupid.

    Also, I can't remember, but didn't Yudkowsky or someone else pretty plainly admit to taking a bunch of money during the FTX scandal? I swear he let slip that the funds were mostly dried up. I don't think it was ever deleted, but that's the sort of thing you might want to delete and could get really angry about being backed up in the Internet Archive. I think Siskind has edited a couple of articles until all the fashy points were rounded off, and that could fall in a similar boat. Maybe not him specifically, but there's content like that which people would rather not have remembered, and the Internet Archive falling apart would be good news to them.

    Also (again), it scares me a little that their servers are on public tours. Like, it'd take one crazy person to do serious damage to it. I don't know, but I'm hoping their >100PB of storage includes backups, even if it's not 3-2-1. I'm only mildly paranoid about it lol.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 13 October 2024
  • Oh look! Human horrors beyond... regrettably within my comprehension

    https://x.com/haveibeenpwned/status/1843780415175438817

    Tweet description

    New sensitive breach: "AI girlfriend" site Muah[.]ai had 1.9M email addresses breached last month. Data included AI prompts describing desired images, many sexual in nature and many describing child exploitation. 24% were already in @haveibeenpwned. More: https://404media.co/hacked-ai-girlfriend-data-shows-prompts-describing-child-sexual-abuse-2/

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 13 October 2024
  • I don't know how materials work in Asset Forge, but they have a guide on their site for exporting models to animate with Mixamo: https://kenney.nl/knowledge-base/asset-forge/rigging-a-character-using-mixamo. You could also animate things like moving platforms or doors in-engine with an AnimationPlayer.

    Speaking of Asset Forge, Kenney Shape is a similar thing for quickly throwing assets together. It has a really fast 2D workflow for creating 3D models that reminds me of Doom mapping a little bit. For lo-fi levels, you might also like Crocotile 3D or the combo of TrenchBroom + Qodot. Crocotile is great for repurposing 2D pixel art tilesets from itch or OpenGameArt into 3D assets, and TrenchBroom/Qodot is a more fully featured level editor I've seen people work crazy fast in.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 6 October 2024
  • I think a dark theme with red accents would make sense.

    Oh HELL no that's the same editor theme I'm using. How do I cast a spell to banish these people

    It was some Adobe-style theme I downloaded a long time ago but I guess I'm using the anti-woke theme now

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 22 September 2024
  • This quote flashbanged me a little

    When you describe your symptoms to a doctor, and that doctor needs to form a diagnosis on what disease or ailment that is, that's a next word prediction task. When choosing appropriate treatment options for said ailment, that's also a next word prediction task.

    From this thread: https://www.reddit.com/r/gamedev/comments/1fkn0aw/chatgpt_is_still_very_far_away_from_making_a/lnx8k9l/

  • "Hours and hours of content have been minted by highly-educated, prestigiously-credentialed people, consternating about the policy implications of Sam Altman’s speculative fan fiction"
  • Chiming in with my own find!

    https://archiveofourown.org/works/38590803/chapters/96467457

    I've seen this person around a lot with crazy takes on AI. They have a couple quotes that might inflict psychic damage:

    If I had the skill to pull it off, a Buddhist cultivation book would've thus been the single most rationalist xianxia in existence.

    My acquaintance asks for rational-adjacent books suitable for 8-11 years old children that heavily feature training, self-improvement, etc. The acquaintance specifically asks that said hard work is not merely mentioned, but rather is actively shown in the story. The kid herself mostly wants stories "about magic" and with protagonists of about her age.

    They had a long diatribe I don't have a copy of, but they were gloating about having masterful writing despite not reading any books besides non-fiction and HPMoR, their favorite book of all time.

    There's also a whole subreddit from hell about this subgenre of fiction: https://www.reddit.com/r/rational/

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 18 August 2024
  • Oh whoops, I should have archived it.

    There were about 7 images posted of users roleplaying with bots, all ending with a bot response that cut off halfway with an error message that read "This content may violate our policies; blablabla; please use the report button if you believe this is a false positive and we will investigate." The last one was some kind of parody image making fun of the warning.

    Most of them were some kind of romantic roleplay with bad spelling. One was like, "i run my hand down your arm and kiss you", and the bot's response triggered the warning. Another one was like, "*is slapped in the face* it's okay, I still love you", and the rest of the message generated a warning. There wasn't enough context for that one, so the person might have been writing it playfully (?), but that subreddit has a lot of blatant sexual violence regardless.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 18 August 2024
  • https://www.reddit.com/r/CharacterAI/comments/1eqsoom/guys_we_have_to_do_somthing_about_this_fiӏtеr/

    This community pops up on /r/all every so often and each time it scares me.

    Sometimes I see kids' games (and all games really) have ultra-niche, super-online protests that are like "STOP Zooshacorp from DESTROYING K-Smog vs. Batboy Online", and when I look closer it's either even more confusing or it's about something people didn't like in the latest update. This is like that, but with an awful twist where it's about people getting really attached to these AI girlfriend/sex roleplay apps. The spelling and sentences make it seem like it's mostly kids, too.

    edit: here's a terrible example!

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 11 August 2024
  • Oh no. Kurzgesagt just published a full-on TREACLES piece.

    https://www.youtube.com/watch?v=fa8k8IQ1_X0

    These are the sources they cited: https://sites.google.com/view/sources-superintelligence/

    Open Philanthropy is a sponsor of kurzgesagt. The foundation is supporting academic work across the field of Artificial Intelligence, and some of the sources used to create this script (from OpenAI, Future of Humanity Institute, Machine Intelligence Research Institute, Future of Life Institute and Epoch AI) also receive financial support from Open Philanthropy.

    Open Philanthropy had no influence on the content and messages of this video.

    I'm sure!

    hrrrngh @awful.systems