Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 15 September 2024

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this)

205 comments
  • OpenAI manages to do an entire introduction of a new model without using the word "hallucination" even once.

    Apparently it implements chain-of-thought, which either means they changed the RLHF dataset to force it to explain its 'reasoning' when answering or to do self-questioning loops, or that it reprompts itself multiple times behind the scenes according to some heuristic until it synthesizes a best result, it's not really clear.

    Can't wait to waste five pools of drinkable water to be told to use C# features that don't exist, but at least it got like 25.2452323760909304593095% better at solving math olympiads as long as you allow it a few tens of tries for each question.
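    Nobody outside OpenAI knows which of those it is, but the "reprompt itself until some heuristic is satisfied" version is easy enough to sketch. Everything here is hypothetical: `query_model`, the confidence score, and the stopping threshold are all made up for illustration, since the actual mechanism hasn't been disclosed.

    ```python
    # Hypothetical sketch of a "reprompt until satisfied" loop.
    # query_model is a stub standing in for an LLM call; a real model
    # would not hand back a tidy confidence score like this.

    def query_model(prompt: str) -> tuple[str, float]:
        """Stub: returns (answer, confidence). Purely illustrative."""
        return f"answer to: {prompt}", 0.9

    def reprompt_loop(question: str, threshold: float = 0.8,
                      max_tries: int = 5) -> str:
        best_answer, best_score = "", -1.0
        prompt = question
        for _ in range(max_tries):
            answer, score = query_model(prompt)
            if score > best_score:
                best_answer, best_score = answer, score
            if score >= threshold:  # heuristic stopping rule
                break
            # feed the previous attempt back in as a "self-questioning" step
            prompt = (f"{question}\nPrevious attempt: {answer}\n"
                      "Critique it and try again.")
        return best_answer
    ```

    With the stub's fixed 0.9 confidence the loop stops after one pass; the point is only the shape of the control flow, not that this is what the model does.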

    • Some of my favorite reactions to this paradigm shift in machine intelligence we are witnessing:

      bless you Melanie.

      Mine olde friend, the log scale, still as beautiful the day I met you

      Weird, the AI that has read every chess book in existence and been trained on more synthetic games than any one human has seen in a lifetime still doesn't understand the rules of chess

      ^(just an interesting data point from Ernie, + he upvotes pictures of my dogs on FB so I gotta include him)

      Dog tax

    • Would there ever be a way to tell that they didn't just feed the answers into the training data?

  • One to keep an eye on… you might all know this already, but apparently Mozilla has an “add ai chatbot to sidebar” in Firefox labs (https://blog.nightly.mozilla.org/2024/06/24/experimenting-with-ai-services-in-nightly/ and available in at least v130). You can currently choose from a selection of public llm providers, similar to the search provider choice.

    Clearly, Mozilla has its share of AI boosters, given that they forced “ai help” onto MDN against a significant amount of protest (see https://github.com/mdn/yari/issues/9230 from last July for example) so I expect this stuff to proceed apace.

    This is fine, because Mozilla clearly has time and money to spare with nothing else useful they could be doing, alternative browsers are readily available and there has never been any anti-ai backlash to adding this sort of stuff to any other project.

  • this isn’t surprising, but now it’s confirmed: in addition to the environmental damage generative AI does by operating, and in spite of all attempts to greenwash it and present it as somehow a solution to climate change, of course Microsoft’s been pushing very hard for the oil and gas industry to use generative AI to maximize resource exploitation and production (via Timnit Gebru)

    • tbh i don't see a single sane way that genai could be used for anything like they say it can be, if it works it's gotta be something more or less custom. but ms doesn't care, because they're selling shovels so it doesn't matter if their shit doesn't work as long as someone's buying. it sorta starts looking like cryptobros in 2020-ish trying to insert themselves as middlemen everywhere where there's already some money

  • holy fuck awful.systems works on servo

    • so for posting it's definitely less than ideal (not pictured: the 15-second delay between typing and the comment text being filled in), but it actually renders lemmy with shockingly few issues

  • Why are you saying that LLMs are useless when they're useless only most of the time

    I'm sorry but I've been circling my room for an hour now seeing this and I need to share it with people lest I go insane.

    • I find the polygraph to be a fascinating artifact. mostly on account of how it doesn't work. it's not that it kinda works, that it more or less works, or that if we just iron out a few kinks the next model will do what polygraphs claim to do. the assumptions behind the technology are wrong. lying is not physiological; a polygraph cannot and will never work. you might as well hire me to read the tarot of the suspects, my rate of success would be as high or higher.

      yet the establishment pretends that it works, that it means something. because the State desperately wants to believe that there is a path to absolute surveillance, a way to make even one's deepest subjectivity legible to the State, amenable to central planning (cf. the inefficacy of torture). they want to believe it so much, they want this technology to exist so much, that they throw reality out of the window, ignore not just every researcher ever but the evidence of their own eyes and minds, and pretend very hard, pretend deliberately, willfully, desperately, that the technology does what it cannot do and will never do. just the other day some guy was condemned to take a polygraph with every statement for the rest of his life. again, this is no better than flipping a coin to decide if he's telling the truth, but here's the entire System, the courts, the judge, the State itself, solemnly condemning the man to the whims of imaginary oracles.

      I think this is how "AI" works, but on a larger scale.

    • that dude advocates LLM code autocomplete and he's a cryptographer

      like that code's gotta be a bug bounty bonanza

      • dear fuck:

        From 2018 to 2022, I worked on the Go team at Google, where I was in charge of the Go Security team.

        Before that, I was at Cloudflare, where I maintained the proprietary Go authoritative DNS server which powers 10% of the Internet, and led the DNSSEC and TLS 1.3 implementations.

        Today, I maintain the cryptography packages that ship as part of the Go standard library (crypto/… and golang.org/x/crypto/…), including the TLS, SSH, and low-level implementations, such as elliptic curves, RSA, and ciphers.

        I also develop and maintain a set of cryptographic tools, including the file encryption tool age, the development certificate generator mkcert, and the SSH agent yubikey-agent.

        I don’t like go but I rely on go programs for security-critical stuff, so their crypto guy’s bluesky posts being purely overconfident “you can’t prove I’m using LLMs to introduce subtle bugs into my code” horseshit is fucking terrible news to me too

        but wait, mkcert and age? is that where I know the name from? mkcert’s a huge piece of shit nobody should use that solves a problem browsers created for no real reason, but I fucking use age in all my deployments! this is the guy I’m trusting? the one who’s currently trolling bluesky cause a fraction of its posters don’t like the unreliable plagiarization machine enough? that’s not fucking good!

        maybe I shouldn’t be taking this so hard — realistically, this is a Google kid who’s partially funded by a blockchain company; this is someone who loves boot leather so much that most of their posts might just be them reflexively licking. they might just be doing contrarian trolling for a technology they don’t use in their crypto work (because it’s fucking worthless for it) and maybe what we’re seeing is the cognitive dissonance getting to them.

        but boy fuck does my anxiety not like this being the personality behind some of the code I rely on

    • Criticizing others for not being perfectly exacting with their language and then jumping in front of the LLM headlights all at once, truly the human mind has no limits.

    • Some ok anti-AI voices in that thread. But mostly a torrent of shit

    • Valsorda was on mastodon for a bit (in ‘22 maybe?) and was quite keen on it, but left after a bunch of people got really pissy at him over one of his projects. I can’t actually recall what it even was, but his argument was that people posted stuff publicly on mastodon, so he should be able to do what he liked with those posts even if they asked him not to. I can see why he might not have a problem with LLMs.

      Anyone remember what he was actually doing? Text search or network tracing or something else?

      • oh! was he the guy doing a search engine archiving as much of the fediverse as possible, over the objections of the people being indexed?

        yeah that tracks
