Posts: 3 · Comments: 80 · Joined: 1 yr. ago

  • It doesn’t matter. Vim is an emacs under the Finseth definition (which is my favorite way of riling up both vim and emacs people trying to keep the irrelevant editor war going). Those folks oughta find something else to center their entire personality around.

  • Here’s a pretty good sneer at the writing that comes out of LLMs, with a focus on meaning: https://www.experimental-history.com/p/28-slightly-rude-notes-on-writing

    Maybe that’s my problem with AI-generated prose: it doesn’t mean anything because it didn’t cost the computer anything. When a human produces words, it signifies something. When a computer produces words, it only signifies the content of its training corpus and the tuning of its parameters.

    Also, on people:

    I see tons of essays called something like “On X” or “In Praise of Y” or “Meditations on Z,” and I always assume they’re under-baked. That’s a topic, not a take.

  • Guess we’re doing stupid identity verification orbs now: https://sfstandard.com/2025/05/01/this-is-like-black-mirror-sam-altmans-creepy-eye-scanner-project-launches-in-sf/

    Instead of this expensive imitation of a Voight-Kampff test, I would suggest an alternative method of detecting whether a personoid is really a human or an instrument of an evil inhuman intelligence that wishes to consume all of earth: check if their net worth is closer to a billion dollars than it is to being broke.

  • They insist on putting ever more important things (now it’s our souls and minds! lol and lmao) into ever more complex and failure-prone systems, yet none of these motherfuckers have built or maintained a computer in decades.

  • So many projects and small websites I’m aware of are being overtaxed by shitty LLM scrapers these days; it feels like an intentional attack. I guess the idea of AI can’t fail, it can only be failed; and so its profiteers must sabotage anything that indicates it’s not beneficial/necessary.

  • TechTakes @awful.systems

    Nature: AI generates covertly racist decisions about people based on their dialect

  • TechTakes @awful.systems

    Yet another completely normal use of generative AI: not-obviously-illegal CSAM

  • TechTakes @awful.systems

    The “I will piledrive you” post, discussed on the Better Offline podcast