
SW
Posts: 19 · Comments: 638 · Joined: 2 yr. ago

  • A few years ago, maybe a few months after moving to the Bay Area, a guy from my high school messaged me on LinkedIn. He was also in the Bay, and wanted to network, I guess? I ghosted him, because I didn’t know him at all, and when I asked my high school friends about him, he got some bad reviews. Anyway, today LinkedIn suggests/shoves a post down my throat where he is proudly talking about working at Anthropic. Glad I ghosted!

    PS/E: Anthro Pic is definitely a furry term. Is that anything?

  • OK, I speed-read that thing earlier today, and am now reading it proper.

    The best answer — AI has “jagged intelligence” — lies in between hype and skepticism.

    Here's how they describe this term, about 2000 words in:

    Researchers have come up with a buzzy term to describe this pattern of reasoning: “jagged intelligence.” [...] Picture it like this. If human intelligence looks like a cloud with softly rounded edges, artificial intelligence is like a spiky cloud with giant peaks and valleys right next to each other. In humans, a lot of problem-solving capabilities are highly correlated with each other, but AI can be great at one thing and ridiculously bad at another thing that (to us) doesn’t seem far apart.

    So basically, this term is just pure hype, designed to play up the "intelligence" part of it, to suggest that "AI can be great". The article just boils down to "use AI for the things that we think it's good at, and don't use it for the things we think it's bad at!" As they say on the internet, completely unserious.

    The big story is: AI companies now claim that their models are capable of genuine reasoning — the type of thinking you and I do when we want to solve a problem. And the big question is: Is that true?

    Demonstrably no.

    These models are yielding some very impressive results. They can solve tricky logic puzzles, ace math tests, and write flawless code on the first try.

    Fuck right off.

    Yet they also fail spectacularly on really easy problems. AI experts are torn over how to interpret this. Skeptics take it as evidence that “reasoning” models aren’t really reasoning at all.

    Ah, yes, as we all know, the burden of proof lies on skeptics.

    Believers insist that the models genuinely are doing some reasoning, and though it may not currently be as flexible as a human’s reasoning, it’s well on its way to getting there. So, who’s right?

    Again, fuck off.

    Moving on...

    The skeptic's case

    vs

    The believer’s case

    An LW-level analysis shows that the article spends 650 words on the skeptic's case and 889 on the believer's case. BIAS!!!!! /s.

    Anyway, here are the skeptics quoted:

    • Shannon Vallor, "a philosopher of technology at the University of Edinburgh"
    • Melanie Mitchell, "a professor at the Santa Fe Institute"

    Great, now the believers:

    • Ryan Greenblatt, "chief scientist at Redwood Research"
    • Ajeya Cotra, "a senior analyst at Open Philanthropy"

    You will never guess which two of these four are regular wrongers.

    Note that the article only really has examples of the dumbass-nature of LLMs. All the smart things it reportedly does are anecdotal, i.e. the author just says shit like "AI can solve some really complex problems!" Yet, it still has the gall to both-sides this and suggest we've boiled the oceans for something more than a simulated idiot.

  • Why? Per the poll: “a lack of reliability.” The things being sold as “agents” don’t … work.

    Vendors insist that the users are just holding the agents wrong. Per Bret Taylor of Sierra (and OpenAI):

    Accept that it is imperfect. Rather than say, “Will AI do something wrong”, say, “When it does something wrong, what are the operational mitigations that we’ve put in place to deal with it?”

    I think this illustrates the situation of the LLM market pretty well: not just, at a shallow level, the base incentives of the parties at play, but also, at a deeper level, the general lack of humanity and the tolerance for dogshit that the AI companies are trying to brainwash everyone into accepting.

  • What would some good unifying demands be for a hostile takeover of the Democratic party by centrists/moderates?

    me, taking this at face value, and understanding the political stances of the Democrats, and going by my definition of centrist/moderate that is more correct than whatever the hell Kelsey Piper thinks it means: Oh, this would actually push the Democrats left.

    Anyway, jesus christ I regret clicking on that name and reading. How the fuck is anyone this stupid. Vox needs to be burned down.

  • thielbucks laundered through 17 cutouts get used to fund useful idiots for stochastic terrorism to whip up an anti-trans panic

    While this didn’t happen to the letter, thielthoughts stochastically brought about Luigi, and we have the thielsweat to show for it

  • TechTakes @awful.systems: Musk accretes Gebbia, wants to run an AirBnB out of 1600 Pennsylvania

  • TechTakes @awful.systems: What is the charge? Eating an LLM? A succulent Chinese LLM? Deepseek judo-thrown out of Australian government devices

  • TechTakes @awful.systems: OpenAI says stealing is wrong and bad unless they do it

  • TechTakes @awful.systems: Musk presses H twice to perform the Nazi salute twice on stage at inauguration.

  • TechTakes @awful.systems: Grimes defends Musk’s gaming honor, for “personal pride”

  • TechTakes @awful.systems: GM avoids reinventing a bus or a train with this one weird trick

  • TechTakes @awful.systems: Are we going to see prediction markets enter mainstream journalism? Taylor Lorenz says yes

  • TechTakes @awful.systems: SF tech startup Scale AI, worth $13.8B, accused of widespread wage theft

  • SneerClub @awful.systems: In which some researchers draw a spooky picture and spook themselves

  • TechTakes @awful.systems: Elon is constructing a Texas compound to trap, err, I mean, house his ex-wives and children

  • TechTakes @awful.systems: Microsoft says EU to blame for the world's worst IT outage

  • SneerClub @awful.systems: The Star Fox-style roguelite whose dev refused to use AI voices to cut costs is adding an entire "anti-capitalist revenge" campaign about a cat-girl destroying AI

  • TechTakes @awful.systems: An AI beauty pageant? Miss me with that Miss AI.

  • SneerClub @awful.systems: "The Better Angels of Our Nature" Part 2: Campus Lies, I.Q. Rise & Epstein Ties - If Books Could Kill

  • TechTakes @awful.systems: Guy who “would convince employees to take shots of pricey Don Julio tequila, work 20-hour days [and] attend 2am meetings” wants to own WeWork again

  • SneerClub @awful.systems: Roko’s Basilisk gets a shoutout on CONAF, hypothetically dooming many unsuspecting listeners to… nothing, basically.

  • TechTakes @awful.systems: A (non-tech) comedy podcast I like covered FTX!