BlueMonday1984 @awful.systems · Posts 42 · Comments 532 · Joined 1 yr. ago
New piece from Brian Merchant: Deconstructing the new American oligarchy
Looking at the history of AI, if it fails there will be another AI winter, and considering the size of this bubble, the next winter will be an Ice Age. No mind uploads for anybody, the dead stay dead, and all that time is wasted.
Adding insult to injury, they'd likely also have to contend with the fact that much of the harm this AI bubble caused was the direct consequence of their dumbshit attempts to prevent an AI Apocalypse™.
As for the upcoming AI winter, I'm predicting we're gonna see the death of AI as a concept once it starts. With LLMs and Gen-AI thoroughly redefining how the public thinks and feels about AI (near-universally for the worse), I suspect the public's gonna come to view humanlike intelligence/creativity as something unachievable by artificial means, and I expect future attempts at creating AI to face ridicule at best and active hostility at worst.
Taking a shot in the dark, I suspect we'll see active attempts to drop the banhammer on AI as well, though admittedly my only basis for that is a random BlueSky post openly calling for LLMs to be banned.
Quick update on the CoreWeave affair: turns out they're facing technical defaults on their Blackstone loans, which is gonna hurt their IPO a fair bit.
I'd bet good money they vibe-coded the whole thing. It's AI, the whole point is to enable laziness, grifts, and laziness in grifts.
Here's the link, so you can read Stack's teardown without giving orange site traffic:
https://ewanmorrison.substack.com/p/the-tranhumanist-cult-test
Stumbled across some AI criti-hype in the wild on BlueSky:
The piece itself is a textbook case of AI anthropomorphisation, presenting it as learning to hide its "deceptions" when it's actually learning to avoid tokens that paint it as deceptive.
On an unrelated note, I also found someone openly calling gen-AI a tool of fascism in the replies - if you want my take, that's another sign of AI's impending death as a concept (a sign I've touched on before without realising):
AI bros are exceedingly lazy fucks by nature, so this kind of shit should be pretty rare. Combine that with their near-complete lack of taste, and the risk that such an attempt succeeds drops pretty low.
(Sidenote: Didn't know about Samizdat until now, thanks for the new rabbit hole to go down)
Is there already a nice term for “this was published before the slop flood gates opened”? There should be.
"Pre-slopnami" works well enough, I feel.
EDIT: On an unrelated note, I suspect hand-writing your original manuscript (or using a typewriter) will also help increase its value, simply by strongly suggesting ChatGPT was not involved in making it.
I tagged it NSFW because the previous thread was tagged NSFW.
Ultra-rare footage of orange site having a good take for once:
Top-notch sneer from lobsters' top comment, as well (as of this writing):
If you want my opinion, I expect AntiRez's pleas to fall on deaf ears. The AI companies are only getting funded due to LLM hype - when that dies, investors' reason to throw money at them dies as well.
I much prefer the Whoppenheimer.
I didn't mean to link something else, I just mangled my description. Thanks for catching it.
Court documents regarding Facebook's plagiarism lawsuit just started getting unsealed, and ho-lee shit is this a treasure trove:
This confirms basically everything I said a week ago - AI violates copyright by design, and a single copyright suit going through means it's open fucking season on the AI industry. Wonder who's gonna blink first.
lol, that’s too charitable to them, nukes at least work
And Oppie realised the gravity of their invention. And he was trying to end the Second World War with it, not make money by causing untold suffering.
Nukes and AI both represented a new and unique threat capable of causing worldwide devastation, so I'd say the analogy works pretty well.
New thread from Baldur Bjarnason:
Keep hearing reports of guys trusting ChatGPT’s output over experts or even actual documentation. Honestly feels like the AI Bubble’s hold over society has strengthened considerably over the past three months
This also highlights my annoyance with everybody who’s claiming that this tech will be great if everyone uses it responsibly. Nobody’s using it responsibly. Even the people who think they are already trust the tech much more than it warrants
Also constantly annoyed by analysis that assumes the tech works as promised or will work as promised. The fact that it is unreliable and nondeterministic needs to be factored into any analysis you do. But people don’t do that because the resulting conclusion is GRIM as hell
LLMs add volatility and unpredictability to every system they touch, which makes those systems impossible to manage. An economy with pervasive LLM automation is an economy in constant chaos
On a semi-related note, I expect the people who are currently making heavy use of AI will find themselves completely helpless without it if/when the bubble finally bursts, and will probably struggle to find sympathy from others thanks to AI indelibly staining their public image.
(The latter part is assuming heavy AI users weren't general shitheels before - if they were, AI's stain on their image likely won't affect things either way. Of course, "AI bro" is synonymous with "trashfire human being", so I'm probably being too kind to them :P)
from someone who helped build their LLM
Nice to get an inside look from one of the 21st-century Oppenheimers.
Rupert Murdoch’s News Corporation fills its tabloid papers across Australia with right-wing slop. Now the slop will come from a chatbot — and not a human slop churner.
The quality of its tabloids will remain exactly the same, I presume.
r/cursor is the gift that keeps on giving:
Don't Date Robots!
Kill them instead