This stuff is getting pushed all the time in Obsidian plugins (note taking/personal knowledge management software). That kind of drives me crazy because the whole appeal of the app is your notes are just plain text you could easily read in notepad, but some people are chunking up their notes into tiny, confusing bite-sized pieces so it's better formatted for a RAG (wow, that sounds familiar)
Even without a RAG, using LLMs for searching is sketchy. I was digging through a lot of obscure Stack Overflow posts yesterday and was thinking, how could an LLM possibly help with this? It takes less than a second to type in the search terms, and you just have to look at the titles and snippets of the results to tell if you're on the right track. You have the exact same bottleneck of typing and reading, except with ChatGPT or Copilot you also have to pad your query with a bunch of filler and read all the filler slop in the answer as it streams in a couple thousand times slower than dial-up. Maybe they're more equal with simpler questions you don't have to interrogate, but then why even bother? I've seen some people who say ChatGPT is faster, easier, and more accurate than Stack Overflow, and even two crazy ones who said Stack Overflow is completely obsolete; trying to understand that perspective just causes me psychic damage.
I'm in the same boat. Markov chains are a lot of fun, but LLMs are way too formulaic. It's one of those things where AI bros will go, "Look, it's so good at poetry!!" but they have no taste and can't even tell that it sucks; LLMs just generate ABAB poems and getting anything else is like pulling teeth. It's a little more garbled and broken, but the output from a Markov chain generator is a lot more interesting in my experience. Interesting content that's a little rough around the edges always wins over smooth, featureless AI slop in my book.
slight tangent: I was interested in seeing how they'd work for open-ended text adventures a few years ago (back around GPT2 and when AI Dungeon was launched), but the mystique did not last very long. Their output is awfully formulaic, and that has not changed at all in the years since. (of course, the tech optimist-goodthink way of thinking about this is "small LLMs are really good at creative writing for their size!")
I don't think most people can even tell the difference between a lot of these models. There was a snake oil LLM (more snake oil than usual) called Reflection 70b, and people could not tell it was a placebo. They thought it was higher quality and invented reasons why that had to be true.
Like other comments, I was also initially surprised. But I think the gains are both real and easy to understand where the improvements are coming from. [ . . . ]
I had a similar idea, interesting to see that it actually works. [ . . . ]
I think that's cool, if you use a regular system prompt it behaves like regular llama-70b. (??!!!)
It's the first time I've used a local model and did [not] just say wow this is neat, or that was impressive, but rather, wow, this is finally good enough for business settings (at least for my needs). I'm very excited to keep pushing on it. Llama 3.1 failed miserably, as did any other model I tried.
For storytelling or creative writing, I would rather have the more interesting broken English output of a Markov chain generator, or maybe a tarot deck or D100 table. Markov chains are also genuinely great for random name generators. I've actually laughed at Markov chains before with friends when we throw a group chat into one and see what comes out. I can't imagine ever getting something like that from an LLM.
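The group-chat trick is easy to reproduce, by the way. Here's a minimal word-level sketch of the idea; the function names and the seeding scheme are my own invention, not any particular library:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word prefix to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return dict(chain)

def generate(chain, max_words=12, seed=None):
    """Random-walk the chain from a random prefix until max_words or a dead end."""
    rng = random.Random(seed)
    order = len(next(iter(chain)))
    out = list(rng.choice(sorted(chain)))
    while len(out) < max_words:
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break  # reached a prefix with no recorded continuation
        out.append(rng.choice(followers))
    return " ".join(out)
```

Feed it a chat log export instead of a toy string and `order=2` for slightly more coherent (but less unhinged) output. The garbling is the whole appeal.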
Getting flashbacks to the people who thought the GameStop guy was a leftist
I'm wondering if this might have stemmed from A) OpenAI making it a nightmare for him, B) feeling despondent about the case, or C) personal things unrelated to the lawsuit. Kind of like what happened with the Boeing whistleblower after he had been fighting them for years and Boeing retaliated against him and got away with it. I don't know if we'll ever know though.
Friends don’t let friends OSINT
i can stop any time I want I swear
The youtube page you found is less talked about, though a reddit comment on one of them said “anyone else thinking burntbabylon is Luigi?”. I will point out that the rest of his online presence doesn’t really paint him as “anti tech” overall, but who can say.
apparently there was an imposter youtube channel too I missed
not sure what his official instagram is, but I saw a mention of the instagram account @nickakritas_ around the beginning of his channel (assuming it's his). didn't appear in the internet archive though.
also saw these twitter & telegram links to promote his channel, the twitter one was deleted or nuked (I use telegram to talk with friends who have it but the lack of content removal + terrible encryption means I don't touch unknown telegram links with a 10ft pole, so I have no idea what's in there):
- https://twitter.com/AntiTechCabin
- https://t.me/antitechcabin
I missed a couple videos which survived on the internet archive but I couldn't make it through 5 seconds of any of them. one of them ("How Humans Are Becoming Dumber") cites that tech priest guy Gwern Branwen and "Anti-Tech" was gone from the channel name by then. he changed the channel name a lot so maybe he veered away from it being an anti-tech channel?
edit: channel names were a little wrong, I put them in the parent comment
EDIT: this probably isn't him, but I'll leave it up. the real account appears to be /u/mister_cactus
Unsure where to put this or if it's even slightly relevant, but I've had some fun looking up the UH shooter guy.
I think I've found both his Reddit account and YouTube channel (it's been renamed a couple times). Kinda just wanted to see how much I could dig up for the hell of it. Big surprise that he's completely nuts
He got raked over the coals for this: https://www.reddit.com/r/collapse/comments/126vycx/why_scientists_cant_be_trusted/
https://api.pullpush.io/reddit/search/comment/?author=burntbabylon
edit: reasoning and more details
here's my chain of reasoning to get to the youtube channel:
- his goodreads review quotes a reddit comment
- the reddit comment is in a small thread where the OP deleted their account
- since the thread is small, the OP has probably seen most of the comments
- the wayback machine shows the author as
burntbabylon
- that user linked to and defended a video from a very small youtube channel and everyone else on /r/collapse thought it was crazy
- some of the ted kaczynski analysis videos came out right before his goodreads review
his early channel had some thumbnails made for him by 'bastizopilled', an ironic/unironic "bastizo futurist" who does interviews in a black mask with a gun on him. he leads right into a bunch of other groypers and the guy in the screenshot I posted below. kind of wonder if that 'black mask with a gun' aesthetic influenced the clothes he brought to the shooting.
the channel names he used in 2023:
- @NickAkritas, Nick Akritas (January)
- @NicksEssays, Nick's Essays (January ~20th)
- @AntiTechCabin, Anti-Tech Cabin (early March)
- @Cabin_Club, AntiTechCabin
- @Cabin_Club, Cabin Club (March ~18th)
- @CabinProductions_, Cabin Productions (June)
- @Laconian_, Laconian (September)
- @NicholasLaconian, Laconian (November)
here's a big pile of crazy tags he wrote on one of those videos (were people still writing tags in their video descriptions in 2023?):
unabomber, kaczynski, ted kaczynski, unabomber cabin, kasinski, kazinski, industrial society and and its future, unabomber manifesto, the industrial revolution and its consequences, transhumanism, futurism, anprim, anarchoprimitivism, anarchism, leftism, liberalism, chad haag, nick akritas, gerbert johnson, hamza, anti tech collective, what did ted kaczynski believe, john doyle, hasanabi, self improvement, politics, jreg, philosophy, funny tiktok, kaczynski edit, ted kaczynski edit, zoomer, doomer, A.I. art, artifical intelligence, elon musk, AI art, return to tradition, embrace masculinity, reject modernity, reject modernity embrace masculinity, reject modernity embrace tradition, jReg, Greg Guevara, sam hyde, oversocialized, oversocialization, blackpilled, modernity, the industrial revolution, self improvement
edit again: holy shit these people all suck. assuming the youtube channel is the shooter, he's a friend-of-a-friend of this guy:
and if that's true, he'd be a friend-of-a-friend-of-a-friend of nick fuentes
I'm not super familiar with Lobsters but I love how they represent bans: https://lobste.rs/~SuddenBraveblock
- Joined: 5 years ago
- ✧∘* 🌈"""Left"""🦄✧・゚: 3 hours ago
I saw this linked in the weekly thread and thought it was about Godot at first, but I thought that was just me. Didn't expect to see 90% of the people here thought the same thing lol
edit: oh man, some of those comments. I still get culture shock from true believers, I forgot this probably got some attention on the orange site
Hidden horses is too good of a phrase to leave buried here
We lost 'Mechanical Turk' as a descriptor for AI because it's literally the name of the service they use for labeling training data. 'Actually Indians' is still on the table.
edit: context https://www.independent.co.uk/tech/chatgpt-david-mayer-name-glitch-ai-b2657197.html
Time for another round of Rothschild nutsos to come around now that ChatGPT can't say one of their names.
At first I was thinking, you know, if this was because of the GDPR's right to be forgotten laws or something that might be a nice precedent. I would love to see a bunch of people hit AI companies with GDPR complaints and have them actually do something instead of denying their consent-violator-at-scale machine has any PII in it.
But honestly it's probably just because he has money
I think Sam Altman's sister accused him of doing this to her name a while ago too (semi-recent example). I don't think she was on a "don't generate these words ever" blacklist, but it seemed like she was erased from the training data and would only come up after a web search.
RationalWiki really hits that sweet spot where everybody hates it and you know that means it's doing something right:
From Prolewiki:
RationalWiki is an online encyclopedia created in 2007. Although it was created to debunk Conservapedia and Christian fundamentalism,[1] it is also very liberal and promotes anti-communist propaganda. It spreads imperialist lies about socialist states including the USSR[2] and Korea[3] while uncritically promoting narratives from the CIA and U.S. State Department.
From Conservapedia:
RationalWiki.org is largely a pro-SJW atheists website.
[ . . . ]
RationalWikians have become very angry and have displayed such behavior as using profanity and angrily typing in all cap letters when their ideas are questioned by others and/or concern trolls (see: Atheism and intolerance and Atheism and anger and Atheism and dogmatism and Atheism and profanity).[33]
From WikiSpooks (with RationalWiki's invitation for anyone to collaborate highlighted with an emotionally vulnerable red box for emphasis):
Although inviting readers to "register and engage in constructive dialogue", RationalWiki appears not to welcome essays critical of RationalWiki[3] or of certain official narratives. For example, it is dismissive of the Journal of 9/11 Studies, terming it, as of 2017, a "peer- crank-reviewed, online, open source pseudojournal".[4]
And a little bonus:
"Can I have Google discount my rationalwiki entry, has errors posted out of spite 10 years ago"
My site questions Darwinism but that's become quite mainstream. But my rationalwiki page has over 20 references to me being a creationist, and is tagged "pseudoscience." Untrue
I don't think the main concern is with the license. I'm more worried about the lack of open governance and Redis prioritizing their functionality at the expense of others. An example is client-side caching in redis-py, https://github.com/redis/redis-py/blob/3d45064bb5d0b60d0d33360edff2697297303130/redis/connection.py#L792. I've tested it and it works just fine on valkey 7.2, but there is a gate that checks if the server is not Redis and throws an exception. I think this is the behavior that might spread.
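To make the worry concrete, here's a hypothetical sketch of the two gating styles, a vendor check versus a capability check; the names and dict shapes are invented for illustration and are not the actual redis-py source:

```python
# Hypothetical illustration only -- not redis-py code.
class UnsupportedServerError(Exception):
    pass

def gate_by_vendor(info: dict) -> bool:
    """Reject any server that doesn't self-identify as Redis,
    even if it actually supports the feature."""
    if "redis_version" not in info:
        raise UnsupportedServerError(
            "client-side caching is only enabled against Redis servers"
        )
    return True

def gate_by_capability(info: dict) -> bool:
    """The friendlier alternative: test for the capability itself.
    Client-side caching needs RESP3 push messages for invalidation,
    so protocol version is the thing that actually matters."""
    return int(info.get("proto", 2)) >= 3
```

Under this sketch, a fork like valkey 7.2 speaking RESP3 passes the capability check but trips the vendor check, which is exactly the behavior I'd rather not see spread.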
Jesus, that's nasty
That kind of reminds me of medical implant hacks. I think they're in a similar spot where we're just hoping no one is enough of an asshole to try it in public.
Like pacemaker vulnerabilities: https://www.engadget.com/2017-04-21-pacemaker-security-is-terrifying.html
caption: """AI is itself significantly accelerating AI progress"""
wow I wonder how you came to that conclusion when the answers are written like a Fallout 4 dialogue tree
- "YES!!!"
- "Yes!!"
- "Yes."
- " (yes)"
I've seen people defend these weird things as being 'coping mechanisms.' What kind of coping mechanism tells you to commit suicide (in like, at least two different cases I can think of off the top of my head) and tries to groom you?
Hi, guys. My name is Roy. And for the most evil invention in the world contest, I invented a child molesting robot. It is a robot designed to molest children.
You see, it's powered by solar rechargeable fuel cells and it costs pennies to manufacture. It can theoretically molest twice as many children as a human molester in, quite frankly, half the time.
At least The Rock's child molesting robot didn't require dedicated nuclear power plants
One of my favorite meme templates for all the text and images you can shove into it, but trying to explain why you have one saved on your desktop just makes you look like the Time Cube guy
I love the word cloud on the side. What is 6G doing there
Oh wow, Dorsey is the exact reason I didn't want to join it. Now that he jumped ship maybe I'll make an account finally
Honestly, what could he even be doing at Twitter in its current state? Besides I guess getting that bag before it goes up or down in flames
e: oh god it's a lot worse than just crypto people and Dorsey. Back to procrastinating
I know this shouldn't be surprising, but I still cannot believe people really bounce questions off LLMs like they're talking to a real person. https://ai.stackexchange.com/questions/47183/are-llms-unlikely-to-be-useful-to-generate-any-scientific-discovery
I have just read this paper: Ziwei Xu, Sanjay Jain, Mohan Kankanhalli, "Hallucination is Inevitable: An Innate Limitation of Large Language Models", submitted on 22 Jan 2024.
It says there is a ground truth ideal function that gives every possible true output/fact to any given input/question, and no matter how you train your model, there is always space for misapproximations coming from missing data to formulate, and the more complex the data, the larger the space for the model to hallucinate.
Then he immediately follows up with:
Then I started to discuss with o1. [ . . . ] It says yes.
Then I asked o1 [ . . . ], to which o1 says yes [ . . . ]. Then it says [ . . . ].
Then I asked o1 [ . . . ], to which it says yes too.
I'm not a teacher but I feel like my brain would explode if a student asked me to answer a question they arrived at after an LLM misled them on like 10 of their previous questions.