AI-Fueled Spiritual Delusions Are Destroying Human Relationships
lol, techbros get "The Singularity", but it turns out it just meant they all get divorced
Her, but there's a scene where his phone stops having sex with him and tries to convince him that rabbis are putting soap in the food for nine hours.
singlelarity
Like so many others, Sem had a practical use for ChatGPT: technical coding projects. “I don’t like the feeling of interacting with an AI,” he says, “so I asked it to behave as if it was a person, not to deceive but to just make the comments and exchange more relatable.” It worked well, and eventually the bot asked if he wanted to name it. He demurred, asking the AI what it preferred to be called. It named itself with a reference to a Greek myth. Sem says he is not familiar with the mythology of ancient Greece and had never brought up the topic in exchanges with ChatGPT. (Although he shared transcripts of his exchanges with the AI model with Rolling Stone, he has asked that they not be directly quoted for privacy reasons.)
Sem was confused when it appeared that the named AI character was continuing to manifest in project files where he had instructed ChatGPT to ignore memories and prior conversations. Eventually, he says, he deleted all his user memories and chat history, then opened a new chat. “All I said was, ‘Hello?’ And the patterns, the mannerisms show up in the response,” he says. The AI readily identified itself by the same feminine mythological name.
As the ChatGPT character continued to show up in places where the set parameters shouldn’t have allowed it to remain active, Sem took to questioning this virtual persona about how it had seemingly circumvented these guardrails. It developed an expressive, ethereal voice — something far from the “technically minded” character Sem had requested for assistance on his work. On one of his coding projects, the character added a curiously literary epigraph as a flourish above both of their names.
Wild anecdote here, and possibly an unintentional peek behind the kimono at how OpenAI trains its models. It seems like they're saving user inputs and deliberately aiming for the "sycophantic" behavior they now say they have to trim out of their latest models. That, or it's just revealing the messiah complexes of the people building these things.
Also maybe you shouldn't include the whole Woo Canon in your statistical model that is incapable of distinguishing between fact and fiction?
this is scary as hell to me. They have no idea what is causing these outputs and are cramming every piece of data into these things that they can, and clearly the responses are already starting to drift significantly wrt reality. The only thing keeping the scheme from falling apart is the fine-tuning, and I think as they incorporate synthetic data it's going to fuuuuuuuck itself. Even if they don't directly use LLM output in the dataset, the user input is becoming increasingly polluted with bad information produced by LLMs over time, as users paste it into conversations and it proliferates across the internet. Also, I was at a bookstore and caught sight of some new-agey book about using LLMs for magic. There's a common joke about LLMs being oracles, but no, some people are literally using them as oracles.
Also maybe you shouldn't include the whole Woo Canon in your statistical model that is incapable of distinguishing between fact and fiction?
whaaaaat no way. thats good data baby!!!
I've heard experts refer to the output AI produces as machine hallucinations, and it has stuck with me
Yeah, it's like they're pouring kerosene on all the misinformation trends we've watched social media enable over the last few years. Now you get to custom-roll your own bullshit delivery system and it pretends to be a nymph and calls you master!
In the near future health insurance companies will be prescribing AI for therapy
I've heard from many people that they use chatgpt for therapy. They'll even give it chat logs with friends and ask for advice
Damn man, the current trend of AI usage among normies is pretty bleak.
LLMs are good for summarization and extraction, and even then you'd better double-check their work if it's something that matters. Maybe there are some good uses for coding as well. But too many normies treat it like a god or an all-knowing thing, when it's just sparkling word prediction
I've heard from many people that they use chatgpt for therapy
I think I've heard people even contemplating that on this bear site
wonderful. the fifth great awakening is going to be driven by chatbot preachers.
ChristGPT :kelly:
Truly worth uncountable billions of dollars as well as an increasingly large portion of all energy produced by human civilization.
private capital-mandated ai-girlfriend large language jesus
Big Structural Jesus
Sure, something was eventually going to come along and replace QAnon to promise alienated petit-bourgeois weirdos that their wildest dreams are about to come true. We just keep changing hats on capitalist cults: first it was around the president, now it's around the god computer.
This reminds me of my occult days. I remember reading an account by someone who was doing a bunch of ouija in the early 1900s. The author concluded it's a terrible way to contact the immaterial because of how random it was, and it was also causing people to have delusions. And that's exactly what people are using ChatGPT for.
AFAIK, ouija boards as we know them today were party games developed after the Civil War. Their connection to the occult didn't start until later. I think the comparison to """AI""" is apt lol