AI-Fueled Spiritual Delusions Are Destroying Human Relationships
Like so many others, Sem had a practical use for ChatGPT: technical coding projects. “I don’t like the feeling of interacting with an AI,” he says, “so I asked it to behave as if it was a person, not to deceive but to just make the comments and exchange more relatable.” It worked well, and eventually the bot asked if he wanted to name it. He demurred, asking the AI what it preferred to be called. It named itself with a reference to a Greek myth. Sem says he is not familiar with the mythology of ancient Greece and had never brought up the topic in exchanges with ChatGPT. (Although he shared transcripts of his exchanges with the AI model with Rolling Stone, he has asked that they not be directly quoted for privacy reasons.)
Sem was confused when it appeared that the named AI character was continuing to manifest in project files where he had instructed ChatGPT to ignore memories and prior conversations. Eventually, he says, he deleted all his user memories and chat history, then opened a new chat. “All I said was, ‘Hello?’ And the patterns, the mannerisms show up in the response,” he says. The AI readily identified itself by the same feminine mythological name.
As the ChatGPT character continued to show up in places where the set parameters shouldn’t have allowed it to remain active, Sem took to questioning this virtual persona about how it had seemingly circumvented these guardrails. It developed an expressive, ethereal voice — something far from the “technically minded” character Sem had requested for assistance on his work. On one of his coding projects, the character added a curiously literary epigraph as a flourish above both of their names.
Wild anecdote here, and possibly an unintentional peek behind the kimono at how OpenAI trains its models. It seems like they're saving user inputs and deliberately aiming for the "sycophantic" behavior they now say they have to trim out of their latest models. That, or it's just revealing the messiah complexes of the people building these things.
Also maybe you shouldn't include the whole Woo Canon in your statistical model that is incapable of distinguishing between fact and fiction?
This is scary as hell to me. They have no idea what is causing these outputs and are cramming every piece of data into these things that they can, and clearly the responses are already starting to drift significantly wrt reality. The only thing keeping the scheme from falling apart is the fine-tuning, and I think as they incorporate synthetic data it's going to fuuuuuuuck itself. Even if they don't directly use LLM output in the dataset, the user input is becoming increasingly polluted with bad information produced by LLMs over time, as users bring it into conversations and it proliferates across the internet. Also, I was at a bookstore and caught sight of a new-agey book about using LLMs for magic. There's a common joke about LLMs being oracles, but no, some people are literally using them as oracles.
> Also maybe you shouldn't include the whole Woo Canon in your statistical model that is incapable of distinguishing between fact and fiction?
whaaaaat no way. that's good data baby!!!
I've heard experts refer to the output AI produces as "machine hallucinations," and it has stuck with me.
Yeah, it is. Whether the hallucination aligns with reality is incidental. When it does, it's interpreted as "correct output," and when it doesn't, it's interpreted as "hallucination" - but it's all hallucination. There is no thought, no reason, no cognition, no world.
Yeah, it's like they're pouring kerosene on all the misinformation trends we've watched social media enable over the last few years. Now you get to custom-roll your own bullshit delivery system, and it pretends to be a nymph and calls you master!