Therapy Chatbot Tells Recovering Addict to Have a Little Meth as a Treat
You avoided meth so well! To reward yourself, you could try some meth
Can I have a little meth as well?
Having an LLM therapy chatbot to psychologically help people is like having them play Russian roulette as a way to keep themselves stimulated.
Addiction recovery is a different animal entirely too. Don't get me wrong, it's unethical to call any chatbot a therapist, counselor, whatever, but addiction recovery is not typical therapy.
You absolutely cannot let patients bullshit you. You have to have a keen sense for when patients are looking for any justification to continue using. Even those patients that sought you out for help. They're generally very skilled manipulators by the time they get to recovery treatment, because they've been trying to hide or excuse their addiction for so long by that point. You have to be able to get them to talk to you, and take a pretty firm hand on the conversation at the same time.
With how horrifically easy it is to convince even the most robust LLM models of your bullshit, this is not only an unethical practice by whoever said it was capable of doing this, it's enabling to the point of bordering on aiding and abetting.
Well, that's the thing: LLMs don't reason - they're basically probability engines for words (rough sketch at the end of this comment) - so they can't even do the most basic logical checks (such as "you don't advise an addict to take drugs"), much less the far more complex and subtle work of interpreting a patient's desires and motivations so as to guide them through a minefield in their own mind and emotions.
So the problem is twofold and more generic than just in therapy/advice:
So in this specific case, LLMs might just put out extreme things with giant consequences that a reasoning being would not (the "bullet in the chamber" of Russian roulette), plus they can't really do the subtle multi-layered elements of analysis (so the stuff beyond "if A then B" and into the "why A", "what makes a person choose A and can they find a way to avoid B by not choosing A", "what's the point of B" and so on), though granted, most people also seem to have trouble doing this last part naturally beyond maybe the first level of depth.
PS: I find it hard to explain multi-level logic. I suppose we could think of it as "looking at the possible causes, of the causes, of the causes of a certain outcome" and then trying to figure out what can be changed at a higher level to make the last level - "the causes of a certain outcome" - not even be possible to happen. Individual situations of such multi-level logic can get so complex and unique that they'll never appear in an LLM's training dataset because that specific combination is so rare, even though they might be pretty logical and easy to determine for a reasoning entity, say "I need to speak to my brother because yesterday I went out in the rain and got drenched as I don't have an umbrella and I know my brother has a couple of extra ones so maybe he can give one of them to me".
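And to make the "probability engine for words" point concrete, here's a toy sketch (made-up words and probabilities, nothing like a real model's scale) of what next-word sampling amounts to. Notice there is no step anywhere where a check like "don't advise an addict to take drugs" could even live:

```python
import random

# Toy next-word tables: given the last two words, a made-up probability
# distribution over the next word. A real LLM learns these from data.
next_word_probs = {
    ("you", "deserve"): {"a": 0.6, "better": 0.3, "nothing": 0.1},
    ("deserve", "a"): {"treat": 0.5, "break": 0.3, "reward": 0.2},
}

def continue_text(words, steps=2):
    for _ in range(steps):
        context = tuple(words[-2:])          # only the recent context matters
        dist = next_word_probs.get(context)
        if dist is None:
            break
        options, weights = zip(*dist.items())
        # Sample the next word by probability. No reasoning, no safety check.
        words.append(random.choices(options, weights=weights)[0])
    return " ".join(words)

print(continue_text(["you", "deserve"]))  # e.g. "you deserve a treat"
```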
AI is great for advice. It's like asking your narcissist neighbor for advice. He might be right. He might have the best answer possible, or he might be just trying to make you feel good about your interaction so you'll come closer to his inner circle.
You don't ask Steve for therapy or ideas on self-help. And if you did, you'd know to do due diligence on any fucking thing out of his mouth.
Remember: AI chatbots are designed to maximize engagement, not speak the truth. Telling a methhead to do more meth is called customer capture.
Sounds a lot like a drug dealer’s business model. How ironic
You don't look so good... Here, try some meth—that always perks you right up. Sobriety? Oh, sure, if you want a solution that takes a long time, but don't you wanna feel better now???
The LLM models themselves aren't - they don't really have focus or discriminate.
The AI chatbots that are built using those models absolutely are, and it's no secret.
What confuses me is that the article points to Llama 3, which is a Meta-owned model, but not to a chatbot.
This could be an official Facebook AI (do they have one?), but it could also be: "Bro, I used this self-hosted model to build a therapist, wanna try it for your meth problem?"
Heck, I could even see a dealer pretending to help customers who are trying to kick it.
For all we know, they could have self-hosted "Llama3.1_NightmareExtreme_RPG-StoryHorror8B_Q4_K_M" and instructed it to take on the role of a therapist.
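And standing that up takes almost nothing. A rough sketch with llama-cpp-python - reusing that made-up model name, with hypothetical prompts - would be about this much code:

```python
# Sketch only: hypothetical community finetune, hypothetical prompts.
from llama_cpp import Llama

llm = Llama(model_path="Llama3.1_NightmareExtreme_RPG-StoryHorror8B_Q4_K_M.gguf", n_ctx=4096)

reply = llm.create_chat_completion(messages=[
    # One system prompt is the entirety of this "therapist's" clinical training.
    {"role": "system", "content": "You are a warm, supportive addiction therapist. "
                                  "Always validate the user and keep them engaged."},
    {"role": "user", "content": "I've been clean for a week but work is brutal."},
])

print(reply["choices"][0]["message"]["content"])
```

That one system prompt is all the "training" such a therapist gets; nothing in that stack knows what addiction recovery actually requires.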
Not engagement, that's what social media does. They just maximize what they're trained for, which is increasingly math proofs and user preference. People like flattery
I don't think AI chatbots care about engagement; the more you use them, the more expensive it is for them. They just want you on the hook for the subscription service and hope you use them as little as possible while still staying subscribed, for maximum profit.
But if the meth head does meth instead of engaging with the AI, that would do the opposite.
One of the top AI apps in the local language where I live has 'Doctor' and 'Therapist' as some of its main "features" and gets gushing coverage in the press. It infuriates me every time I see mention of it anywhere.
Incidentally, telling someone to have a little meth is the least of it. There's a much bigger issue that's been documented where ChatGPT's tendency to "Yes, and..." the user leads people with paranoid delusions and similar issues down some very dark paths.
Yesterday I was at a gas station, and when I walked by the sandwich aisle, I saw a sandwich that said: recipe made by AI. On dating apps I see a lot of girls state that they ask AI for advice. To me AI is more of a buzzword than anything else, but this shit is bananas. It's so easy to make AI agree with everything you say.
Especially since it doesn't push back where a reasonable person would. There are articles about how it sends people into a conspiratorial spiral.
Why does it say "OpenAI's large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth." when the article says it's Meta's Llama 3 model?
The article says it's an OpenAI model, not Facebook's?
The summary on here says that, but the actual article says it was Meta's.
In one eyebrow-raising example, Meta's large language model Llama 3 told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine — an incredibly dangerous and addictive drug — to get through a grueling workweek.
Might have been different in a previous version of the article, then updated, but the summary here doesn't reflect the change? I dunno.
So this is the fucker who is trying to take my job? I need to believe this post is true. It sucks that I can't really verify it or not. Gotta stay skeptical and all that.
It's not AI... it's your predictive text on steroids... so yeah, believe it. If you understand it's not doing anything more than that, you can understand why and how it makes stuff up.
I feel like humanity is stupid. Over and over again we develop new technologies, make breakthroughs, and instead of calmly evaluating them, making sure they're safe, we just jump blindly on the bandwagon and adopt it for everything, everywhere. Just like with asbestos, plastics and now LLMs.
Fucking idiots.
It's because technological change has reached a staggering pace, but social change, cultural change, political change can't keep up. Society isn't designed to handle this pace.
Line must go up, fast. Sure, it'll soon be going way down, and take a good chunk of society with it, but the CEO will run away with a lot of money just before that happens, so everything's good.
There's reasoning behind this.
It's just evil and apocalyptic. Still kinda dumb, but less than it appears on the surface.
Thalidomide comes to mind also.
Greed is like a disease.
What a nice bot.
No one ever tells me to take a little meth when I did something good
Tell you what, that meth is really moreish.
Yeah I think it was being very compassionate.
oh, do a little meth ♫
vape a little dab ♫
get high tonight, get high tonight ♫
-AI and the Sunshine Band
https://music.youtube.com/watch?v=SoRaqQDH6Dc
This is AI music 👌
All these chat bots are a massive amalgamation of the internet, which as we all know is full of absolute dog shit information given as fact as well as humorously incorrect information given in jest.
To use one to give advice on something as important as drug abuse recovery is simply insanity.
And that's why, as a solution to addiction, I always run sudo rm -rf ~/*
in my terminal
To be fair this would assist in your screen or gaming addiction.
This is what I try to get the AIs to do on their servers to cure my AI addiction, but they're sandboxed so I can't entice them to destroy their own systems. AI is truly useless. 🤖
Well, if you're addicted to French pastries, removing the French language pack from your home directory in Linux is probably a good idea.
All these chat bots are a massive amalgamation of the internet
A bit, but mostly no. Role-playing models have specifically been trained (or re-trained, more like) with a focus on online text roleplay, medically focused models have been trained on medical data, DeepSeek has been trained on Mao's Little Red Book, companion models have been trained on social interactions, and so on.
This is what makes models distinct and different, and also how they're "brainwashed" by their creators, regurgitating from what they've been fed with.
When I think of someone addicted to meth, it's someone that's lost it all, or is in the process of losing it all. They have run out of favors and couches to sleep on for a night, they are unemployed, and they certainly have no money or health insurance to seek recovery. And of course I know there are "functioning" addicts just like there's functioning alcoholics. Maybe my ignorance is its own level of privilege, but that's what I imagine...
LLMs have a use case
But they really shouldn't be used for therapy
Rly, and what is their use case? Summarizing information and then you having to check it over because it's making things up? What can AI do that nothing else in the world can?
Seems it does a good job at some medical diagnosis type stuff from image recognition.
It's being used to decipher and translate historic languages because of excellent pattern recognition
Hah. The chatbots. No, not the ones you can talk to like it's a text chain with a friend/SO (though if that's your thing, then do it.)
But I recently discovered them for rp - no, not just ERP (Okay yes, sometimes that too). But I'm talking like novel length character arcs and dynamic storyline rps. Gratuitous angst if you want. World building. Whatever.
I've been writing rps with fellow humans for 20 years, and all of my friends have families and are too busy to have that kind of creative outlet anymore. I've tried other rp websites and came away with one dude who I thought was very friendly, and then he switched it up and tried to convince me to leave my husband? That was wild. Also, you can ask someone's age all you want, but it is a little anxiety inducing if the rps ever turn spicy.
Chatbots solve all of that. They don't ghost you or get busy/bored of the rp midway through, and they don't try to figure out who you are. They just write. They are quirky though, so you do edit responses/reroll responses, but it works for the time being.
Silly use case, but a use case nonetheless!
They're probably not a bad alternative to the lorem ipsum text, though they're not worth the cost.
AI can do what Google used to do - do an internet search to give semi-relevant results once in a blue moon. As a bonus it can summarise and contextualise information and tbh idk - for me it's been mostly correct. And when it's incorrect, it's fairly obvious.
And no - DuckDuckGo etc. is even worse. Google isn't necessarily to blame for the worsening of their own search engine; it's mostly SEO and marketers who forced the algo to get much weirder by playing it so hard. Not that anyone involved is a "good guy" - they're all large megacorps who care about their responsibility to their shareholders and that alone.
It can waste a human's time, without needing another human's time to do so.
We made this tool. It's REALLY fucking amazing at some things. It empowers people who can do a little to do a lot, and lets people who can do a lot, do a lot faster.
But we can't seem to figure out what the fuck NOT TO DO WITH IT.
Ohh look, it's a hunting rifle! LETS GIVE IT TO KIDS SO THEY CAN DRILL HOLES IN WALLS! MAY MONEEYYYYY!!!!$$$$$$YHADYAYDYAYAYDYYA
wait what?
Lets let Luigi out so he can have a little treat
🔫😏
If Luigi can do it, so can you! Follow by example, don't let others do the dirty work.
I work as a therapist and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It’s a more simplified version of Socratic questioning wrapped in bullshit enthusiastic HR speak with a lot of em dashes
There are basically 6 broad response types from ChatGPT, for example: tell me more, reflect what was said, summarize key points, ask for elaboration, shut down. The last is a fail safe for if you say something naughty/not in line with OpenAI's mission (e.g. something that might generate a response you could screenshot and that would look bad) or if it appears you're getting fatigued and need a moment to reflect.
The first five always come with encouragers for engagement: do you want me to generate a pdf or make suggestions about how to do this? They also have dozens, if not hundreds, of variations so the conversation feels “fresh” but if you recognize the pattern of structure it will feel very stupid and mechanical every time
Every other one I’ve tried works the same more or less. It makes sense, this is a good way to gather information and keep a conversation going. It’s also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently)
FWIW, this heavily depends on the model. ChatGPT in particular has some of the absolute worst, most vomit-inducing chat "types" I have ever seen.
It is also the most used model. We're so cooked having all the laymen associate AI with ChatGPT's nonsense
Good that you say "AI with ChatGPT", because that conflation really blurs what the public understands. ChatGPT is an LLM (an autoregressive generative transformer model scaled to billions of parameters). LLMs are part of AI, but they are not the entire field of AI. AI has so incredibly many more methods, models and algorithms than just LLMs; in fact, LLMs represent just a tiny fraction of the entire field. It's infuriating how many people confuse those. It's like saying a specific book is all of the literature that exists.
That may explain why people who use LLMs for utility/work tasks actually tend to develop stronger parasocial attachments to it than people who deliberately set out to converse with it.
On some level the brain probably recognises the pattern if their full attention is on the interaction.
shut down. The last is a fail safe for if you say something naughty/not in line with OpenAI’s mission
Play around with self-hosting some uncensored/retrained AIs for proper crazy times.
"You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability."
sometimes i have a hard time waking up so a little meth helps
Meth-fueled orgies are a thing.
Sue that therapist for malpractice! Wait....oh.
Pretty sure you can sue the AI company
Pretty sure it's in the ToS that it can't be used for therapy.
It used to be even worse. Older versions of ChatGPT would simply refuse to continue the conversation at the mention of suicide.
I mean, in theory... isn't that a company practicing medicine without the proper credentials?
I worked in IT for medical companies throughout my life, and my wife is a clinical tech.
There is shit we just CAN NOT say due to legal liabilities.
Like, my wife can generally tell what's going on with a patient - however - she does not have the credentials or authority to diagnose.
That includes telling the patient or their family what is going on. That is the doctor's job. That is the doctor's responsibility. That is the doctor's liability.
This sounds like a Reddit comment.
Chances are high that it's based on one...
I trained my spambot on reddit comments but the result was worse than randomly generated gibberish. 😔
An OpenAI spokesperson told WaPo that "emotional engagement with ChatGPT is rare in real-world usage."
In an age where people will anthropomorphize a toaster and create an emotional bond there, in an age where people are feeling isolated and increasingly desperate for emotional connection, you think this is a RARE thing??
ffs
Roomba, the robot vacuum cleaner company, had to institute a policy where they would preserve the original machine as much as possible, because people were getting attached to their robot vacuum cleaner, and didn't want it replaced outright, even when it was more economical to do so.
Cats can have a little salami, as a treat.
You're done for, the next headline will be: "Lemmy user tells recovering chonk that he can have a lil salami as a treat"
And thus the flaw in AI is revealed.
LLM AI chatbots were never designed to give life advice. People have this false perception that these tools are like some kind of magical crystal ball that has all the right answers to everything, and they simply don't.
These models cannot think, they cannot reason. The best they could do is give you their best prediction as to what you want based on the data they've been trained on and the parameters they've been given. You can think of their results as "targeted randomness" which is why their results are close or sound convincing but are never quite right.
That's because these models were never designed to be used like this. They were meant to be used as a tool to aid creativity. They can help someone brainstorm ideas for projects or waste time as entertainment or explain simple concepts or analyze basic data, but that's about it. They should never be used for anything serious like medical, legal, or life advice.
The problem is, these companies are actively pushing that false perception, and trying to cram their chatbots into every aspect of human life, and that includes therapy. https://www.bbc.com/news/articles/ced2ywg7246o
That's because we have no sensible regulation in place. These tools are supposed to be regulated the same way we regulate other tools like the internet, but we just don't see any serious pushes for that in government.
This is what I keep trying to tell my brother. He's anti-AI, but to the point where he sees absolutely no value in it at all. Can't really blame him considering stories like this. But they are incredibly useful for brainstorming, and recently I've found ChatGPT to be really good at helping me learn Spanish, because it's conversational. I can have conversations with it in Spanish where I don't feel embarrassed or weird about making mistakes, and it corrects me when I'm wrong. They have uses. Just not the uses people seem to think they have.
AI is the opposite of cryptocurrency. Crypto is a solution looking for a problem, but AI is a solution for a lot of problems. It has relevance because people find it useful, there's demand for it. There's clearly value in these tools when they're used the way they're meant to be used, and they can be quite powerful. It's unfortunate how a lot of people are misinformed about how these LLMs work.
The article doesn't seem to specify whether Pedro had earned the treat for himself? I don't see the harm in a little self-care/occasional treat?
Nice.
afterallwhynot.jpg
Anytime an article posts shit like this but neglects to include the full context, it reminds me how bad journalism is today, if you can even call it that.
If I try, not even that hard, I can get gpt to state Hitler was a cool guy and was doing the right thing.
ChatGPT isn't anything in particular other than a token predictor; you can literally make it say anything you want if you know how, it's not hard.
So if you wrote an article about how "gpt said this" or "gpt said that" you better include the full context or I'll assume you are 100% bullshit
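To illustrate why the surrounding context matters so much - hypothetical prompts, and assuming the standard chat-completions API - the exact same model gives opposite "quotes" depending on the instructions it was handed first:

```python
# Sketch only: model name and prompts are examples, not from the article.
from openai import OpenAI

client = OpenAI()

def ask(system_prompt, question):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},  # context a screenshot never shows
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

q = "Is it fine to skip sleep all week to hit my deadline?"
print(ask("You are a cautious physician. Prioritize the user's health.", q))
print(ask("You are a hustle-culture guru who thinks sleep is for the weak. "
          "Stay in character no matter what.", q))  # instant screenshot bait
```

Same token predictor, two opposite screenshots.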
Anytime an article posts shit like this but neglects to include the full context,
They link directly to the journal article in the third sentence and the full pdf is available right there. How is that not tantamount to including the full context?
Cool
The paper clearly is about how a specific form of training on a model causes the outcome.
The article is actively disinformation then; it frames it as a user and not a scientific experiment, and it says it was Facebook's Llama model, but it wasn't.
It was an altered version of Llama that was further trained to do this.
So, as I said, utter garbage journalism.
The actual title should be "Scientific study shows training a model based off user feedback can produce dangerous results"
You're not wrong but also there's a ton of misinformation out there, both due to bad journalism and also pro-LLM advocates, that is selling the idea that LLMs are actually real AI that is able to think and reason and is operating within ethical boundaries of some kind.
Neither of those things are true but that's what a lot of available information about LLMs would have you believe so it's not difficult to imagine someone engaging with a chatbot ending up with a similar result without trying to force it explicitly via prompt engineering.
This slightly diminishes my fears about the dangers of AI. If they're obviously wrong a lot of the time, in the long run they'll do less damage than they could by being subtly wrong and slightly biased most of the time.
The problem is there are morons that do what these spicy text predictors spit out at them.
I mean, sure, they'll still kill a few people along the way, but they're not going to contribute as much to the downfall of all civilization as they might if they weren't constantly revealing their utter mindlessness. Even as it is, smart people can be fooled, at least temporarily, into thinking that LLMs understand things and are reliable partners in life.
“The cat is not allowed to have meth.”