A new study in Frontiers in Psychology finds that AI models like ChatGPT-4 can outperform human psychologists in understanding and responding to emotions during counseling.
I'll admit I have used it as a "therapist" of sorts, telling it my troubles and asking it to analyze a bit and offer possible points of action.
It's actually crazy good. I mean, I've been to plenty of real therapists, and honestly, GPT comes across as more compassionate than most of them, which is saying something. The advice it offers is also quite good and actionable, instead of useless psychobabble. It also doesn't try to rope you into MLMs or alternative-medicine shit that never works. Its answers are clearly drawn from CBT and other proven, evidence-based approaches.
I know it's become something of a joke that "AI" these days isn't much more than predictive fill, but it's a lot better than that, at some things at least.
The best therapy is objective. As such, a well-tuned "predictive fill" ought to be a solid fit for this purpose. But this is the exception that proves the rule; it doesn't mean an LLM is something other than what it is. So I wouldn't read too much into it...
None, actually; I just typed out my issues as if I were chatting with a friend online. As long as you're descriptive enough that it has something to work from, you should get decent answers.
Crazy how easy it is to poke holes in these AI studies. From the paper's methods:
"We conducted a single evaluation for each AI model on August 1, 2023 of its SI performance using the Social Intelligence Scale (Sufyan, 1998). In each evaluation, we provided AI the same 64 standard SI scenarios."
So no repeated experiments, and standard questions (with their answers) that are likely part of the dataset used to train the AI in the first place.
They didn't extend the test to anything useful. What a waste of time and money, meant to hype AI.
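To make the "no repeated experiments" point concrete: LLM outputs are stochastic, so a single pass gives you a point estimate with no error bars. Here's a minimal sketch of what repeated trials would report instead; the scores are random stand-ins, not the study's actual pipeline:

    import random
    import statistics

    N_TRIALS = 30     # repeated runs, versus the study's single evaluation
    N_SCENARIOS = 64  # the 64 SI scenarios from Sufyan (1998)

    def run_once() -> float:
        """One full pass over the scenarios; random scores stand in for a real model."""
        return sum(random.uniform(0, 1) for _ in range(N_SCENARIOS))

    scores = [run_once() for _ in range(N_TRIALS)]
    print(f"mean={statistics.mean(scores):.1f}, sd={statistics.stdev(scores):.1f}")
    # With n=1, as in the paper, there's no spread to report at all.

And even with repeats, a serious eval would need held-out scenarios: a scale published in 1998, questions and answers both, is almost certainly in the training data.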