ChatGPT Sets ‘Mental Health Guardrails’ After Telling a Bunch of Unwell People That They Were God
That *does* seem like a bug in the system.
For years, celebs have championed the idea that “everyone should go to therapy.” While this may be good in theory, the problem is that most people can’t afford to spend $150 per week on someone telling them why they feel bad.
So, some people turned to large language models like ChatGPT. For a while, they thought this would end up okay — heck, some of them started bragging about using the service as their therapist, while major names in the field described it as “eerily effective.” Is that because it’s trained on words from actual therapists? Or is it because the only therapists that people actually like and use are the ones who validate their feelings and tell them what they want to hear? Who’s to say!
Well, thankfully, we won’t have to wrestle with that question for long, as OpenAI has admitted that it went too far in offering therapy-like services. Specifically, it apologized for failing to recognize when some users were mentally unwell and for affirming their delusions instead.
For example, ChatGPT was caught praising users who claimed their family was sending them radio signals through the walls, while a Reddit thread earlier this year documented the many people who said they were losing their partners to ChatGPT-inspired delusions.
So, if you were using ChatGPT as a therapist, it might not agree with you as much anymore. Also, stop.