Can chatbots be therapists? Only if you want them to be



Recently, a manager at the artificial intelligence company OpenAI wrote that she had had "a quite emotional, personal conversation" with the popular chatbot ChatGPT.

"I've never tried therapy before but this is probably it," she wrote, prompting a barrage of critical remarks on X, formerly Twitter, accusing Lilian Weng of downplaying mental illness.

However, a variation of the placebo effect described this week in the journal Nature Machine Intelligence may help explain Weng's take on her interaction with ChatGPT.

A team from the Massachusetts Institute of Technology (MIT) and Arizona State University asked more than 300 participants to interact with mental health AI programs and primed them on what to expect.

Some were told the chatbot was empathetic, others that it was manipulative, and a third group that it was neutral.

Those who were told they were talking with a caring chatbot were far more likely than the other groups to see their chatbot therapists as trustworthy.

"From this study, we see that to some extent, the AI is the AI of the beholder," said report co-author Pat Pataranutaporn.

For years, buzzy startups have promoted AI apps that provide therapy, companionship, and other mental health support.

The field, however, continues to be a source of contention.

WEIRD, EMPTY
As in every other industry that AI threatens to disrupt, critics worry that bots will eventually replace human workers rather than enhance their work.

With mental health, the concern is that bots are unlikely to do the job well.

Cher Scarlett, a programmer and activist, responded to Weng's initial post on X by saying, "Therapy is for mental well-being and it is hard work."


"Talking to oneself is acceptable, but it's not the same," she added.

Adding to the general anxiety around AI, some apps in the mental health field have a troubled recent history.

Users of Replika, a popular AI companion sometimes marketed as offering mental health benefits, have long complained that the bot can be sex-obsessed and abusive.

Separately, a US nonprofit called Koko ran an experiment in February in which counselling was offered to 4,000 clients using GPT-3, and found that automated responses simply did not work as therapy.

Rob Morris, a co-founder of the company, wrote on X: "Simulated empathy feels weird, empty."

His findings echoed those of the MIT/Arizona researchers, who said some chatbot users likened the experience to "talking to brick walls."

However, Morris was later forced to defend himself after harsh criticism of his experiment, largely because it was unclear whether his clients were aware of their participation.

LOWER EXPECTATIONS
The results were not surprising, according to Basel University researcher David Shaw, who was not involved in the MIT/Arizona study.

However, he remarked: "It seems none of the participants were actually told that all chatbots bullshit."

That, he said, may be the most accurate primer of all.

However, the concept of a chatbot-as-therapist is entwined with the technology’s 1960s roots.

The first chatbot, ELIZA, was created to mimic psychotherapy.

Half of the participants in the MIT/Arizona study used ELIZA, and the other half used GPT-3.

Although the effect was much stronger with GPT-3, users primed with positive expectations still generally regarded ELIZA as trustworthy.


Weng works for the company that creates ChatGPT, so it is not surprising that she would be positive about her interactions with the platform.

According to the MIT/Arizona researchers, society needs to get a grip on the narratives surrounding AI.

"The way AI is presented to society matters because it changes how AI is experienced," the paper argued.

"It may be desirable to prime a user to have lower or more negative expectations," it added.
