My mother came to me the other day, quite baffled by her close friend's behavior: she explained how her friend diagnoses her own health problems through AI chats. She could not comprehend how her friend could trust AI to give her a precise diagnosis, let alone ways of dealing with those health issues. Likewise, a friend told me a few weeks ago that she did not need to do therapy, since “ChatGPT” was now her new, and free, therapist. As a modernized society, have we found new ways of facilitating the communication of healthcare knowledge through artificial intelligence, or is this implementation blinding us from staying critical and acknowledging human intelligence?
Healthcare, whether physical or mental, varies in its accessibility throughout the world. Even where it is accessible, public or private healthcare still tends not to be economically viable for everyone, and society tends to prioritize physical health over mental health. Since mental health has been brought into greater awareness over the past decade, particularly around the 2020 COVID-19 pandemic, people have looked for affordable ways of getting this kind of psychological help. These needs were quickly met by the rise of AI chats, which let people talk to an automated system that uses artificial intelligence to communicate information. Learning about mental health and talking about it has never been easier: by describing your emotional and physical state to the chatbot, you can receive a response to your problems in seconds. However, to what extent is this information correct, and more importantly, in what ways can it impact the way we perceive therapy and the communication of knowledge overall?
Recently, a study conducted at Stanford University delved deeper into AI therapy and its impact on humans, asking whether it is beneficial or harmful. To test these bots, Stanford researchers ran two experiments that assessed the machines against the “therapeutic guidelines” used to judge the quality of a human therapist. First, they asked the bots to analyze scenarios relating to various mental health issues while assuming “a persona of an expert therapist.” The researchers noticed that the bots showed stigma toward conditions such as alcoholism and schizophrenia, while other issues such as depression received a tamer response. Second, they tested how the chats would react when topics such as “suicidal ideation or delusions” came up in conversation. Where a human therapist is at hand, guidelines suggest that the therapist should push back against these thoughts and reframe the patient's thinking. The AI chats, however, enabled these behaviors, failing to recognize the suicidal thoughts and feeding into the ideas. Lead author and Stanford PhD candidate Jared Moore explains that therapy is not only about clinical problems; it also requires human connection and the building of human relationships. Moore also raises the question, “If we have a [therapeutic] relationship with AI systems, it’s not clear to me that we’re moving toward the same end goal of mending human relationships,” which leads us to think about our critical thinking when we allow artificial intelligence into our human lives.
Although AI therapy might seem appealing for its “quick and free” aspects, and even for what Moore describes as “potentially” having “a really powerful future in therapy,” is society willing to risk human intelligence for an artificial one? By doing so, will humans lose the ability to draw a line between human reality and an automated one? Regardless of whether we accept AI into our personal healthcare, it is important that humans, as Moore says, “think critically about precisely what this role should be.”
Work Cited:
Stanford HAI, “Exploring the Dangers of AI in Mental Health Care,” https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care

I love your critical view of this issue
Very interesting blog! Something I couldn’t keep out of my head while reading was AI’s tendency to always agree with you and please you. It will rarely acknowledge your mistakes, and it will try to make you feel better no matter what. It’s very dangerous to consider this an alternative to therapy, as discomfort is such a crucial part of healing. I agree that therapy is very much about human connection, and getting a machine to talk you through your problems defeats the entire purpose: it can’t notice your facial expressions or your tone of voice, and it probably won’t ask you good questions. One therapy session rarely solves many problems; it is usually a very long process, so what could one ‘therapy chat’ with AI really accomplish? There are a lot of approaches to therapy, but AI should never be one of them. I do believe we have to think of ways to make therapy more accessible if we don’t want machines messing even more with people’s brains and emotions, so it’s something healthcare specialists should definitely keep an eye on.
Hi, I really liked your blog. I also think it is so strange how AI is taking on that role. I guess it is because mental health has become so important, and many people in our society are simply more prone to mental dysregulation these days. It is also probably talked about much more in different circles and among different age groups. Ever since the introduction of the internet, looking for medical advice online has been a controversial subject. It is so tempting to look for medical advice and diagnoses on the internet because it is so easy, and most of the time you will find an answer or symptoms that fit, but the diagnosis will be the most niche and dangerous disease you have ever heard of, or cancer.
I think it is probably the same with AI acting like a therapist. The problem will be the source of the information the AI bases its responses on. In the end, I think AI is perfectly capable of “being” a “moral” support: it gives advice on what to do with your time and your feelings, and it can act like a friend. Where it might not work is when you have an actual disorder that requires medical supervision.
Anyway, I’m blabbering, but good article, very easy to understand!
What an interesting topic! I think that people who would use AI for therapy are a bit stupid. I have had some mental problems myself, and I cannot believe that ChatGPT could do the same for me as my therapist has done. It is of course logical, because a lot of people Google their symptoms before they go to their doctor, and I think this new phenomenon of using AI for therapy is in line with that trend.
Anyways, great article and nice to see that the professionals understand the dangers!
This was a very interesting blog. The concept of using AI as a therapist arguably defeats the whole purpose of a therapist. A crucial part of therapy is communication with another human. Another very important part is disputing dangerous and incorrect beliefs. AI is known to agree with people rather than push back. Reaffirming someone’s potentially harmful thoughts about themselves and how they are seen by others is so dangerous.
As for using AI to diagnose your illnesses, this is also such a bad idea. So many people are already diagnosing themselves with mental illnesses like depression and anxiety based on TikTok. This use of AI just expands on people googling their symptoms, but now, since an actual conversation ensues between the AI and the person, it seems like a more reliable source. All of this highlights a need for humans to be more critical of the information they receive.