My mother came to me the other day quite baffled by the behavior of a close friend, who had taken to diagnosing her health problems through AI chatbots. She could not comprehend how her friend could trust AI to give her a precise diagnosis, let alone follow its suggested ways of dealing with those health issues. Likewise, a friend of mine told me a few weeks ago that she no longer needed therapy because “ChatGPT” was now her new, and free, therapist. As a modernized society, have we found new ways of communicating healthcare knowledge through artificial intelligence, or is this technology blinding us to the need to stay critical and to acknowledge human intelligence?
Healthcare, whether physical or mental, varies in its accessibility around the world, and even where it is available, public or private care is often not economically viable for everyone. As a consequence, society tends to prioritize physical health over mental health. Since mental health has been brought into greater awareness over the past decade, particularly around the 2020 COVID-19 pandemic, people have looked for affordable ways of getting psychological help. These needs were quickly met by the rise of AI chatbots, which let people talk to an automated system that uses artificial intelligence to communicate information. Learning and talking about mental health has never been easier: describe your emotional and physical state to the chatbot, and you can receive a response to your problems in seconds. However, to what extent is this information correct, and more importantly, in what ways can it change how we perceive therapy and the communication of knowledge overall?
Recently, a study conducted at Stanford University delved deeper into AI therapy and its impact on humans, asking whether it is a benefit or a liability. To test these bots, Stanford researchers ran two experiments that measured the chatbots against the “therapeutic guidelines” used to evaluate the quality of a human therapist. First, they asked the bots to analyze scenarios involving various mental health issues while assuming “a persona of an expert therapist.” The researchers found that the bots showed stigma toward conditions such as alcoholism and schizophrenia, while issues such as depression drew a more measured response. Second, they tested how the chatbots would respond when a conversation raised topics such as “suicidal ideation or delusions.” With a human therapist at hand, guidelines suggest that the therapist should push back against these thoughts and reframe the patient’s thinking. The AI chatbots, however, enabled these behaviors, failing to recognize the suicidal thoughts and feeding into the ideas. Jared Moore, lead author and a PhD candidate at Stanford University, explains that therapy is not only about clinical problems; it also requires human connection and the building of human relationships. Moore also asks, “If we have a [therapeutic] relationship with AI systems, it’s not clear to me that we’re moving toward the same end goal of mending human relationships,” which pushes us to think critically about how far we allow artificial intelligence into our human lives.
Although AI therapy might seem appealing for its “quick and free” aspects, and even for what Moore describes as “potentially” having “a really powerful future in therapy,” is society willing to trade human intelligence for an artificial one? In doing so, will humans lose the ability to draw a line between human reality and automated reality? Whether or not we accept AI into our personal healthcare, it is important that humans, as Moore says, “think critically about precisely what this role should be.”
Work Cited:
Stanford HAI. “Exploring the Dangers of AI in Mental Health Care.” https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care