Project Showcase: Can ChatGPT Help Your Mental Health?

Below is a paper I wrote last year for my thematic seminar. In my bachelor's programme, International Studies, you get to practice your research skills before writing your thesis. I chose the thematic seminar Post-Digital Society, as it covers one of my main academic interests. I remembered reading about ELIZA, and it fascinated me that, despite its simplicity, people confided in it. This gave me the idea of trying to recreate ELIZA with today's LLMs.

In the paper, I tried to create a custom prompt to help me deal with negative automatic thoughts. This is how it worked: ChatGPT would open the conversation by asking me what I wanted to talk about in the session. I'd usually talk to it when something was bothering me mentally and I wanted a more nuanced understanding of my reaction. For example, I'd tell it about not feeling satisfied after completing the work I had set out to do. I based the prompt on Rogerian psychotherapy, which centres on reflecting the person's feelings back to them. I knew that ChatGPT didn't truly know what I was feeling, but it could infer from my word choice that I was frustrated, disappointed, or annoyed. And that was not all it did: based on what I said, it also asked me a question that would guide me towards a more nuanced thought.
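The setup described above can be sketched in code. This is a minimal illustration of how such a Rogerian-style prompt might be structured in the standard chat-message format; the system prompt text and the `build_session_messages` helper below are my own approximations for illustration, not the actual prompt from the paper.

```python
# Illustrative sketch only: the system prompt wording is an assumption,
# not the paper's actual prompt.
SYSTEM_PROMPT = (
    "You are a Rogerian listener. Open each session by asking what the "
    "person would like to talk about. After every message, reflect the "
    "feeling you infer from their word choice (e.g. frustration, "
    "disappointment, annoyance) back to them in your own words, then ask "
    "one open question that nudges them towards a more nuanced view of "
    "their reaction. Never give advice or diagnoses."
)

def build_session_messages(user_message: str) -> list[dict]:
    """Assemble the message list for one turn of the session."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

messages = build_session_messages(
    "I finished the work I set out to do today, but I still feel unsatisfied."
)
# This list could then be passed to a chat-completion API call.
```

The design choice here mirrors the paper's two-part behaviour: the system prompt instructs the model both to reflect the inferred feeling and to follow up with a single guiding question.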

There were many limitations, but I must admit it did help me for a time. However, this is no longer the case. One important reason why ELIZA worked where other attempts didn't was people's attitude towards the chatbot. I realised that my own attitude towards ChatGPT had a real impact on whether or not the system functioned. I remember receiving mixed feedback: people were weirded out that I would talk about these things with AI. This showed me that the way I perceived it at the time was fundamentally different from how other people did.

I was one of the first people in my social bubble to use ChatGPT. I tried it just a few days after it was released and quickly adapted the technology to my advantage. I was amazed that I could talk to a computer, and I felt that others didn't share my enthusiasm. Recently, however, with AI becoming more and more controversial, the way it's developing concerns me. I've developed a more skeptical view of LLMs, similar to the attitude others had towards them. Once this new attitude set in, I found it difficult to keep thinking of LLMs as giving intelligent answers. My experience now is very different: I seem to focus on their stupidity rather than their intelligence. So when I use my prompt and talk to it now, it just doesn't give me the experience it used to. Even though my attitude was shifting back then as well, I think it has finally stabilised now, but only time will tell.