Exploring the Limitations of ChatGPT: A Qualitative Assessment

Social media, especially Twitter and LinkedIn, is filled with people trying out all sorts of crazy stuff on ChatGPT. Like every respectable computer science student, I have also done my own highly scientific testing of ChatGPT… just kidding… but I have tried it out a lot, and here are my conclusions.

Sanna Persson
4 min read · Apr 9, 2023

In my conversations with ChatGPT, I have of course played around with it as a writing assistant, grammar coach, and Google substitute. Apart from this, I have also challenged it by giving it hard problems, sharing personal dilemmas, or asking for specific advice. In these situations, I feel there is still much development needed for it to be useful.

Limitation #1: Solving difficult problems and mathematical derivations

Being a mathematics and computer science student, I come across problems requiring complex derivations or in-depth problem-solving skills, and this is a limitation of ChatGPT. When given a hard algorithmic problem and asked to describe an algorithm and prove its correctness, it writes plausible-sounding text that is completely detached from sound logic. If I instead want it to write out the derivations for a relatively simple university-level problem, it fails as soon as I give it too much freedom to define the variables and the problem setting. Overall, it is clear that ChatGPT can be useful for simple, clearly defined problems, but for problems that require interpretation, or whose solutions are not easily found on Google, failure modes appear.

Limitation #2: Giving emotional support

I believe that AI models could be great at helping people going through difficult periods in their lives, because they are always available and can alternate between acting as a friend and acting as a psychologist. In this way, there is a real possibility to mitigate loneliness and take away the stigma of seeing a therapist. As of now, there is still much to wish for in this area from ChatGPT. I have played around with different prompts to make it act like a fellow human being who can truly empathize with you, but I feel it is limited to providing general advice and strategies. In the image below, I prompt it to act as a human psychologist, and initially it tries to empathize with me. It is, however, clear that its perspective is one of explaining and generalizing rather than listening and giving emotional support.

To me, this seems like it may be a design choice in the alignment training, because forums contain data with plenty of examples of how empathy and emotional support can be conveyed.

Limitation #3: Applying strategies to specific problems

In the conversation below, I explain a specific problem and ask for advice. I then specify that I would like to know how Tony Robbins would act in the situation.

I receive a general description of his work and then prompt it to apply that work to my specific problem. The limitation I have experienced multiple times is that the model seems to find it hard to understand how a conceptual strategy can be applied to a real-world problem.

To me, this ties into the previous point. There seems to be a lack of understanding of how a user’s emotional state and situation may keep them from taking certain actions. Further, it is not clear that the model understands how to give useful and actionable advice. My conclusion: if you want facts or general, concrete points, ChatGPT works great, but if you want to receive understanding or support, don’t expect much.

I was motivated to write this because the main limitations I read about relate to ChatGPT constructing false or biased claims or advice. It seems that the problem with authenticity is much broader, and there are challenges in many areas.

Recently, I heard a quote: “What ChatGPT does is play language games.” And if this is true, then it still has not mastered the imitation game.
