“Stay critical!” When I stared at the bold, eye-catching words, my mind involuntarily drifted back to my experiences interacting with GPT while writing my essays.
The Beginning of a Shortcut
When I first started using GPT for my essay, I approached it with a sense of reckless optimism. Faced with an assignment spanning thousands of words, I entered the topic and requirements into it, hoping it would function like an essay generator and produce a finished, submit-ready paper. Sure enough, it quickly delivered a well-structured, fluently written article. At that time, I almost felt a sense of “liberation”.
But as I calmed down and read each paragraph carefully, I discovered numerous problems. Its arguments were hollow and generic, and its answers appeared logical but simply repeated themselves from beginning to end. I suddenly realized that submitting it would mean handing in work that was all form and no substance. GPT provided me with information, but it could not conduct a genuine dialogue with the question on my behalf.
From Input to Interaction
I changed my approach. Instead of expecting it to generate a complete paper in one go, I acted as a mentor, guiding it step by step. For example, I would have the AI generate a rough, broad outline, then ask it to revise each section through targeted questions. This method is like collaborating with a partner unfamiliar with the subject matter:
- I would introduce the topic;
- Explain the research scope;
- Ask it to propose several arguments;
- Then continuously probe and refine.
The process was tedious, sometimes requiring dozens of back-and-forths, but the final outcome far surpassed direct generation. And the knowledge leaves impressions in my mind rather than flowing through like water. GPT is no longer merely a tool for writing papers, but a partner who discusses problems with me and helps clarify my thoughts.
I have come to see interacting with GPT as a form of cross-cultural communication. It operates from a completely different world of knowledge. If my expressions are vague, it misunderstands; if I provide context and logic, it delivers relatively appropriate responses. More often, however, both of us are beginners, gradually deepening our understanding of specific topics through interaction. Patience and clear expression have become the keys to our collaboration.
Critical Thinking Beyond Essay Writing
As I use it more, I increasingly realize: GPT is not a “truth engine”. Its answers do not equate to “correctness”; they are merely assembled and generated from existing data. It may help me discover new angles, but the final judgment still rests with me. This reminds me of the admonition: Stay critical! Whether using search engines or GPT, we must never conflate “getting an answer” with “finding the truth”. Information is readily available, but discerning truth requires critical thinking.
In my writing process, I cultivate several habits:
- Cross-verify: I cross-check references provided by GPT against library databases or Google Scholar.
- Rewrite and restructure: I often rewrite paragraphs it generates, reorganizing the logic in my own words.
- Maintain skepticism: when its arguments seem flawless, I deliberately ask: “Are there opposing viewpoints?”
These habits not only give me greater control over the writing process but also remind me: no matter how powerful AI tools become, they cannot replace human independent thinking.

In thinking about your post, I asked ChatGPT whether it thinks its communication with human users is cross-cultural. It said it could be considered as such, yet pointed out that any culture it expresses “is derivative and algorithmic” and that it doesn’t have an identity or any positionality. This echoes your article quite closely. The cross-cultural element of communication that you refer to is the convergence of your inputs with the AI’s algorithm. Essentially, we are learning about each other much as people do in cross-cultural environments. However, according to ChatGPT it does not have an embodied identity. To some extent this is true, yet we still see the biases and perspectives that ChatGPT internalises and prefers come out in its answers. In this way it does have its own normative judgement system. Note that this is truer for the form, and not always for the substance of the learning module. Ultimately, we must learn how to control AI for our needs, and I hope we can do that in a way that helps our world!