Most people nowadays use AI, whether to ask questions, generate images, or simply for company. People use ChatGPT knowing it's AI, and even a typical Google search is now influenced by AI-generated answers. It's clear: AI is slowly taking over our society. Some people are happy with that; others find it frightening. But the fact remains: many people use it, sometimes even daily.
As every student knows, it is generally not allowed to use ChatGPT or other AI for full essays or assignments, as it undermines the whole point of studying a subject: learning to think academically and scientifically. However, as many professors must also be aware, ChatGPT (and similar tools) is still used often by students. Whether it's to answer questions quickly, to create a mock test, or in some cases even to write full essays, its use can be difficult to detect, and it can feel like an easy (and lazy) way to get your diploma.
But what will you do with that diploma if everything you seem to know comes from AI, something anyone has access to? How will your skills be useful in the world, if they’re not your skills at all? Why would you pay thousands of dollars to a university to study a subject, if you end up not actually learning it?

My friends and I admit that we often use ChatGPT: not to write full essays or to answer all quizzes, but to help us study, for example by creating mock quizzes to prepare for exams. However, many of my friends have noticed something: professors holding up other students' work as examples, while the rest of the class knows (or can see) that the work was created by AI. This feels extremely unfair. The solution? Either get better, or use AI yourself. If AI-generated work becomes the standard, how will honest students measure up? How will they be graded fairly if their honest work is compared against an unfair, AI-enabled standard?
ChatGPT is trained on existing research, texts and data; it does not "think for itself." That means that even if plagiarism checks have no definitive way to catch AI use, letting AI write your essays can still be considered plagiarism.
Recent research supports what my friends and I sense: AI use among students is extremely high. According to a 2025 report, about 83% of students said they use AI tools regularly, and more than half said they turn to them at least once a week, if not daily. Another survey found that roughly 90% of college students have used AI academically.
On the one hand, this is not necessarily a bad thing. Many students say these tools help them stay focused, save time, and feel more confident in their work. On the other hand, problems arise when students rely on AI too much: around 36% said their ability to think critically had declined, 34% said trust between students and educators had dropped, and 27% felt that grading fairness had suffered.
Here’s the thing: when universities ignore the reality of AI usage (pretending it doesn’t exist, or banning it outright without guidance), they let a gap grow between what students do and what students learn. If professors and institutions act as if AI is rare or used only for cheating, while in reality almost every student is using it in some way, then the rules, grading and teaching are misaligned with reality.
From my side, as someone who uses AI in the “help-me-study” mode (mock quizzes, checking ideas, etc.), I see both sides clearly. It can be helpful, especially when the workload is heavy and you just want to get your head around a subject efficiently. But I also see the danger: depending on AI too much means you might not develop the thinking you’re paying to develop. If AI generates summaries, you might skip the full reading. If AI suggests arguments, you might skip formulating your own.
When I look around at my friends, I see the unfairness: some students’ work obviously comes from AI and some doesn’t, but the grading doesn’t always reflect that difference. That’s a problem. If an invisible “AI standard” becomes the baseline, honest students may end up working harder for the same or a lower reward.

Another dimension: the world outside university already uses AI. My mum, for instance, works at a pharmaceutical company specialising in eye medicine. She told me that at her company they use AI to create weekly summaries of the latest publications: they tell the AI what makes a publication relevant (keywords, topics, etc.), and it reads and summarises them. They save time and energy. This shows that AI is not just a tool for students, but also a professional one. And if that’s how the real world works, then we should be prepared to use it well.
In the end, I don’t believe the issue is simply “AI good” or “AI bad.” The issue is how we use AI. If we rely on it completely, we lose the ability to think for ourselves. But if we use it as a tool, rather than a crutch, it can actually make studying more creative and efficient.

Universities shouldn’t pretend that most students don’t use AI. They also shouldn’t simply ban it. Instead, they need to teach students how to use it responsibly: how to integrate AI in their work without undermining their own voice and critical thinking. Institutions should build clear policies, guidance, training and resources. The statistics show the gap is real: high usage and low institutional support.
For me personally, I resolve to use AI, but consciously. I’ll use it to help generate questions, plan revision, and check ideas, but I’ll still do the real thinking, the reading, the writing. Because what I want is my skill, not AI’s skill. And as students, we should aim for just that: tools that enhance our thinking, not replace it.