By Connor Barclay
Editor’s Note: Connor Barclay is a student at Holy Family University in Philadelphia, Pennsylvania, where internships are part of the curriculum. He has been interning with The Write Advantage for the past four months. On his first day, I asked him to write a blog post about how AI affects his life as a student, without giving any more instructions. As I expected, Connor took on the challenge in his own way and wrote a thoughtful and surprising piece. I’m sharing it here and on my social media because I believe young people’s perspectives matter. I hope you enjoy reading Connor’s insights.
Artificial Intelligence (AI) is another in a long list of ‘advancements’ that often feel imposed rather than embraced. One recent survey found that 45% of people distrust AI’s involvement in creative content. Its use has become something of a societal taboo, particularly because the technology keeps encroaching on creative spaces rather than being pointed at more practical applications. I’ll spare you the rants on corporate enshittification and the ethics of AI usage. There are more nuanced views than mine on those fronts, and my concern regarding AI is more developmental than moral.
I’ve reluctantly come to accept that AI is inevitable. More and more institutions have been incorporating AI into their business models, and higher education is no exception (yes, it is a business). The trend is partly motivated by a desire to regulate student AI use, and partly by a desire to capitalize on the AI bubble before it bursts. Either way, the adoption has contributed to a shift in students’ abilities. As a student myself, I’ve watched many of my peers habitually rely on ChatGPT or Google Gemini (my school’s model of choice) to complete their assignments. The tools start out as helpers, but over time that reliance erodes problem-solving and critical-thinking skills, the way a muscle weakens from disuse. Admittedly, I’m no bastion of academic integrity in that regard, especially as professors increasingly embed AI into assignments. Nevertheless, there is one aspect of my studies I’ve found to be relatively untouched by the pervasiveness of AI: English lit.
More specifically, I’ve noticed that writing, good writing, is something AI still can’t quite get down. I think anyone who reads in any appreciable amount can spot low-effort AI-generated text on sight. In imitating the voices of their nearly endless training material, AI models have developed voices of their own: homogeneous and uninspired. I’m not just talking about the overuse of certain punctuation marks, although that’s undoubtedly a tell. To illustrate, here is a sentence generated by AI: ‘The essence of humanity’s intrinsic nature lies within the collective consciousness that transcends the temporal limitations of our individual experiences.’ It’s a bad writer’s idea of what good writing sounds like: a sentence that says a lot without actually saying anything.
I know this because, just like many of my peers, I am no stranger to the vicious cycle of weeks of procrastination followed by an eight-hour stretch of nonstop essay writing. And I, like many of my peers, have attempted to circumvent that eight-hour consequence of poor time management by making convenient use of the Google Gemini subscription our university generously provides. I gave it my topic, included some main points in the prompt, and sat back and watched as it spat out a full five-page paper on Shakespeare’s A Midsummer Night’s Dream. And it was atrocious.
On the surface, the paper was an eloquently written analysis of the play, but as I read deeper, it dawned on me that nothing of substance was actually being conveyed. However polished the writing appeared, it was hollow and ultimately unsatisfying. Thinking the model just needed more guidance, or maybe a voice to latch onto, I wrote out the introduction myself. As you might imagine, the attempt was in vain. I kept writing more and more of the essay, trying to prompt a better result, until, miraculously, I was done. I had written the entire thing myself, with Gemini as nothing more than a glorified grammar checker.
I’ve made strides in my writing ability since then, no longer trusting AI to get the job done, nor really wanting it to. The thought that I almost let an algorithm replace my authorial voice appalls me now. Yet that’s precisely what is happening in schools across the country, not just in higher education.
As a future English teacher, I’ve become aware of the overwhelming amount of AI-related cheating in K-12 schools. Unless students are forced to do all their writing in class (something that eats up precious instructional time), they are turning to the convenience of AI to do their work for them. One recent study found that approximately 35% of high school students admitted to using AI tools to complete their writing assignments. I was fortunate: by the time AI became widespread, I had developed my own voice and writing ability enough to tell how bad its prose was. But what about the kids who are in school now?
My perspective as a current student and future educator gives me insight into how easy it is to fall into the trap of letting AI think for you, and it concerns me for the students I have yet to teach. Will I look at their essays and see coherent arguments, or will I see empty façades dressed in flowery prose? I do not think AI will replace good writing, real writing, but I do believe it is impairing many students’ ability to recognize it.

