A new analysis by Wharton researchers, tracking nearly 7,000 scientific abstracts submitted to the prestigious journal Organization Science, has confirmed what many quietly suspected: since ChatGPT arrived uninvited at the scholarly table in November 2022, AI-assisted writing has surged by 42 per cent, whilst the quality of research submissions has measurably declined. The numbers are damning, but the deeper malaise they point to is far more troubling than any statistic.

Let us be precise about what a Large Language Model actually does. It ingests vast oceans of existing human thought – papers, books, arguments, observations – and recombines that material according to statistical patterns learned from its training data, producing text that resembles insight without ever generating it. There is no original spark within these systems. There cannot be. A machine that has never stared out of a rain-streaked window, never felt the discomfort of an unsolved problem gnawing at 3 a.m., and never experienced the particular satisfaction of an idea that arrives unbidden – such a machine has nothing original to offer the world. It has only the world's existing noise, repackaged.
This is not merely a technical limitation. It is a philosophical one. Science advances precisely because individual minds, shaped by unique experiences and perspectives, ask questions that have never been asked before. The Wharton researchers note that AI is not levelling the playing field – non-native English speakers and new entrants gain little publication advantage from its use. What it does level, catastrophically, is the quality of inquiry itself, homogenising thought into a grey porridge of algorithmically acceptable sentences that say nothing new because they cannot say anything new.
Yet the graver indictment here is not of the machines but of us. AI tools were conceived as instruments – calculators for language, assistants to amplify human thought. Instead, we have inverted the relationship entirely. The “publish-or-perish” culture has found in AI a convenient shortcut, and scholars – rational actors responding to perverse incentives – have obliged. We have not so much adopted a tool as surrendered our intellectual responsibility to one.
We are approaching an equilibrium in which more research produces less knowledge. Used with discipline and honesty, AI may yet serve a genuine purpose. The question is whether we still possess the courage – and the habit – of original thought. If this analysis is any indication, that habit is eroding faster than we dare admit. It is time, urgently, to reclaim the distinctly human act of thinking for ourselves.
