New England towns do not lack for selective liberal arts colleges. The one at which I briefly taught some time ago was not one of them. In its heyday, it had been able to pick and choose from mostly local students, but the dwindling of its appeal had led it to accept virtually all who applied.
A few years after I left, it would be turned into another sort of institution. Many of its then towering trees would be cut down. But back when I taught there they were, to all appearances, flourishing. Since it was fall, though, their leaves were dying. The school was, too.
Twice a week I would drive out to Dying College to teach fifteen freshmen how to write. At the time, I was studying a seventeenth-century text dealing with the emergence of organized scientific inquiry, but the texts that I put on the syllabus were modern. While the discussions were lively, I soon discovered that only one student in the class could write well, and that two students were struggling to comprehend what they were reading and to write about it.
They were both native speakers and their records reflected no diagnoses of dyslexia or anything else. I mentioned my concern to someone who was teaching another section of the class. She sighed and said, “You shouldn’t have asked them to read a whole essay.” She pointed to a pile of Barbie dolls on the corner of her desk. “I have them do semiotic analysis of objects as signs. Like, what do they say about society’s perceptions.”
I was not going to ditch reading. Instead, I ended up offering the two students individual tutorial sessions aimed at improving reading comprehension. They were hard-working and enthusiastic, and they made substantial progress over the course of the semester. Had I been teaching at Dying College in the age of AI, however, it is possible that I never would have known about these students’ situations and that I would have done nothing about them.
What Writing Reveals and Conceals
When used to generate papers, LLMs may provide invisibility cloaks for students who are struggling or failing to learn. When they are used to generate comments on papers, they can hide the fact that teachers are failing to respond to students on the basis of their own informed judgment.
But LLMs also serve to reveal information (such as patterns) within data that eludes untrained human attention, and to display the prevailing usage habits of a given language. Used pedagogically, they can provide mirrors in which we can see common habits of thought and expression. As such, the capacity of AI to obscure what is going on in any given teaching and learning situation is, I think, only part of the story.
Yes, AI, when used poorly, can generate smokescreens of bland, super-smooth, ultraprocessed writing product that may conceal skill deficits or blot out any incipient glimmerings of wit, beauty, and other artifacts of ingenuity and craft. But when used rightly—or at least differently—AI can help to reveal information about habits, patterns, and problems that may otherwise elude notice.
I do not marvel at LLM-generated writing (as distinguished from the science behind it), which I frequently find to be mediocre, tedious, clunky, and functional at best. I have started using AI in teaching not because it produces good—much less excellent—writing, but rather because it is good at competent, mediocre, and bad writing, depending on the context in which it is used. It can show us our blind spots and unconsidered habits. And if we can study them, we may be able to make better choices about our own writing.
Of course, such labels have their limitations, and most pieces of writing are successful in some areas and less so in others. Writing does not always need to be “one’s own” on an expressive level to work. For example, there is not much that one can add to a statement such as “The volume of the solution increased when the solvent was added.”
But other modes of writing present us with the opportunity to convey our own voices, and so to approximate the expressive power, range, and subtlety of in-person speech. Sometimes, though, we can’t do that. And, even more mysteriously, we can’t explain why our writing doesn’t match what we think we have to say apart from and prior to its expression.
Students judge their own writing without knowing what’s behind their self-criticism. A student sends you a draft by email with the comment that “it’s not good, but I can’t figure out what is wrong with it.”
Bad writing stands out. But mediocre writing may slide by without notice. It’s the boring part of the paper that we were not thinking much about when we wrote it. It may make us feel uneasy, because we can’t tell what is wrong with it. And because we cannot identify what is amiss, we may fall into the habit of cloaking it in LLM-style prose—with or without the use of an LLM.
Teaching and Cloak-Stealing
In Aristophanes’s play The Clouds, Socrates is lampooned as a figure who is as absurd as he is corrupt. He is depicted as teaching his students to use rhetoric to get away with injustice. He even practices petty thievery by stealing a cloak.
The main character, Strepsiades, who enrolls in Socrates’s school so that he can learn to make unjust arguments and so escape from having to pay his debts, is told by Socrates to take off his cloak prior to his initiation. Strepsiades later describes his surrender of his cloak as voluntary, but at the end of the play, when he burns the school down and is asked who is to blame for the fire, he says, addressing the school as a whole, “the one whose cloak you took.”
It is not necessarily an insult to say that teaching may involve, if not cloak-stealing of a sort, then cloak-revealing. Many first-year students show up at college prepared to generate prose that could have been written by an LLM even if it was not. A fictive, parodic sample of “bad” high-school-level prose may convey, in hyperbolic fashion, some sense of the problems entailed. In J.D. Salinger’s The Catcher in the Rye, the character of Holden Caulfield writes an essay on ancient Egyptians that leads to his failing his history class:
The Egyptians were an ancient race of Caucasians residing in one of the northern sections of Africa. The latter as we all know is the largest continent in the Eastern Hemisphere. The Egyptians are extremely interesting to us today for various reasons. Modern science would still like to know what the secret ingredients were that the Egyptians used when they wrapped up dead people so that their faces would not rot for innumerable centuries. This interesting riddle is still quite a challenge to modern science in the twentieth century.
For comparative purposes, here is a paragraph written by an LLM on the same topic this month:
Modern scientists are captivated by ancient Egypt due to its advanced knowledge in mathematics, engineering, and medicine. The civilization’s well-preserved artifacts and mummies enable studies in archaeology, genetics, and environmental science, while offering insights into early societal structures and cultural practices. Innovations like CT scans of mummies push scientific boundaries, and unresolved mysteries, such as pyramid-building techniques, fuel research, connecting ancient achievements to modern challenges in technology, health, and sustainability.
The latter is more sophisticated and less desperate than the former, but both are boilerplate. In the second sentence of the LLM draft, things start congealing into ultraprocessed writing product, wherein subjects do things they presumably cannot do but do so anyway, and it all sounds so plausible.
The artifacts and mummies themselves are said to enable studies while offering unspecified insights. It is not, then, the studies themselves that yield or convey the insights of their authors. One might imagine that what mummies have to say about early societal structures and cultural practices might differ from the insights offered by artifacts, but since they are not quoted and discussed, we will never know. The mystery of what pyramid building per se has to do with modern health challenges also remains for captivated scientists of the future to explore.
In terms of argument, LLM-generated writing favors sequences such as “on the one hand this, but on the other hand that, and both this and that contribute to contemporary understanding as human beings tackle the most pressing problems of today” or the like. Everything connects with anything. And when it doesn’t, LLMs tend to go ahead and make the connection anyway.
It is this sort of thing that numbs and irritates readers at the same time. Precisely for that reason, it can be useful in teaching writing, since, as Flannery O’Connor pointed out, “we can learn how not to write.”
Using One Technology to Teach Another
For the past seven years, together with colleagues who specialize in quantum physics and biophysics, respectively, I have been teaching scientific writing mostly to doctoral students, and to some undergraduates, in applied science, engineering, applied math, and other natural sciences and quantitative disciplines. The students in our class are generally engaged in producing, with coauthors, research-based papers for publication in journals.
In the context of our class, a colleague of mine, Professor Vinothan Manoharan, has given several guest presentations on the role of voice in scientific writing. In the 2024 version of this presentation, he included an exercise that set a paper’s submitted introduction beside the same introduction rewritten by GPT-3.5, and asked students which one was written by AI. He led a collective close reading of both introductions, and we talked about why it mattered that we could perceive a “voice” in the one version and not in the other.
This exercise was so inspiring to me that when I was asked to give a guest presentation last fall on writing papers in quantum science and engineering, I created a three-part exercise consisting of an abstract from a published paper, an LLM-generated version of the same abstract, and one that I edited myself. I think of these other texts as shadows of the original—hence, as shadow texts. They invite us to look back at the original to perceive the differences and to account for them.
Of the twelve students in the seminar, only one preferred the LLM-rewritten version. We spent more than an hour going through the three short chunks of text and stopped only because it was time for the seminar to end. Following that exercise, I developed AI-informed modules to work on with individual PhD students, and I am currently developing new curricular materials devoted to an array of comparative exercises in different genres.
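For colleagues who want to generate shadow texts of their own, here is a minimal sketch of how one might do so programmatically. It is an illustration under stated assumptions, not the procedure I used: it presumes the OpenAI Python client and an API key, and the model name and prompt wording are placeholders.

```python
# Minimal sketch: generate a "shadow text" of a published abstract for a
# comparative close-reading exercise. The client library, model name, and
# prompt wording below are illustrative assumptions, not a record of the
# seminar exercise described above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def make_shadow_text(original_abstract: str) -> str:
    """Ask an LLM to rewrite an abstract in its own default style."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the following scientific abstract. "
                    "Preserve its claims, but use your own wording."
                ),
            },
            {"role": "user", "content": original_abstract},
        ],
    )
    return response.choices[0].message.content


# Students then read the original and its shadow side by side and try
# to account for every difference they notice.
```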
AI and the Humanities
I used to take exception to the use of AI in teaching writing because I regard writing as an art and a craft that is oriented towards the perception and disclosure of truth. I still have questions and reservations. But I can see the opportunities, since I also regard writing as a technology.
In one sense, then, using AI to teach writing amounts to using one technology to aid in teaching the use of another. Yet students need to be able to take a detached view of LLM writing if they wish to use it well, and such distance can only be attained through close critical engagement with it.
As Walter J. Ong, S.J., has observed, “The fact that we do not commonly feel the influence of writing on our thoughts shows that we have interiorized the technology of writing so deeply that without tremendous effort we cannot separate it from ourselves or even recognize its presence and influence.” Perhaps the project of juxtaposing shadow texts with their counterparts and attempting to account for what is lost and gained, and concealed and revealed, in each medium may be part of that effort.
What do you think of using AI to teach writing? Comment below. And don’t forget to hit the ❤️ icon and share this essay. Thanks!