Prompted: Should Students Use AI, and How?
When ChatGPT Writes Better Than Your Average Term Paper, Maybe the Problem Isn’t the Student—or Even the Tool
Joel here. I’m pleased to introduce Prompted, a new occasional series exploring generative AI and the humanities.
The format? Epistolary! Hollis and I exchange discursive emails about the often bumpy intersection of culture and humanity’s latest cultural technology—namely, large language models like ChatGPT, Grok, and Claude.
We’re publishing these exchanges to surface relevant issues, share our questions, and think aloud in public. Our hope is that they’ll be as thought-provoking and, yes, generative for you as they are for us, no matter where your preferences land on the subject.
Here we go! Enjoy.
Buckets and Ethics
Hollis,
I know we’re both thinking a lot about AI in education and the humanities more broadly. I’ve got two angles I want to explore to get this series rolling, both involving ways we might be muddling the subject. The first is the bucket problem. The second involves ethics and pedagogy.
The bucket problem: AI is a wildly expansive term. If you’re online in any way, you’re already using AI, even if unknowingly. One thing I find tricky is pronouncements about AI, pro or con, that seem immune from specifics—e.g., If you’re a writer and you’re not using AI, you’re going to be left behind, or No creative should ever use AI in their work. Both are paraphrases of statements I’ve seen and heard several times now. Social media is lousy with both takes.
The relative value of a tool always comes down to specific use cases. How might a writer or other creative use AI? Unless I’m missing something, the particulars matter to the judgement. Our shorthand ways of talking about AI thus tend to muddy the stream, and a lot of communication ends up failing to advance any real understanding.
We’re putting too much under the header, and it’s all rattling around undefined. I’d love to hear more of your thoughts about that, especially since it seems to play into the second issue: ethics and pedagogy.
When I see commentary on the use of generative AI by students, specifically using large language models to write essays or complete other such assignments, I tend to see two summary judgements, usually expressed with either deflation or outrage, sometimes both: cheating and plagiarism.
How exactly is using AI in this way cheating and/or plagiarism? That’s mostly assumed. But it seems to me we’re shortcutting our understanding of the technology, ethics, and pedagogy without probing deeper.
So let’s do that: Consider the pickleball lobbed over the net. I await your return.
Best,
Joel
Soup, Seas, and String: Describing AI
Hi Joel,
Your email came yesterday just when I was re-reading Alison Gopnik’s “Stone Soup” piece on AI, so your “bucket” question naturally made me think of AI as a bucket of soup. So forgive this sort of soupy response, which allows me to put together in one place some thoughts on how to conceive of AI and sort of responds to your bucket question. Plus I just read an advance copy of your brilliant book, so I have been thinking about books and technologies.
It may be that the trait that distinguishes humans from other animals is not naming, or language, but that we mis-name things, calling things by the wrong name, pointing to something else. There’s so much metaphor and simile in the AI discourse that that’s where my thoughts have led me. My metaphor for AI is a ball of twine, not soup. But I’m not ready (or able) to say why yet.
I started thinking about the long history of metaphors for characterizing knowledge. For Plato (and many others), knowledge was light. For Newton, an ocean. For Locke, a cabinet waiting to be filled. For Hegel, a living tree. For Quine, a web or network. For Cicero, a treasure trove. Each of these metaphors—light, sea, container, organism, network, treasure—works in its own way to get at the entirety of what is known, what is learned or discovered, what is remembered, and how it all connects. So which works best in the context of AI systems, which assemble facts and knowledge and ways that humans engage with the world and each other from countless, disparate datasets?
In thinking solely about AI models as teachers, it would be helpful to agree on a conception of AI knowledge in order to represent what students are doing when they interface with a model and come away demonstrably more knowledgeable. Light shining upon them isn’t quite right, nor is jumping in an ocean. They may be filling their cabinet or climbing the tree of knowledge, but neither of those is right. Joining a network has great appeal in the digital age, but the network itself is not knowledge. Opening a treasure chest is best, but the metaphor is too romantic for what is essentially a mostly useful tool filled with as much dross as gold.
Because I’m a teacher, some citations: Plato’s Allegory of the Cave (The Republic, book VII) described education as a journey from darkness into sunlight. St. Augustine later amplified the idea of divine illumination, writing in the Confessions that the mind needs to be enlightened by light from outside itself: God’s truth. For Descartes it was the “natural light of reason,” by which clearly perceived truths cannot be doubted. The Age of Enlightenment itself makes the metaphor explicit. Kant’s What Is Enlightenment? loops back to Plato, with enlightenment (through reason) as man’s emergence from self-imposed immaturity, a kind of darkness.
While knowledge as light dominates Western thought (“shedding light on a problem”), it isn’t a great metaphor for AI, especially for students. Do we really want to tell them to log on and “go into the light”?
Knowledge as a sea (or an ocean of truth) is probably older than Isaac Newton who, near the end of his life, compared himself to a child picking up shells on the seashore “whilst the great ocean of truth lay all undiscovered before me.” Francis Bacon had earlier popularized the idea of venturing into unknown seas of knowledge; the frontispiece of Novum Organum (1620) depicts a ship sailing beyond the Pillars of Hercules on an ocean of discovery. Quine riffed on this with Neurath’s Boat.
There’s a lot to like about the sea/ocean metaphor in the AI era. AI is vast, boundless, and often unfathomable. You have to navigate it. There are currents that pull you along. You can get lost exploring it. There are always new seas to explore. You dive deep in it and plumb those depths. You can drown in it. You can run aground in it. It is sometimes unexpectedly shallow. It is always a voyage.
Knowledge as a storage container also sort of works for AI. AI is the container rather than the knowledge in it, I suppose. This is my least favorite metaphor for knowledge, to be honest. The idea appears in Plato’s Theaetetus, where Socrates describes the mind as a wax tablet (as you know, I love wax tablets) and also as an aviary or bird cage where we house pieces of knowledge and retrieve them when we need them.
Centuries later we get John Locke’s tabula rasa, the mind at birth as “white paper, void of all characters,” which experience later “furnishes” with ideas; his Essay Concerning Human Understanding (1690) refers to the mind as an “empty cabinet” or storehouse being filled. It’s hard to escape this metaphor, which is everywhere: bodies of knowledge, knowledge banks, archives, and libraries as storehouses of knowledge. It emphasizes possession and transfer of knowledge as if it were a tangible substance. People today talk of “holding an idea” or “filling someone’s head.”
So we might think of an AI model as a vessel or space that can be filled infinitely, or as the pitcher “pouring knowledge into students.” But is knowledge the supply or the serving? Is it the storing or the retrieving, or both? It’s just too messy.
Knowledge as a living, growing organism begins in the Western tradition with the Bible’s Tree of Knowledge, though it was left in Eden, and we don’t know whether we’re plucking from that particular tree. But the tree metaphor for how we organize knowledge (with branches) is still used and useful. Diderot used a Tree of Knowledge diagram (inspired by Bacon) in the Encyclopédie (1751) to organize human knowledge into disciplines, a kind of organic taxonomy of learning. Hegel used organic metaphors in the Phenomenology of Spirit (1807). Darwin’s theory of evolution (1859) involves branches.
So trees are a thing, and the tree/organism metaphor works for AI pretty well, if we think about knowledge as alive, growing and changing, with branches diverging and specializing, yet connected at the roots (fundamental principles) and dropping fruits and seeds that sprout. But as something a user relates to, a tree doesn’t work as well as the sea (diving in, navigating). You don’t spend the day simply climbing.
Knowledge as a web or network is attractive in the AI era. I won’t pretend to know the non-Western sources as well as the Western, but Buddhism offers concepts like Indra’s Net, which stretches infinitely in all directions with a jewel at each node, each jewel reflecting all others. Quine also described human knowledge as a “web of belief,” in which all our beliefs collectively form an interconnected network, where no belief is tested in isolation but only in conjunction with others. Leibniz earlier imagined a relational structure of all knowledge. In our era, the internet (and World Wide Web—do we still use this term?) serves as both a literal network of knowledge and a reinforcing metaphor: Tim Berners-Lee’s “Semantic Web” (2000s) treats knowledge as a network of linked data.
Does the network metaphor work for students accessing AI? Yes and no. Yes, in that knowledge is a set of nodes (ideas, facts, concepts) connected by a web of relations, and AI performs the work of retrieving and delivering them. In an AI model, no piece of knowledge stands alone; meaning and justification depend on its connections. The metaphor encourages systems thinking and interdisciplinary approaches (seeing knowledge as a web rather than isolated silos, or branches).
Maybe think of students as entering the network? Knowledge-network metaphors underpin techniques in AI (knowledge graphs) and education (connecting new knowledge to prior knowledge). They highlight connectivity, context, and complexity—understanding something means seeing how it fits in the web.
Knowledge as a cave of treasure sort of explains itself. But there’s a lot of slop in AI, so it isn’t a treasure trove. It was a great metaphor for universities but not for AI.
So: My conception of AI as the world’s biggest ball of twine is accurate in terms of AI architecture—a collection of facts and data in strings tied together and wound around each other over time—but would something so mundane work? What is the metaphor for jumping from one strand to the next? There’s something in me that thinks twine is better than a sea because there are ways that you pull a thread that seem more like what happens when you start querying an AI model.
I see that I have both answered your questions and not!
Hollis
Plagiarism, Cheating, Fraud … Something Else?
Hollis,
What a delicious reply. You’ve answered—and not—in some intriguing ways. So much to unpack and play with. First, thank you for sharing the Gopnik essay. I’d never seen it before but agree with her assessment that large language models “are ‘cultural technologies’ like writing, print, pictures, libraries, internet search engines, and Wikipedia.” I think that gives us a useful track to run on.
As you point out, we’re somewhat at the mercy of our metaphors here. What is thinking, anyway? Or thought? Or knowing? Or knowledge?
I’d like to propose two more. Let’s start with jazz (an analogy suggested by Nettrice Gaskins several years ago). Say you’re sax legend Sonny Rollins. Because you’ve trained on the great American songbook, you know the melody and chord changes to “Stardust,” “All the Things You Are,” “Body and Soul,” and countless more. You’ve internalized what constitutes the music. Should you be prompted by an audience, you could reproduce nearly any desired song to fit the moment. It won’t be the canonical version, if such a thing even exists; it’ll be improv.
LLMs essentially work the same way. They don’t store specific documents, records, books, or other texts—at least, not like a database does. Instead, they’ve been trained on all those texts and have statistically modeled the patterns of all that knowledge (can we call it that?) to such a degree they can improvise the information on the fly. When we prompt an LLM, it’s not retrieving information; it’s reconstituting it, reconstructing it based on all its training. That’s how you can get better responses by refining the prompt: “Yes, ‘Body and Soul,’ but take it down a key and try it in 6/4—or, and hang with me, 7/8.” And out comes another answer. The LLM “performs” information and we sit behind the glass window with all the buttons and knobs, producing the performance.
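For the technically curious, here is a minimal sketch of what that looks like under the hood. It is my own toy illustration, not anything from Gopnik or our exchange; the small open model ("gpt2") and the prompt are arbitrary stand-ins chosen only for the demo. Given a prompt, the model assigns a probability to every possible next token and improvises the continuation from that distribution; nothing is looked up.

```python
# Toy illustration only: the model scores every candidate next token rather than
# retrieving a stored document. Assumes the Hugging Face transformers library and
# the small open "gpt2" model. Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Jazz standards like 'Body and Soul' are"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # a score for every candidate next token
probs = torch.softmax(logits, dim=-1)        # scores become a probability distribution

top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # the model's most probable next words: reconstitution, not retrieval
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```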
Here’s the other metaphor I keep coming back to: AI as both library and librarian. Not a traditional library, of course. Unless added by the specific user, there are no books in there, as such. Instead, it’s a library of patterns, an impossibly large archive of the way humans have expressed their thoughts. The library catalog? That’s the underlying algorithm, which finds it all by probabilities; nobody really ever sees that except the librarian. When we approach the research desk and say we’re looking for X, the librarian hunts down the information that most probably fits the query. If it comes back with what appears to be Y, we can correct it. “No, I was looking for X,” we might say. If it were smarter, the librarian might respond by saying, “Ah, but this is X. Are you sure you know what you’re asking for?”
The answer to that is likely not—at least not all the way. And how could it be any different? We pull up ChatGPT or Perplexity because we don’t know. And we pull up Claude or Grok because, even if we did know, we wouldn’t know how to say it. We’re ignorant, and we’re looking for help. We don’t search for what we’ve already found; the same is true of research for term papers on Google or JSTOR, or of help in the writing lab. The most fundamental difference is that agreed-upon methods exist for those older activities; LLMs fall outside the bounds, at least in most classrooms.
This is where Gopnik is right to say that generative AI models are forms of cultural technology like writing or printing or the internet. We’re too far removed from the initial impacts of those technologies to see the similarities, let alone the pluses. Instead, we see the negatives, the disruptions.
And why not? Those are obvious, and they’re happening because students are violating the norms (even if it’s only the academic authorities who assume those norms are fully or truly binding). But in fixating on the violations, we skip past the foundational part of the discussion, where we come to understand what kind of tool we’re working with.
That’s why—at least it seems to me—these conversations generate more heat than light. We’re using the same words but not talking about the same thing. Until we define the tool, we can’t define its proper use. So, yes! The metaphors are foundational. If we don’t get some real definitions of what we’re talking about, we’re hosed.
I think several of yours help, especially when we layer them atop each other. I’m not keen on light or the sea, for the reasons you mention, but the cabinet, tree, web, and treasure all work, and they work particularly well when we mix them. AI does work in a sense like a cabinet full of treasure, if we recognize that it’s essentially storing not the treasure itself but the means of reconstituting it.
And with its biblical resonance—and maybe its threats—and its branching structure, the tree works as well. The problem with branching and information, of course, is that it’s too linear. Knowledge is also lateral (and diagonal, for that matter); it interconnects in all sorts of ways. This is one of the drawbacks of older forms of library organization that rely on linear breakdowns of subject matter. But the web takes care of the interconnections.
Maybe that’s what’s also appealing about your ball of twine. It signals something dense, tangled, and full of hidden overlapping connections. When I prompt an LLM, it can feel like pulling on a string and watching associations unspool on the screen. It also explains why the same prompt doesn’t always produce the same results, the way a database query would. Instead, you tug on the ends and a pattern emerges based on how the strands are coiled together at that moment.
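To make the unspooling concrete: again, this is a toy sketch of my own, with an arbitrary small model and prompt, not a claim about how any particular chatbot is configured. The response is sampled from that probability distribution, so running the identical prompt twice can pull out two different strands.

```python
# Toy sketch: sampling makes the same prompt "unspool" differently on each run.
# Assumes the Hugging Face transformers library and the small open "gpt2" model,
# both arbitrary choices for illustration. Requires: pip install torch transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Knowledge is like a ball of twine because", return_tensors="pt")

for run in range(2):
    out = model.generate(
        **inputs,
        do_sample=True,                        # sample from the distribution instead of always taking the top token
        temperature=0.9,                       # higher temperature widens the range of likely continuations
        max_new_tokens=25,
        pad_token_id=tokenizer.eos_token_id,   # avoids a padding warning with gpt2
    )
    print(f"run {run + 1}:", tokenizer.decode(out[0], skip_special_tokens=True))
```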
So let’s go back to the students. Are they plagiarizing when they use an LLM to generate a paper and turn it in as if it’s their own? Plagiarism implies the theft of another person’s work, but there’s no other person here. (We can find cases of AI plagiarism—where the LLM has “stolen” the words of an author—though that’s a funny word to use since there’s no intent on the part of the machine to filch the offending passages. We can say it has reproduced the words based on its training, and that reproduction matched the original. But when that happens, that’s the AI, not the student. I could go further, but I don’t think plagiarism is the right word.)
What about cheating? The better word is actually fraud, right? The student is claiming to have authored a text they didn’t actually write. Unless, of course, everyone knows what’s going on. Usually, the teacher does know, and the student knows the teacher knows, and the teacher knows the student knows. Nobody’s fooling anyone. So is it actually fraud?
It strikes me as more of a triple performance: the AI “performs” the knowledge, the student performs the act of presenting it as their paper, and the instructor performs the act of disapproval but has no other means of managing the situation. This is Alan Petigny’s insight that values change before norms. We’re on the front end of a values shift, but the norms are lagging, and no one’s entirely sure where the line is anymore—because, to return to the point above, we’re not even talking about the same thing.
When a student uses an LLM to research or even help compose a paper, they don’t think they’re cheating. They think they’re using a cultural technology like the internet or Google. If the instructor has flatly told them they can’t, the student might recognize the incongruity of being told they can’t use one tool (AI) but can use another, such as Google (which uses AI). Or they might just consider their teacher out of touch and figure the rule is meaningless.
Whatever the situation, it seems that in returning to our foundational metaphors, we also need to return to our foundational purposes. If the use cases determine the value of the tool, what’s the point of that contested paper, anyway? What do we expect the student to learn by writing it? What counts as thinking? As writing? Is that clear to the student or only assumed by the instructor?
I hope I haven’t gone on too long, but—speaking of string—there’s a lot to untangle here!
Joel
A Different Kind of Assessment
Joel,
Okay—I think I was delaying writing back because I have been pushing back hard against the premise of the question, the idea that a “paper” is a proxy for learning. I’ve been writing against the “learning outcomes” paradigm in education for some time now. The task of creating an “artifact” that “demonstrates” “learning” to satisfy some sort of rubric that the state or an accreditor can check is killing higher education.
Once upon a time—like, only 25 years ago when I first started teaching—the paper was a communication between the student and the professor. Years ago I had a student in a modern drama course (we read Ionesco, Beckett, Genet, Pinter, and others) turn in a final paper that was written by his cat. I have it somewhere. I told him I wasn’t going to give it back because it was so excellent.
It was basically, “I’m a cat, this thing you call modernism means nothing to me because all cats are ‘modern,’ in that we have no history but also nine lives,” and then went on to critique this play or that from a cat’s perspective. This is the kind of professor I was, that a student could just turn something like this in, not asking first, not at all worried about the grade, just being moved to respond to the prompt (something about modernism as a genre and how these authors engaged with it) by writing as his cat. He hadn’t even mentioned he had a cat all semester.
Now obviously I gave this student an A. The A was for the paper, the confidence, and the fact that turning in a paper written by his cat was perfectly appropriate for a class that was in large part about “the absurd” and not tying things up in bows and resisting expectation. Who is more resistant to following rules than a cat? And the paper, as I recall, didn’t hammer the point home. It was part of the unstated premise.
Anyway, the moral of this story is not that ChatGPT couldn’t have written such a paper (it couldn’t, without a specific prompt) but rather that the student needed me as a professor in order to write such a paper, which I invited by my being, not by prompting. It was a demonstration of learning beyond any metric—and one that would have failed any metric. It is closer to your jazz metaphor than anything else.
So the other day when I tweeted, “The problem is not students using AI. The problem is asking students to demonstrate understanding with a ‘paper.’ Any campus can change this tomorrow,” I was pushing back against this artifact. I followed up with this:
1. The Personal Quest System
Each student embarks on a semester-long “quest” tailored specifically to individual interests:
At the beginning of the semester, AI helps identify each student's interests, learning style, and knowledge gaps through directed conversation.
Then it creates a personalized learning “journey” with milestones that connect course concepts to things the student actually cares about.
Students document their progress through multimedia “quest logs”—videos, audio, visuals, writing—whatever format works best for them.
The final “demonstration” is a creative project that demonstrates mastery in a way that’s meaningful to that specific student.
No two quests are identical, making AI-generated content pointless. Because it’s personalized, students are individually invested in the outcome.
2. The Evolving Simulation
Instead of static tests or papers, imagine an AI-powered simulation that adapts in real-time to each student's decisions:
Students enter a complex simulation environment related to the course material (historical period, business scenario, scientific phenomenon).
As they make decisions and solve problems, the AI adjusts the simulation to challenge their specific understanding.
The system identifies knowledge gaps and creates new scenarios that specifically target those areas.
Assessment happens along the way through how effectively students navigate complex challenges.
Assessment is turned into more of an engaging game than a test, even as understanding is evaluated.
Both approaches remove concerns about cheating because the assessment is focused on application rather than regurgitation. And most importantly, they would be more fun than writing a paper. . . .
I was thinking about some of the best interactions with my students over the years. Sure, we had a syllabus and everyone read the same things over the same weeks and we had amazing classroom discussions. But the papers were individual enterprises, and while the cat paper was exceptional, in fact all the papers were a version of that, between the student and me, our personalized conversation, something I drew out of the student. Nobody ever plagiarized in my class, or if they did, they personalized the plagiarism so much it wasn’t plagiarism anymore.
It’s the individualized relationship that matters, and if that can be done with ChatGPT as the teacher, working with the student, then that to me is a good outcome.
Back to twine—it’s pulling a thread, and each student will have a thread that is their own, linked to all knowledge but still their own thread, entwined with their own interests. Once the student and the AI are pulling together on this thread, it is a whole different thing than writing a paper to a prompt.
These sorts of new kinds of “artifacts” are closer to the cat paper than anything else I can think of. Neither that student nor I knew that the semester would end in a final paper written by a cat. It just seemed like the right thing to do and it was. Today, neither that student, nor I, nor ChatGPT would know what the result of the “Evolving Simulation” exercise would be. The whole point is the evolution.
Hollis
What Tugging the String Reveals
Hollis,
So, to paraphrase Dick the Butcher, the first thing we do, let’s kill all the papers. I think you’re probably right about this, though I always enjoyed writing papers as a student.
The trick was, as far as I could tell, to make them humorous—whatever the subject. I always figured that if my professors were spending a grueling weekend grading papers, I wanted mine to make them chuckle or smile or otherwise not regret their life choices. I was never so clever as to have my cat write one, but I often succeeded and regularly got A’s.
Did I learn anything? Of course. But did the professor actually know that?
I’m glad I spent a few days noodling on your response because it helped me think through something I’ve struggled with when people object to student use of LLMs.
I once had to write an impromptu Blue Book essay to test out of remedial English at my JC, Sierra College in Rocklin, California. By sheer coincidence, the assigned topic touched on an area I’d been reading about, so I had plenty to say. But the real test wasn’t my command of the subject matter; it was the act of organizing ideas under pressure and expressing them with sufficient respect for basic grammar. Could I perform the thinking? Could I shape a thought, sustain it, and communicate it?
That’s what a paper is supposed to show. But papers assigned outside of live context have always had a fundamental vulnerability: they don’t require an actual performance. They don’t require thinking in public, in time, under any meaningful form of scrutiny. A student could always outsource the heavy lifting to a friend, a paper-writing service, or the campus writing lab.
ChatGPT only makes the problem obvious, and the problem isn’t the tool—it’s the type of assignment itself. It was never built to do what we claimed it did. That doesn’t invalidate the good work done by good students operating in good faith; I was one of them. But it does call into question the paper’s status as a reliable proxy for actual knowledge, actual thinking. The instrument was never designed to detect what we assumed it measured. It might reflect both knowledge and thinking, but it’s never been clear it was the student’s own.
ChatGPT also highlights another problem with papers, especially the kind that most students write. If ChatGPT can spit out something that simple and derivative, we shouldn’t have students waste their time on such papers anyway. At least, it seems that way to me. What are we assessing, that they know as much as they can already find on Wikipedia?
I hear professors and other academics regularly complain about AI slop and crummy papers their students are bringing in. Were the pre-ChatGPT papers things of wonder and astonishment? Not usually. Sturgeon’s Law would seem to apply in both worlds: “90% of everything is crud.”
But! If I were a professor and my students were turning in cruddy ChatGPT-enabled papers, I’d hold them to a higher standard. Bring me something novel! If you can’t be more imaginative than this, we have a problem. You now have better tools, so let me see better thinking. ChatGPT boosts the baseline.
That’s what I find most intriguing about your suggested alternatives: The Personal Quest and The Evolving Simulation. Not necessarily because they solve everything, but because they acknowledge the real question: not what was produced, but how did it come to be? What paths were explored, what resistances encountered, what insight earned?
That’s also why I keep coming back to performance metaphors. Jazz still seems apt. A paper written by a student in isolation, absent time constraints or social context, is like a studio recording polished long after the moment has passed. A live solo, however—whether good or messy—demonstrates inventiveness, fluency, responsiveness, and risk in ways that a sterile paper can’t quite manage. Maybe that’s why those dreaded Blue Books are making a comeback.
But that’s not the only answer, and here’s where your cat paper, in all its delightful absurdity, is so interesting: What it shows is that a student, when given room and trust, will sometimes take real risks, will attempt something that only works because of the context you as their teacher cultivated. You made it safe to experiment. That paper wasn’t written to a rubric; it was written within a relationship. It’s the same gamble (safer in my case) I took when writing satirical and otherwise humorous papers. I was counting on my professors’ human response; I did something different, something more than my peers did, and I was rewarded for the effort.
Something else you said has stuck with me: “The whole point is the evolution.” I agree, and both the traditional paper and its usual ChatGPT expression work against this without the sort of relationship you cultivated in your classroom.
Evolution can’t happen if outcomes are known in advance. And yet, most formal assessments still assume that they should be. The very structure of those assessments—the conventional paper included—works against discovery. It rewards students for producing artifacts that confirm what we already know and believe learning should look like. It tends toward the safest, most middling, most conservative expression of ideas. Unless a student is creatively using ChatGPT, they’re often producing something worthy of little more than a full-body eye roll. The professors can confirm!
But this is precisely where my frustration with the standard academic complaints about LLMs comes in, especially in the humanities. Back to Gopnik: Generative AI may be the most powerful cultural technology of our lifetimes (I say may, but I mean is), and yet many seem more invested in condemning it than in discovering what it might make possible or how it can be used. They seem to point to the 90% that’s crap and condemn the whole enterprise—never mind that the reason we’re getting so much crap is that we’re basically asking for it and not cultivating the kind of safe, experimental environments where students might produce something better; and, of course, they could if prompted. I think this might even address the concern that LLMs are actually making us stupid or undermining cognitive performance. (Were we doing so well beforehand?)
Generative AI isn’t going away, and it’ll be everywhere in the workplace (not that that is the only criterion to note). It seems to me we need less pontificating and positioning from afar and more intentional experimentation, more playing, more prototyping. We can’t know what this is or what it’s capable of until we’ve used it, extensively—and it’s clear many haven’t.
So, yes! Back to your twine metaphor: Let’s keep tugging on those strings and seeing what they’re attached to. We don’t even know right now! Let’s follow where they lead. And if we do it together—students, educators, tools, and all—we might rediscover that learning has always been about emergence. And if that means we spare students the annoyance of writing papers that don’t mean much to begin with, all the better.
But what of the claim that using LLMs like ChatGPT is degrading our ability to think? As Nicholas Carr recently said, “Armed with generative AI, a B student can produce A work while turning into a C student.” What do you make of that?
Best,
Joel
Solving for the Wrong Students
Joel,
Yes! I saw that tweet. And it reminded me of long-ago advice from a department chair who said he was quoting Alfred North Whitehead when he said there are occasions when you give a C student an A because you suspect someday he’ll deserve it.
I’ve never found the quote, but I know what he meant.
The problem is never the B students. The B students are like the second pig in the Three Little Pigs story. The first pig spends the least time building his straw house and then plays the rest of the day until the wolf comes along. The second pig works all day on his stick house, and it still doesn’t withstand the wolf’s huffing and puffing. The third pig works all day on his brick house, and he at least gets the satisfaction of sheltering his homeless brothers.
The moral of the story is don’t be the second pig! That’s the one Nicholas Carr’s tweet refers to. The first pig is the one who uses ChatGPT to do as little as possible and go off and play. The third pig is going to do old-fashioned research and understand what ChatGPT is useful for.
Perhaps the best thing that AI will do is get rid of the entire grading system? It will huff and puff and blow it all down!
Let’s talk about the future next time—let’s truly put ourselves 10 years from now . . .
Hollis
Thanks for reading. If you enjoyed this post—or hated every bit of it—please share it with a friend!
More remarkable reading is on its way. Don’t miss out. Subscribe. It’s free and contains trace amounts of enlightenment and zero microplastics.