Hug the Robot? AI and the Humanities
A Symposium Featuring Hollis Robbins, Martin Puchner, Shadi Bartsch, Henry Oliver, and James Pethokoukis
It’s easy to pit the humanities against the STEM disciplines of science, technology, engineering, and math. Nothing prompts the square-off like AI, particularly large language models (LLMs). But what about potential partnerships between AI and the humanities?
AI companies are using poets to train their LLMs, and countless practitioners in the humanities are already using AI to help them in their work. Scholars have, for instance, created an AI model to facilitate translation of Akkadian clay tablets, which could shed new light on the politics, economics, and religion of the Ancient Near East.
Others are busy creating AI guides to help readers explore the classics. And Harvard English professor Martin Puchner has generated custom GPTs to help readers engage the ideas of Socrates, Thoreau, George Eliot, and others. You can even get leadership advice from Machiavelli (“use at your own risk”).
Given the strong reaction in the humanities to AI, I thought I’d speak with Puchner about it, along with a few other academics and intellectuals: University of Utah humanities dean Hollis Robbins; University of Chicago classics professor Shadi Bartsch; author Henry Oliver, creator of The Common Reader; and American Enterprise Institute senior fellow James Pethokoukis, creator of Faster, Please! I asked all five the same six questions:
What are the relative upsides and downsides of AI in the humanities, as you imagine them actually playing out?
Given the concerns some express that AI will replace human creativity, originality, and critical thinking, how do you address those worries?
In a world of infinitely accessible information, it would seem the value has shifted from the answers we produce to the kind of questions we might pose. How do the humanities improve our questions?
Speaking with economist Tyler Cowen, venture capitalist Peter Thiel recently said he thinks “word people” will do better than “math people” in an AI future. Do you agree? How do “word people” best thrive in an AI future?
What role do the humanities play in guiding or informing the work of STEM disciplines? How can these worlds best cooperate?
As universities struggle with budgets and limit their humanities offerings, how can AI help fill the gap—either inside universities or out?
Below are their answers.
Hollis Robbins
Upsides, downsides
I remind people all the time that AI developers were not expecting that the public would immediately want to use LLMs as a lookup machine that would also write reports for you—a kind of cross between Wikipedia and a term-paper-writing service. OpenAI was developing products to reason and write in natural-sounding language, not to give consumers factual answers to random questions. The early conversation about ChatGPT in the humanities focused on errors as well as plagiarism, on what LLMs couldn’t do well rather than on what they could do, or soon would.
Back in 2020 I had been working with researchers to determine how well GPT-3—the earlier model—could generate poems “as good as” human poems. Our findings showed that working poets—people who were exceptionally good at understanding language, by which I mean the diachronic and synchronic relationships of words to other words—could tell the difference between GPT-3-created poems and human poems even while non-poets could not. We poets could hear the parrot nature of GPT-3.
While newer models—GPT-4o, Claude 3.5, the new Llama—reason with words much better than previous versions, LLMs still want words to be monosemantic (to mean one thing) rather than many things simultaneously. Yet the beauty of language is that words mean wonderfully various things at once. You’ve probably seen the study showing that LLMs do not do well at the New York Times game Connections, in which players have to group sixteen words into four groups of four based on some shared characteristic—homophones, names of hip-hop stars, rhymes, things you’d find in a kitchen, Mariah Carey song names, the first halves of names of U.S. presidents, computer commands, and so on. The key is to see all the possible characteristics of every word; suddenly you see relationships.
Creativity, originality, and critical thinking
AI is already provoking conversation about what is “good enough” for most purposes. What is a “good enough” song? A “good enough” work of art? A “good enough” film? The story of human creativity in all creative realms—art, poetry, music, literature, architecture—is the story of which works survive over time. Go to a museum and you will generally see the very best work over centuries.
As a rule, great works of creativity and excellence of craft survive in part because they inspired other artists who looked upon the great work and marveled at it. (When you go to a museum, you can usually discern influence over time.) Does AI marvel and change its output because it just read or saw something marvelous? Perhaps someday it will, but how do you teach it to marvel when it has already absorbed so much and hasn’t yet marveled?
Better questions
Consider the yellow roses that Newland Archer sends to Countess Ellen Olenska in Edith Wharton’s The Age of Innocence (1920), which is set in 1870s New York. Most scholarship on the roses has to do with what they “symbolize” and the like. So when you ask ChatGPT or Claude 3.5 or another generative AI product about the roses, it will tell you about what they symbolize. The output won’t be any better than what is found on the web.
If you ask it about the supply of yellow roses in New York in the 1870s, it will give you information about the state of floristry in the era of railways and steamships and long distance transport, and this is better information than most literary critics have (an upside!), and it may tell you that yellow roses were brought to the U.S. from the Middle East a century before, but the information will not be connected to Wharton’s novel.
You have to ask it whether Wharton is commenting on the state of floristry at the time (Newland Archer doesn’t seem to know, when he gets into the habit of sending yellow roses, that there is no reliable supply chain, and he “scour[s] the town in vain” to find more, weeks later, leading to an existential crisis) to get output that combines the best of AI with standard critique of the novel, something new and innovative. But anyone who knows enough to ask questions about what Wharton might have meant by incorporating the still undeveloped cultivation of yellow roses in 1870s New York into her novel doesn’t really need AI. In other words, AI currently is good at answering questions that a smart human knows enough to pose.
So to answer your question—how can we teach better questions that allow AI to help humanists? We can start with the best four questions we should already be asking: What is known? How is it known? What is still unknown? and Why is it still unknown? We don’t need AI to teach these questions, but AI should make them absolutely central to all teaching in the humanities.
Word people, math people
I’m writing an essay in praise of Peter Thiel’s view of the humanities generally. I’ve seen Thiel speak in person twice in the past two years at academic conferences and his not-so-Straussian message is: The humanities today are shockingly insignificant. Or, to paraphrase Norma Desmond in Sunset Boulevard, the humanities are big; it’s the faculty that got small.
I think Thiel is absolutely right. Consider the fall of 2022, when FTX was spectacularly imploding, Silicon Valley crypto favorite Sam Bankman-Fried (the epitome of the “math person”) was revealed to have riddled FTX with fraud, and a previous Silicon Valley favorite, Elizabeth Holmes, was sentenced to eleven years for her fraud as CEO of Theranos. Where were the humanities scholars who should have been asking Girardian questions about mimetic desire for shiny new things?
Humanists were ignoring all that (and ignoring ChatGPT) in order to read John Guillory’s just-published sociology of literary criticism, Professing Criticism, which takes a hard look at what “word people” have been doing for decades (not much) and asks when they will again pose big questions. I see AI as forcing “word people” to ask better questions about the world and how we live in it.
Humanities and STEM
The prompt! Objects at rest tend to stay at rest until they’re prompted. I’m working on a project now with several faculty members to develop a prompt language that is less imprecise than English. We’re thinking histograms or Latin or possibly a revival of Esperanto.
Filling budgetary gaps
Your question gestures to places like West Virginia University, which recently cut its language programs for budgetary reasons. AI can fill the gap in some ways. If the idea is that state universities have language programs so that there will also be speakers and writers of a particular language in that state—creating teachers of Spanish, French, Italian, German, Russian, Arabic, and so on, to develop competency in future generations—then I suppose you could argue that AI translation technology is “good enough” for most purposes. But is it?
The real opportunity for AI researchers developing products in the education space is not tutoring, despite what everyone is saying. Ed tech has not come close to developing any product with the teaching power of a great human teacher. AI ed tech is “good enough” at delivering information, and if that’s fine for you, fine.
The real opportunity for AI researchers in the education field is scouring the existing literature for what has been overlooked, the needles in the haystack of insignificant and redundant scholarship that makes up most of what is published. Only AI has the power to filter out human weakness and generations of gaming the system—citation churn, citations made for political reasons, messes of dead ends—and then identify what deserves a second look.
Hollis Robbins is dean of the humanities at the University of Utah and author of Forms of Contention: Influence and the African American Sonnet Tradition.
Martin Puchner
Upsides, downsides
The main downsides are disruptions in the labor market. As many people have pointed out, AI is coming for white-collar jobs in advertising, PR, communications, all kinds of content creation, and these are jobs humanities majors have traditionally gone into. This isn’t to say that the humanities won’t be able to adapt and that new kinds of jobs won’t emerge, which I think they will, but the disruptions will come first and they will hurt.
The upsides, right now, are intellectual and pedagogical. We have been waiting for AI for something like two hundred years, and now it’s here, though it looks very different from what we expected. With its language and image creation capacities, and the ability to synthesize knowledge, AI should be a boon to people in the humanities. Alas, that’s not how most colleagues think of it right now, but some of us (including everyone on this blog) are trying to change that.
Creativity, originality, and critical thinking
I think many of these worries are driven by the wrong kind of science fiction. We in the humanities often put a kind of defensive halo around terms such as creativity, originality, and critical thinking, but if you look closely at how the humanities are actually taught, I’m not sure that we’re all that good at teaching creativity, originality, or critical thinking; when these values are taught, they’re more often byproducts of our teaching practices than deliberate goals.
I do believe in these values. I just think that if we were to design teaching practices that primarily aimed at them, we’d end up with something very different from what we currently have. What I hope is that AI will force us finally to take the teaching of these values more seriously, to be more intentional about it. And who knows, perhaps AI will actually help us achieve them, if we use it right.
Better questions
I like the way you pose this question! There’s a long humanities tradition based on asking questions, beginning with figures like Socrates, Confucius, and the Buddha. More recently, “hermeneutics” from Gadamer to Heidegger has made the posing of questions a primary focus.
Here, too, I think that if we want to take this approach seriously, we’d have to change our teaching practices. What we would need is real methodological diversity, teaching students different approaches, different ways of asking questions. Sadly, our actual teaching practices are often much narrower.
Word people, math people
I’ve been following these statements with great interest (Robert Goldstein, the COO of BlackRock, recently made a similar point). What intrigues me is that these are not self-serving humanists making wild assertions about the value of the humanities, something I’ve become quite skeptical of, but hard-nosed business people.
What they see is that something huge is happening with language, which is, after all, humanity’s key technology. And the humanities know a lot about language. What we in the humanities need is to learn how to use that knowledge to maximal effect.
Humanities and STEM
On the whole, I’m not a big fan of a division of labor according to which STEM people build exciting new things and humanists then “criticize” those things. I don’t think we in the humanities have a patent on ethics or morals, nor do we have a particularly good track record in those domains. I actually think we have a lot to learn from STEM when it comes to evaluating evidence, guarding against common fallacies, accepting negative results, and changing hypotheses.
Having said that, what we have is a deep sense of history (things were different in the past) and of cultural diversity (things are different in different places), so bringing these areas of knowledge to the table is important.
Filling budgetary gaps
I think AI will have a huge impact on education across the board, including the humanities. Just two examples from my own experience. I’m currently building a critical thinking and writing course for Harvard’s new online platform. The course is not restricted to Harvard students and will be scalable, meaning that a potentially unlimited number of people can take it. To this end, we’re figuring out both how to use AI for all kinds of teaching purposes, and also to emphasize those aspects of writing and thinking that AI is not good at.
At the same time, I’ve also been building customized GPTs that let you talk to Socrates, get writing advice from Scheherazade, and life coaching from Montaigne. These are just two examples of how AI can make teaching more accessible and more interactive. Also, hopefully, more fun.
Martin Puchner is professor of English and comparative literature at Harvard University and author of Culture: The Story of Us, from Cave Art to K-Pop.
Shadi Bartsch
Upsides, downsides
On the plus side, I hope that access to a huge store of data will enable humanists to demonstrate the humanistic embeddedness of scientific research as well, creating a bridge between C.P. Snow’s “two cultures.” This outcome would weaken the perceived chasm between the two fields that lets people say things like, “The humanities are dying,” and (on campuses) lets them represent non-STEM departments or institutes as inessential to human growth. There is no us versus them. The humanities and sciences are in symbiosis, and my hope is that AI will help us see this.
On the negative side, we’ve seen some problems playing out already in secondary and higher education, such as students handing in LLM-generated papers or not learning how to do original research.
Outside the academy, it’s a hugely complex question, of course. How will AI affect our relationship to knowledge? In the university context, I am cautiously optimistic, since I hope the continued development of LLMs will result in an emphasis on, not a diminution of, creative thought and bridges between knowledge silos. More broadly, we humanists and social scientists are going to be able to comment on AI’s effect on humanity even as it happens, a change potentially more dramatic than the invention of the printing press.
Creativity, originality, and critical thinking
Prior to any positive outcome, we have to remember that we, the humans, have agency. We can decide whether or not we are going to let AI impact human creativity.
That is why humanists, scientists, and the developers of AI need to work together from the start to limit potential negative outcomes on society. Later will be too late (this is a point emphasized by Dr. Songyee Yoon, former CSO of NCSOFT and advisor to Stanford’s HAI initiative). If we can ask this question about creativity, can’t we all work together to solve it? And yet, who is doing this work? It’s urgent!
Better questions
I take it you mean in a broader sense than knowing how to enter better prompts into LLMs? I’d say we need to understand that we live in a tremendously complicated environment, and if we don’t build that knowledge into our questions, the simplicity of our queries will generate simple answers. And that is dangerous. And yet it’s humanistic and social-scientific contexts (history, literature, sociology, anthropology, history of science, philosophy, and others) that make our lives complicated in the first place.
The reasons Socrates was put to death in Athens in 399 BC are not reasons that reverberate with us now: can AI understand that? If not, improving our questions won’t help.
Word people, math people
I decided to ask Perplexity.ai, which cannot be a “word person” because it actually knows nothing but algorithms. Combing through the human-generated data available to it, it reflected the following views:
Peter Thiel’s statement that “AI is good for word people, bad for math people” highlights the increasing importance of language skills in the AI era, namely:
—Since most of us communicate via language, communication with AI will require well-developed language skills.
—The skill of crafting precise and effective prompts to guide AI output is becoming increasingly valuable. Those who excel in language and communication are naturally suited for this role.
—Content creation and curation: Human expertise is still needed to create nuanced and emotionally resonant material.
—We’ll need humanists who can collaborate with sciences and we’ll need ethical governance along with that.
—Word people from various backgrounds can contribute to making AI more representative and beneficial for all users.
—Bridging technology and humanity: As AI becomes more prevalent, there’s an increasing need for individuals who can explain complex technological concepts to non-technical audiences and ensure that AI development remains focused on solving human problems.
I think that’s pretty good for a non-word nonperson. Imagine what the rest of us will be able to do.
Humanities and STEM
The first step to cooperation is communication. This is why I am advocating for a rethinking of departmental and divisional units at American universities, why I have a white paper to that effect, and why I and other faculty at the University of Chicago implemented this “discipline-agnostic thinking” at the Institute on the Formation of Knowledge (IFK).
Taking that first step in our thinking is crucial. We’re nowhere near cooperation yet; we have to speak to each other first and understand our commonalities.
Filling budgetary gaps
We’ll need leaders of higher education to support a context in which AI, science, and humanities can work in collaboration. I honestly don’t think AI is going to do much for filling the gap without this as a beginning point.
Shadi Bartsch is professor of classics at the University of Chicago, director emerita of the Institute on the Formation of Knowledge, and translator of Vergil’s Aeneid.
Henry Oliver
Upsides, downsides
When you’re reading Ulysses and you get stuck on a tricky passage, you can ask ChatGPT-4 and it will give a reasonably good explanation. (It will also recognize the book without being prompted.)
The downside will be reduced engagement with literature for the median person: AI, like adaptations and SparkNotes, is a new way to not read the book. However, I do believe very strongly that of the many things AI will be able to do for humans, reading isn’t one of them. AI can be a complement to reading, but there is no adequate replacement for reading Anna Karenina.
Creativity, originality, and critical thinking
I don’t think AI will replace human creativity. For example, I think novelists will still write novels. (Live music is thriving. IRL events seem to be growing. Will we see a new Dickens reading their work?) And models including Claude 3.5 Sonnet don’t seem to me to have made much progress as creative writers.
If you drop a few lines of poetry into ChatGPT, it responds with the rest of the poem. This is true even if you drop in a stanza from the middle of Edmund Spenser’s The Faerie Queene. It did write its own poetry when I gave it some Shakespeare, but in general I think that indicates something about the model.
That said, technology is often an impetus for new forms of art: the Renaissance and perspective, Shakespeare and the indoor theatre, the nineteenth-century novel and mass publication, Hollywood and celluloid. (Some people would add television and video games to that list.) So there will probably be some sort of AI art, and I expect that the best forms of that will be made by the most creative users.
Better questions
I’m not an academic, so I can’t speak to this, but there seems to be an opportunity to interrogate books in a much bigger, more methodological manner than we have previously been able to.
Word people, math people
I also found Thiel’s answer interesting. While it’s a very good and unusual answer, I don’t think it’s right. I’m sure that many “word people” will flourish in an AI future, but that will be because of their other personal qualities. There are plenty of “word people” without the necessary interest, engagement, or sense of possibility in AI. There are plenty of “word people” who seem to wish it didn’t exist or who ignore it. I think that will make a very, very big difference to who will flourish.
Humanities and STEM
The limits of your imagination are the limits of your world. Scientists like Thomas Edison and Nikola Tesla knew that and read widely in the humanities. Tesla, for example, could recite large amounts of Goethe, and it was a passage from Goethe’s Faust that inspired the induction motor.
Filling budgetary gaps
I can’t answer that question for universities, but I do hope as I said above that it will enable people to read great literature with something more like a tutor available to them. I see the humanities flourishing online, and AI will make that easier—even if only by reducing the back-end cost of running something like the Catherine Project.
Henry Oliver writes The Common Reader newsletter and is the author of Second Act: What Late Bloomers Can Tell You About Success and Reinventing Your Life.
James Pethokoukis
Upsides, downsides
One of the most exciting AI applications is as a “super research assistant.” While most of the focus here has been on STEM fields, it’s not hard to imagine AI allowing humanities scholars to analyze vast amounts of text, images, and data to uncover new insights and patterns. This could lead to breakthroughs in understanding historical trends, literary analysis, and cultural phenomena.
AI can also democratize access to knowledge by providing powerful translation tools and making rare texts or artifacts more accessible through digital preservation and analysis. For example: Researchers have deciphered the first word from ancient Herculaneum scrolls, charred by Vesuvius in 79 AD. Using AI, the “Vesuvius Challenge” team cracked the 2,000-year-old code without unrolling the fragile papyri. This breakthrough opens the door to reading these long-inaccessible historical documents.
Downsides? Other than people thinking the humanities are unimportant in the Age of AI, there’s also a danger of critical thinking skills being eroded if AI-assisted analyses become AI-generated analyses. The key will be leveraging AI as a tool to augment human creativity and insight, rather than replace it.
Creativity, originality, and critical thinking
It’s important to keep in mind that we are talking about a “thinking” technology that still fundamentally lacks the human consciousness, emotional depth, and lived experience that fuel true creativity and critical thinking. Being trained on the internet falls short of being trained on the fullness of the human condition.
As a tool of creativity, AI can serve as a source of inspiration, helping to generate initial ideas or variations that humans can then refine and imbue with deeper meaning. This is obvious to anyone who, say, has tried to refine an AI-generated image on Midjourney.
For critical thinking, again, AI can process vast amounts of information and identify patterns, but human insight is still crucial for interpreting context, questioning assumptions, and drawing nuanced conclusions relevant to human society. And it is simply impossible to predict the new ways humans will find to express originality by working in tandem with AI, creating hybrid forms of art and thought that weren’t previously possible.
It is a similar challenge as predicting how human workers will work in tandem with AI or the new jobs AI will create.
Better questions
At its best, the humanities improve our questions by encouraging us to think critically about context and implications. We need psychology and sociology, for example, to understand the anthropological impact of rapidly advancing technology. This interdisciplinary approach enables us to ask more comprehensive questions about how AI and other technologies might reshape human society and behavior.
Finally, the humanities encourage us to question the ineffable and explore beyond the literal. We need literature and art and the social sciences to judge value in a world where AI can create content. This pushes us to ask deeper questions about meaning, beauty, and human experience that go beyond mere information processing.
Word people, math people
My baseline, which assumes a world where AI has constraints and does not possess science-fictional abilities bordering on the magical, is that math skills will continue to serve workers quite well. But so will the ability to understand, analyze, and communicate the human condition through art and history and the social sciences. I think it’s a mistake to expect a radically different world within a time frame relevant to most of us.
Humanities and STEM
Again, the humanities enrich STEM disciplines by providing contextual understanding of human experiences, cultures, and historical trends that can inform scientific inquiry and technological innovation. Linguistics and communication studies enhance how scientific findings are conveyed to the public. Literature and art inspire creative thinking in design and problem-solving. Anthropology and sociology offer insights into human behavior and social dynamics. History provides valuable lessons from past scientific and technological developments.
By collaborating, humanities and STEM can create more comprehensive and nuanced approaches to research and innovation. This interdisciplinary synergy can lead to breakthroughs that neither field might achieve alone through a more holistic understanding of human needs and aspirations.
Filling budgetary gaps
Outside universities, AI can democratize access to humanities knowledge through interactive courses, virtual museum tours, and language learning apps. And that’s probably just the start. Rather than treat a chatbot like a search engine, actually chat with it. Discuss humanities subjects with it. Download a paper, or even a book, and begin a discussion. This isn’t a substitute for a class with a brilliant professor, but most professors aren’t brilliant.
Inside universities, AI can offer intelligent tutoring systems that adapt to individual student needs, analyze vast collections of historical texts and artifacts, and generate engaging content for literature and language studies. AI can make professors more productive and effective.
James Pethokoukis is senior fellow at the American Enterprise Institute, writes the Faster, Please! newsletter, and is author of The Conservative Futurist: How to Create the Sci-Fi World We Were Promised.
Thank you for soliciting and sharing these reflections, which provide useful food for thought. To add an observation of my own: I think the contributions are focused a little too much on what we can or can't do with AI, and not enough on how we may be shaped by it. None of your contributors have been molded by AI's ready availability; they all come to these questions already formed by a deep education and long experience of thinking carefully about complex questions.

I teach at a small, non-elite, Christian university. I'm a political theorist, but at a small school I teach widely, cross-listing several courses and teaching in our general education humanities sequence. My students come to me as 18-year-olds. Few of them have received especially good educations in the past; few have been encouraged to develop a deep love of learning; those who are bright may have learned mainly how to succeed reasonably well with modest effort. (Of the exceptions to this general description, probably 75% have been homeschooled.)

What I most want them to take away from my classes (and those of my colleagues) is a love of learning, a sense that the world is a vastly interesting place, a desire to keep wondering and asking questions and exploring new things. They will remember relatively little of the specific content I teach them, but if they leave college with this, it will stand them in good stead for the rest of their lives.
I worry that easy access to AI will make this much harder. It has become far too simple for students to generate written work that is pretty good without ever doing the work of reading the assignments and engaging with them. The temptation not to think, and not even to want to think, is very powerful, and it is asking a lot of an 18-year-old to resist that temptation. As C. S. Lewis wrote in "The Weight of Glory," there are certain experiences that bring great rewards--his example was learning Greek--but rewards that are only available to those who have already become adept at the relevant activity. My experience so far--limited, obviously--leaves me afraid that AI is making it very easy indeed for students to "succeed" in ways that will leave those rewards forever unavailable to them.
Whenever I see an AI generated image, it leaves me cold. The lines of AI generated images are generally smoother than a photographed or painted image, yet there is a lack of depth in that very smoothness. I don't mean perspective depth, I mean emotional depth. AI images lack humanity.
So I would question the ability of AI to actually be able to detect the true depth of human creativity. Take the example used here of the symbolism of the yellow roses.
Dorothy L. Sayers, in 'The Mind of the Maker', notes that a writer often may not realize all the foreshadowing and symbolism that is in their work. They may make a creative decision for one reason, and only later will readers see another reason. This matches my own observation that when a work, whether in words, images, or sounds, is good, it often draws a whole host of interpretations that the original creator had not thought of during creation. So, will AI really be conveying what Wharton was thinking about the yellow roses, or only what readers think she was thinking about the roses?
The rose example reminds me of a story that a relative told me about why he doesn't like to read. In high school, he had to read 'The Great Gatsby' and then answer questions about the book. One of the questions was 'Why was the light on the dock green?' My relative's answer was based on what he knew from living in a harbour town - he suggested the light colour was a signal for boats. His English teacher flatly told him he was wrong - she had wanted an answer that gave a symbolic significance. Her reply, and the contempt for him conveyed in it, convinced him that the apparent subtleties of literature were not for him. Even if AI could generate sophisticated answers about the significance of yellow roses, if those answers are simply used to generate questions that make another generation of potential readers feel the machine knows better than they do, it will only drive them further from the humanities.