We're playing the role of King Thamus
Large language models could reform or transform education. Here's my stab at using ChatGPT in my seminar. I think it worked, but big questions remain.
In his Phaedrus, Plato recounted a visit of the Egyptian god Theuth to Thamus, the King of Egypt. Theuth had devised a new “branch of learning that will make the people of Egypt wiser and improve their memories. My discovery provides a recipe for memory and wisdom.” It was writing. The king was skeptical, and he told Theuth that though he was the inventor of “great things of art,” Thamus reserved judgment of “what measure of harm and of profit [Theuth’s inventions] have for those that shall employ them.”
Thamus rejected Theuth’s new technology of writing, saying, “If men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.”
King Thamus put an even finer point on it, too:
It is no true wisdom that you offer your disciples, but only the semblance of wisdom, for by telling them of many things without teaching them you will make them seem to know much while for the most part they know nothing. And as men filled not with wisdom but with the conceit of wisdom they will be a burden to their fellows.
A pretty devastating critique.[1]
I am wondering whether the rush of new AI products like ChatGPT[2] is Theuth’s new invention for us today. We play the role of King Thamus in this twenty-first-century reprise of the story, and we determine the implications of new AI/LLM bots.
* * *
OpenAI released ChatGPT last year on November 30, 2022. The fall semester was winding down, and rumors and fears began to rise. I had pre-holiday drinks with friends in mid-December, and we talked about how shocking the new thing was. “Just wait until the next version comes out,” I told them. I had seen brief reports and surmises about what was to come from OpenAI as soon as the first quarter of 2023.
What was unveiled on the last day of November was enough to raise eyebrows and kindle dread among teachers. Would a more powerful release shake some foundations of learning, we wondered. In early winter, writing, for one, seemed to be in danger of a lingering and cold death by automation. At least I was thinking that might be the case. With writing, practices of thought also seemed endangered.
I wondered what fall 2023 would bring to my seminar. Or, rather, what I could do beforehand to rejigger the mechanisms I used to teach, since writing and engagement with words were — and still are, I’m happy to say — central to my ways of teaching and learning.
I first thought of banning ChatGPT. My mates in December were leaning in that direction, since they felt that their assignments were out of ChatGPT’s grasp in any case.
I thought about shifting toward the spoken word: oral exams (the hated things), extemporaneous speeches, tutorials, one-on-ones, more and more conversations. These were never lacking in my seminar, of course, but they weren’t freighted with evaluation and, in any case, were often the offspring of thought trained by writing. Speech in a seminar should admit occasional half-hewn remarks, after all. Exchanges can sharpen and clarify a half-cocked comment.
I even thought about just hanging it up entirely. Just backing out of leading a seminar. Period. I had already committed to fall, though, so that would be dishonorable — not only to my colleagues, but to myself. A retreat based on a rumor, maybe a fantastic and nebulous specter? Yielding to a half-wit robot?
In June, a couple months before classes would begin, I decided avoidance, banning, or retreat were cowardly. If learning is bold, I figured, ChatGPT ought to become a subject of inquiry, not Something-That-Must-Not-Be-Named. Besides, managing ChatGPT and its successors is likely to be some part — maybe even a significant part — of students’ professional lives.
“Almost Weekly Letters” might reveal effects
Thoughtful writers gain a certain critical detachment from their own writing — usually a hard-won distance, often acquired by watching others’ critical eyes: The crusty editor. The teacher waving a red pen. A discerning parent. A trustworthy peer. Of course, exercises to help writers gain that valuable perspective have long been part of learning the craft of writing. (Teachers try to help foster that perspective with so-called “peer reviews.” It sometimes works.)
I decided to ask students to unpack some of the challenges that robotic utterance might pose to their own learning and development. I also realized that I wasn’t in total control. Students would be using ChatGPT, Bing, or whatever in other classes, and they didn’t necessarily honor bans anyway. In large part, my panicked response to ChatGPT expressed a fear of loss, not just of some of my teacherly tricks and trappings but of a facet of human uniqueness and value that’s right at the core of teaching and learning.
What if ChatGPT impoverished the ability to learn and to grow in mind and sense? What’s at stake for learning humans when writing and something resembling thought come from a computer?
I crafted three AI-assist assignments as part of my series of “Almost Weekly Letter” assignments. With these letters, I hoped to probe ChatGPT to help students determine who (or what) is in charge; what the bot adds, and whether that is sufficient; and when (if ever) its use is okay, and why. (The three letter assignments and related documents are available in PDF form at the bottom of this post.)
The first letter assignment featuring an AI bot used ChatPDF rather than ChatGPT. ChatPDF is actually quite inventive and, if it works reliably, a boon for dipping into large, complicated PDF documents. It has an interface very similar to ChatGPT’s, but instead of raking together probable answers from a large model, the tool focuses on a PDF that users upload — in the case of my letter assignment, a lengthy article by Roger Kimball called “Heidegger at Freiburg, 1933.” With the PDF loaded, users of ChatPDF then ask questions about the article, and the bot responds and offers links to specific areas of the article.
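How might such a tool work? ChatPDF’s internals aren’t public, but a retrieval-style design is a plausible guess: extract the text, find the passages most relevant to a question, and hand only those (with their page numbers) to a language model. Here is a minimal sketch in Python, with crude word-overlap scoring standing in for the embedding search a real tool would likely use; the filename and helper names are mine, purely for illustration.

```python
# A minimal sketch of how a ChatPDF-style tool might ground answers in an
# uploaded document. Hypothetical design: real tools likely use vector
# embeddings rather than the simple word-overlap scoring shown here.
from pypdf import PdfReader  # pip install pypdf

def load_pages(path: str) -> list[tuple[int, str]]:
    """Return (page_number, text) pairs for every page of the PDF."""
    reader = PdfReader(path)
    return [(i + 1, page.extract_text() or "") for i, page in enumerate(reader.pages)]

def top_passages(pages: list[tuple[int, str]], question: str, k: int = 3):
    """Rank pages by crude word overlap with the question."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(text.lower().split())), num, text)
              for num, text in pages]
    scored.sort(reverse=True)
    return [(num, text) for score, num, text in scored[:k] if score > 0]

# The retrieved passages, tagged with page numbers so the bot can "link"
# back to specific areas of the article, would then be packed into the
# prompt sent to the language model.
pages = load_pages("heidegger_at_freiburg.pdf")  # hypothetical filename
for page_num, excerpt in top_passages(pages, "What happened at the rectoral address?"):
    print(f"[p. {page_num}] {excerpt[:120]}...")
```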
The second and third letters used ChatGPT or a similar bot (some students chose Bing). The second letter used a tactic known as “A/B testing,” in which some variable differs between versions “A” and “B.” The test then notes the effects of that difference. It’s great for exploring market responses in advertising. For my purposes, the variable was when students used ChatGPT: either when they began their letter or after they had already completed a draft. The third letter built on the second and asked students to use ChatGPT (or similar) as a “co-author.” That assignment allowed freer-form use of the tool. For both the second and third letter assignments, students devoted some of their time to considering the experience of using the bot and reflecting on the complexities it may have introduced or helped to resolve.
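For the curious, the mechanics of the A/B split are simple. Below is a hypothetical sketch (not my actual process, and the roster is made up) of randomly assigning students to the bot-first and draft-first conditions and pairing them across conditions for the later discussion:

```python
import random

# Hypothetical illustration of the A/B split described above: half the
# class consults the bot before drafting ("A"), half only after completing
# a draft ("B"), and each "A" student is paired with a "B" student.
students = ["Ada", "Ben", "Cho", "Dev", "Eli", "Fay"]  # made-up roster
random.shuffle(students)
half = len(students) // 2
group_a, group_b = students[:half], students[half:]
for a, b in zip(group_a, group_b):
    print(f"{a} (bot first, then draft)  <->  {b} (draft first, then bot)")
```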
Should we use ChatGPT as an object of inquiry? Or have writing bots already moved beyond, er, question?
If the letter assignments ended up only on paper or, in this age, in a PDF, the whole thing would be fruitless. Seminars are built on well-considered talk, and though the writing part of the assignments helps with the “well considered” part, discussions needed to follow. I devoted between a quarter and a half of a class session to discussion of the letters. Students first met with the classmates I had paired them with, and then we convened as a group to share larger issues and discoveries.
The second assignment with the A/B testing obviously required discussion, since students experienced either “A” or “B” and the test itself emerged only when “A” and “B” could come together in exchanges over the seminar table. (I had paired students so that one had done “A” and the other “B.”)
The letter assignments culminate in reflection on the experience of using the AI bot. This is critical to the success of the assignments, and it has become a standard part of others’ courses that use bots in writing.
There are at least two kinds of reflection that instructors ask students to do. One is a matter that appeals to dissection and “reverse engineering”; students take their experience of using a writing bot to investigate how it might have been constructed — its underlying data, the readerly audience that its output assumes, and the like. For example, Ryan Cordell’s “Writing with Robots” course at UIUC includes a “Collaborative AI Paper & Audit” that requires “reflection”: “Finally, you will draw on your prompt engineering and annotation to write a critical reflection … that use[s] your experiences to theorize about the language model itself, and its applications to scholarly or critical writing,” Cordell writes. “Essentially, you are using the iteration and probing of this assignment to interrogate the ‘black box’ system.” Such reflections expose a writing bot’s inner workings and its artificiality, too — especially since students identify and explore a bot’s shortcomings, its apparent “strengths,” and, I think, glimpse the void at the center of its concocted prose.
Another form of reflection particularly intrigues me and, frankly, challenges my abilities as a teacher. How does the technology change (or not change) human capacities and definitions of thinking, of creativity, of work, even of “happiness”? The reflection, in a sense, asks students to watch themselves using writing bots like ChatGPT and note the subjective implications of using them. This is a very hard task, but I think part of success is just giving it a whirl: the attempt itself awakens powers of self-observation. Others are trying to incorporate this species of reflection in class as well. Stanford’s “Future Text: AI and Literatures, Cultures, and Languages” lists “learning goals.” Number one on the list: “To learn not just the basics of how AI works, but its broader social, cultural, artistic, and philosophical implications.”
In my seminar, writing about — or with — an AI bot like ChatGPT in the ways that the letter assignments directed helped create a critical distance between the writers and the tools. The assignments served as exercises in metacognition, creating enough of a distance that students might tease out where bots could help them think and where they were, well, just nuisances. Beyond the workmanlike judgment of ChatGPT or the like as tools, the exercise could shine light on the subjective processes of human thought.
In a sense, the letters sought to create a mirror in which students might see, however dimly, the ways that a writing tool shaped and cajoled their own thought.
Teachers can easily focus on the tool-like qualities of ChatGPT (or similar) and locate its utility in what it spits out. (See this post from Ethan Mollick for an example.) But I think that tool approach might not be ambitious enough. As an object of inquiry, the little bot might help writers reveal to themselves the value of writing for human matters of understanding and judgment.
So, what did the students say? I didn’t take a poll, but I did observe growing skepticism about ChatGPT and Bing (the two bots that students used the most, with ChatGPT 3.5 being the prevalent choice). Students usually identified the bot’s confabulations, though a couple of them were cowed by chat prose. And they perceived the superficial flatness of much of its prose.
At the close of the discussion of the third and last bot-assisted letter, I asked whether the students would use ChatGPT or whatever in future classes. About a third categorically rejected bot “help”; the rest would accept help from the bot, but most said they would be selective. In a class of engineers, pre-meds, computer scientists, and “undecideds,” most said that bot use would be mainly “to help with coding problems” and similar formulaic tasks for which bots seem to be most helpful. Writing that required critical thought would be their own, for the most part, though they’d turn to ChatGPT to broaden their approaches. Maybe.
Are ChatGPT and similar bots mere “tools”?
I began this post by looking way back to Plato’s Phaedrus and King Thamus’s condemnation of writing. The technology of writing, goes the argument, would weaken human memory and change the objects and processes of learning.
The real questions about AI and LLMs bear some resemblance to the excerpt from the Phaedrus. We don’t yet know, for sure, whether these new technologies will change human thinking. Will we lose something in the deal, just as King Thamus surmised about wisdom and memory succumbing to the then-new technology of writing?
Depending on how we answer the question of what impact ChatGPT and similar bots have, our stance as teachers shifts. If the technologies are tools, the way forward for teachers is perhaps more comfortable and familiar: we teach ourselves and our students how to use them expertly. But if the technologies are something more — a replacement of human abilities, a marked cognitive “enhancement,” perhaps — the ground shifts more profoundly.
Mike Loukides leans toward the “tool” interpretation in an article published by O’Reilly Media and demarcates human knowing and the utility of ChatGPT. You apply your knowledge in order to use the tool, and the two are separate:
To be really good at prompting, you need to develop expertise in what the prompt is about. You need to become more expert in what you’re already doing—whether that’s programming, art, or humanities. You need to be engaged with the subject matter, not the AI. The AI is only a tool: a very good tool that does things that were unimaginable only a few years ago, but still a tool. If you give in to the seduction of thinking that AI is a repository of expertise and wisdom that a human couldn’t possibly obtain, you’ll never be able to use AI productively.
But the other side — the side of Thamus, King of Egypt, as reported in the Phaedrus — nags many teachers (including me). Through the invention of writing, the king told god-man Theuth, “it is no true wisdom that you offer your disciples, but only the semblance of wisdom, for by telling them of many things without teaching them you will make them seem to know much while for the most part they know nothing.”
In one of the many email exchanges I’ve had with others on the topic of ChatGPT in the classroom, my colleague Michael Faber put the dilemma eloquently, even though a clear way forward eluded him: “It is getting harder to separate out whether we should be looking at ChatGPT and the like as ‘a technology to be studied, etc.’ in the same way that you do in your class …,” he wrote. “Or whether we should already be rethinking the underlying notions of teaching itself in light of what may turn out to be world-turns-upside-down level change” (emphasis in the original).
He thinks that something big is afoot in teaching and learning, he said in a later email. I have to agree — something maybe as big as the difference that Plato recounted in his Phaedrus.
A shift of human capacity, occasioned by a technology, a change at the “world-turns-upside-down level.”
Got a comment?
Tags: chatgpt, ai, writing, teaching, learning, critical distance, thinking, thought, letter, technology, plato, bing
Links, cited and not, some just interesting
He thinks it’s a tool, not a mind-bender. Loukides, Mike. “Prompting Isn’t The Most Important Skill.” O’Reilly Media, October 17, 2023. https://www.oreilly.com/radar/prompting-isnt-the-most-important-skill/.
A thoughtful article from friend and colleague Michael Faber. Faber, Michael. “Learning How to Learn with ChatGPT.” Innovation Co-Lab, November 7, 2023. https://colab.duke.edu/blog-post/learning-how-learn-chatgpt/.
NPR One. “Q&A: To Find out More about AI, an NC State Professor Asked His Students to Cheat,” November 6, 2023. https://one.npr.org/i/1210922189:1210922215. You can read it here: https://www.wunc.org/education/2023-11-06/nc-state-ai-chatgpt-cheating-fyfe-interview.
Dwyer’s essay provides a picture of the larger context of the writing world, with an interesting note on AI writing. Dwyer, Kate. “Has It Ever Been Harder to Make a Living As An Author?” Esquire, November 8, 2023. https://www.esquire.com/entertainment/books/a45751827/make-a-living-as-a-writer/. “[Ayad] Akhtar has a close friend (a creative executive) who has been using artificial intelligence to write scripts for the last six months. ‘I read a script they had outputted about two and a half months ago. It was hands down the most compelling TV script I’ve read in a long time,’ he said. Not because it was good, but because, he said, ‘It had my number in the same way that the iPhone has my number. I was turning the pages even though I had no real understanding of why I cared.’ ”
MLA-CCCC Joint Task Force on Writing and AI. “Working Paper: Overview of the Issues, Statement of Principles, and Recommendations.” AI and Writing. Accessed November 11, 2023. https://aiandwriting.hcommons.org/working-paper-1/.
Another stab at an AI-writing course at Stanford. “Future Text: AI and Literatures, Cultures, and Languages.” GitHub, November 4, 2023. https://github.com/quinnanya/future-text.
A broader historical view of recent development in AI. Gibson, Richard Hughes. “Language Machinery.” The Hedgehog Review, Fall 2023. https://hedgehogreview.com/issues/markets-and-the-good/articles/language-machinery. “Generative artificial intelligence is a headspace and a technology—as much an event playing out in our minds as it is a material reality emerging at our fingertips. Fast and fluent, AI writing and image-making machines inspire in us visions of doomsday or a radiant posthuman future. They raise existential questions about themselves and ourselves.”
[1] Of course, the fact that Plato captured the words in writing is ironic, as he no doubt was smart enough to recognize. We readers of his writings can thank him for the effort, and we can interpret his decision as ironic, too.
[2] I use “ChatGPT” as a generic term for LLMs in this article.
I really appreciate the thoroughness with which you documented your experiment. I’d love to see something similar applied well down into elementary schools, when students are first learning to think and write. I wonder how early kids will learn to use ChatGPT and whether they will quickly approach it just as we did books when we were young, as simply something to test ourselves against.
Mark, this is a really thoughtful (and thought-provoking) piece. I think you probably already know where I’m going to land on the mere-tool vs. more-pervasive-technology question, and the Phaedrus discussion is a good parallel to consider. In many ways, if we think of any tool as a “mere tool,” we’re kidding ourselves. Of course, some tools shape us more or less than others, but I can’t help but think that LLMs are going to be on the “more” side of the equation because of the interface.
I think LLMs and AI tools more generally will be more impactful because of the way that we directly interact with them and how they connect directly with our writing and thus, in a way, with our thinking.
Really good stuff here and I love the way that you prototyped some of these things in your seminar this semester. Bravo!