Book review: John Warner's More Than Words: How to Think about Writing in the Age of AI
In taking on AI, "The Biblioracle" guides us back to solid ground on writing and reading, revealing that he is a conservative at heart. An enjoyable and lively book.
Warner, John. More Than Words: How to Think about Writing in the Age of AI. New York: Basic Books, 2025. 320 pages. ISBN 978-1-54160-550-3. $30.00.
The “revolutionary” and “disruptive” technology of Large Language Models (LLMs) deepens a continuous thread that John Warner has woven into his work on education and on writing in particular. That continuity reveals an essential and refreshing conservatism in his thought—and conservatism is not the first word that comes to mind when you think of John Warner. But he has persisted in his belief that writing and reading are far more than today’s schooling recognizes, or at least has built into its teaching routines. Reading and writing are essential practices that nurture the human spirit; they are not merely building-block skills or by-products of correct spelling, grammar, and adherence to templated prose.
In More Than Words: How to Think about Writing in the Age of AI, Warner presents his truer and more expansive view of writing in light of AI. The book arrives in bookstores on February 4.
Warner claims that instead of offering new technologies for education, LLMs and “artificial intelligence” actually clarify a challenge to reading and writing, to their place in education, and, ultimately, to what it means to be a thinking human being. In the book, Warner takes up that challenge as his fight: to preserve the intellectual and spiritual furniture that AI has been scooting around or plucking from the rooms of human imagination, thereby reconfiguring and undermining essential features—and life-giving powers—of human thinking. More Than Words shows John Warner in that house of intellect, shoving around and recovering the essential, life-giving furnishings of thought so that human beings can continue to have a livable home in a world with AI. (He uses this furniture metaphor in his introduction, by the way.)
“That’s what I’m hoping to do here, to make a bit of a mess,” he tells us in his introduction, “but in so doing, I’m also hoping to make a case that it is vital to maintain our humanity even as we make use of these new tools of artificial intelligence.” Throughout Warner’s argument, “maintaining our humanity” means preserving and reinvigorating deeper values of writing and reading that enable thinking and fundamentally reinforce humanistic values like freedom and dignity.
More Than Words is aptly titled, for Warner explores an understanding of writing and reading beyond the ken of even an LLM with infinite “tokens.” But also, the title subtly hints at a threat: writing and reading are more than words and so surrendering the human practices of reading and writing to lifeless technologies of mere wordings risks losing the more that springs beyond the living words of humans.
That more makes up some of the most treasured and hard-won qualities of being human.
I’ve noticed before that Warner’s style leans toward the manifesto, drawing its power from defining a challenge and rhetorically moving people toward a resolution—which, in truth, is what any argument does. But a manifesto sharpens its edges, drawing force from emphasis and, in some cases, shrillness. Readers of Warner’s newsletter are already familiar with this intensity (if not also his occasional crankiness). Not content with mere description or elucidation, the manifesto urges action. But another declarative form also operates in Warner’s writing: the credo, a statement of belief, an assertion magnified into a truth. More Than Words has elements of both manifesto and credo, and Warner’s choice to deploy both rhetorically amplifies his message. It is a good mix.
What’s this “more”? And John Warner’s conservatism
The book has four parts, ordered to support his argument that culminates, manifesto fashion, in a call to arms.
Part one contrasts artificial intelligence with human thought, describes what ChatGPT (which Warner uses as a blanket term for LLMs) actually does, and lays out the uncomfortable underpinnings of recent AI development: hijacked online sources, exploited labor, and environmental damage. Warner also considers his “personal history of the automation of writing,” showing that he is not opposed to technological tools for writing. But Warner claims that LLMs endanger reading and writing by replacing the activity of writing instead of serving the writer as tools. They spew “results” that look like the products of a writer, but such results, Warner reminds his readers, are not the essential products of writing.
“With ChatGPT, … while the end product of the output—strings of syntax—bears significant similarity to what a human may produce, the underlying theme is quite different,” Warner writes. “Fetching tokens based on weighted probabilities is not the same process as what happens when humans write.”1
Part two shifts the focus from LLMs to humans. Its chapters implicitly contrast with the description of ChatGPT’s “writing,” stressing instead that for humans “writing is thinking,” “writing is feeling,” and “writing is practice.” Thinking, feeling, and practice in effect make up a world that is “dynamic, useful, and uniquely human”—and, as Warner notes, also not reducible to rote exercises or, for that matter, to statistical weights and connections in a language model. Warner uses the term “embodied” throughout the book to frame the human activities of thinking and feeling, and in this part he describes writing and reading as embodied activities, which itself keenly distinguishes the language-like outputs of ChatGPT from human language. LLMs do not have bodies, and embodiment matters. His conclusion:
Only humans can read. Only humans can write.
Don’t let anyone tell you otherwise.
Part three dives more deeply into the ways that writing robots threaten, or at least profoundly influence, schooling and the market and profession of writing. This part of the book includes two chapters that I think will especially interest writers: “Content vs. Writing” (chapter 14) and “On the Future of Writing for Money” (chapter 15).
I’ll discuss part four a bit more fully below.
Why I call John Warner “conservative.” (And, John, it’s a good thing.)
Warner claims, correctly I think, that the attraction of using LLMs for “writing” has arisen from an abasement of what schools and society consider “good writing.” Rather than mitigating and repairing that abasement, LLMs deepen it.
That is, LLMs are a serious symptom of, not a cure for, the problem of “why they can’t write.”2
I’m quite sure that my labeling of Warner as a “conservative” will amuse him (perhaps just after he snorts with disgust). Here’s what I mean: John Warner wants to conserve and protect human thought and, beyond that, the human qualities that allow ideals such as freedom, self-determination, and meaning to flourish. Writing enables thinking and invention, and Warner fears that AI will short-circuit whatever learning could take place. More Than Words lays out Warner’s approach to preserving and strengthening the human values and powers that the humanities have revealed and promoted through reading and writing.
In a sense, he’s a classic conservative when it comes to writing and reading, and his views cut against the grain of many practices that have come to define schooling in the U.S. Most notable, and perhaps most infuriating, is the reduction of writing to “templates”—the “Five-Paragraph Essay” being the most notorious—and to thoughtless adherence to prescriptive and often arbitrary rules. The appeal of these dead-end approaches? Their “standards of good writing” cancel out deeper writing practices for the sake of testing, quantification, and classification.
Warner counters schooling’s bureaucratic urge to simplify: “If we’re going to confront what ChatGPT means for the kind of reading and writing students should be doing in school, we have to get down to the root values of what is important and meaningful about the writing we ask students to do.” That search for “root values” goes way beyond knee-jerk “back-to-basics,” and that spirit allows me to label John Warner as a “conservative.”
John Warner is conservative because he happens to think that being a free and productive and thoughtful human being is something worth preserving.
So how does Warner suggest we resurrect the power of writing, and conserve the human powers that come with it, in the face of AI? He addresses the question especially in two sections of the book: one, in part three, focused on writing and teaching, and the other, making up the final section of the book, offering guidance on how to consider and manage AI more generally.
In both cases, the task amounts to finding ways to limit encroachments.
“There are three principles that can help us think about how to make ChatGPT largely irrelevant to the work we ask students to do and make choosing to outsource work to it a less attractive proposition for students,” Warner writes. The principles are not administrative reductions (“Thou Shalt Not…”) or punitive or simplifying tricks for the sake of testing. Rather, they help guide teachers as they search for ways to make learning worthwhile. Warner formulates the principles this way:
Foster engagement through appreciation.
Build rigor through authentic challenge.
Make learning visible through reflection.
(Many teachers yearning to shed the shackles of teaching-to-the-test already put these principles to work however they can.)
Warner’s chapters on teaching are lively and, for me as one who has integrated writing into learning whenever I’ve taught, reassuringly sensible, even “natural.” Warner avoids listing precepts and recipes, preferring to relate stories that are vivid and telling. Many of his observations in the chapters devoted to “Writing in the Classroom Today (and Tomorrow)” (chapter 12) and “Reading Like a Writer” (chapter 13) come from exchanges in the classroom and encounters with students. The approach gives flesh to the bones of pedagogy, and I found that the scenes Warner relates and examines in his book happen in some form in my own classroom, too.
Although Warner maintains a focus on writing throughout the book, he chooses in its final chapters to take a broader view of AI and its presence in society. Part four rings the tones of a manifesto more forthrightly than the previous parts. Warner’s thinking continues to relate to teaching and writing, but his scope widens to include human qualities that writing practices give rise to and nourish.
The final part of the book points to human abilities that can place AI into a proper role, if it should have any role at all in our lives. Chapter titles reflect this: “Resist, Renew, Explore” (chapter 17) followed by three chapters bearing one of the words in that title.
Despite the tsunami of hype emanating from Silicon Valley, human beings still have power. Warner reminds us of truer human powers and urges us to resist the seductions of appearance and efficiency in artificial intelligence “by recognizing what I think are deeper values attached to writing and reading that must be preserved not for the sake of nostalgia but because they are significant to being human.”
In many ways, More Than Words calls for a deeper and more vital literacy.
Tang, wheelless horses, technologies and words
I was about halfway through More Than Words when I remembered another writer who considered “the technologizing of the word.” The memory surfaced when Warner recalled his childhood love of Tang, the drink of astronauts. He got over it, but not before a phase of hounding his mother to buy it; orange juice won out on taste. Warner turns Tang into an analogy:
But what if you’d never had orange juice in the first place? You might not recognize the trade-offs of going with Tang. If you value cost, speed, and efficiency, Tang it is. But if taste and nutrition are the thing, orange juice is the superior choice.
Just in case my analogy is not clear, ChatGPT is or at least may be Tang. I don’t think we want a world where all we have is Tang, but we may stumble into that reality before we recognize it’s happening if we’re not careful and thoughtful.
And that made me think of Walter Ong.
In 1982, Ong’s Orality and Literacy: The Technologizing of the Word came out. I vaguely recalled a story that Ong used to clarify the profoundly clear edge separating oral cultures from literate cultures. I dug it up:
Imagine writing a treatise on horses (for people who have never seen a horse) which starts with the concept not of horse but of ‘automobile’, built on the readers’ direct experience of automobiles. It proceeds to discourse on horses by always referring to them as ‘wheelless automobiles.’ … Instead of wheels, the wheelless automobiles have enlarged toenails called hooves; instead of headlights or perhaps rear-vision mirrors, eyes; instead of a coat of lacquer, something called hair; instead of gasoline for fuel, hay, and so on. In the end, horses are only what they are not. No matter how accurate and thorough such apophatic description, automobile-driving readers who have never seen a horse and who hear only of ‘wheelless automobiles’ would be sure to come away with a strange concept of a horse.
Ong related the story to his topic of “orality” and literacy: “The same is true of those who deal in terms of ‘oral literature’, that is, ‘oral writing’.” Using the framework and powers of literacy to describe something existing outside of or even instead of literacy introduces “serious and disabling distortion … putting the car before the horse—you can never become aware of the real differences [between orality and literacy] at all.”
Ong, of course, sought to clarify the distinction and examine the influences of the “technology” of literacy, and his book does that well for literary studies. But the reason I thought of Ong’s story of “wheelless automobiles” and horses was because Warner suspects that AI in writing could create a similar gulf between a literate world and whatever disembodied and contextless AI technologies conjure to replace it. The transformation of human powers, which the technologies of reading and writing have created and magnified, can be disrupted. The question in its most radical form becomes: “Are we willing to sacrifice what Warner calls ‘deeper values attached to writing and reading’?”
Perhaps the brave new world of AI’s “technologizing of the word” would leave human thought intact, more or less. But if the deeper values are not conserved and protected, we could live through an unthinkable loss of thought.
Warner characterizes the “intelligence” of today’s AI in schools as mostly dorky costumes deployed for “academic cosplay.” He writes, “Students who have exclusively engaged in academic cosplay will have no reference point from which to understand how to produce original insight. They will literally not know how to think.”
Links, cited and not, some just interesting
First published in 1982, Walter Ong’s book considered how the shift from an oral tradition to written literacy changed human thinking and society. Ong, Walter J. Orality and Literacy: The Technologizing of the Word. Reprinted. New Accents. London: Routledge, 2009.
Recommended: Over the past year or so, Warner’s newsletter included early ponderings and drafts that made their way into More Than Words.
Uh, oh. Here come the robots. “AI systems that can conduct research with the depth and nuance of human experts, but at machine speed. OpenAI's Deep Research demonstrates this convergence and gives us a sense of what the future might be.” A post today from Ethan Mollick. It’s worth asking who, if anyone, learns from this kind of research.
AI companies have used human-like metaphor to make their products appealing, and in the process they have allowed their machines to lay claim to human attributes. They garb their chatbots in the outward trappings of humanity: Chat-like exchanges are chummy and use a friendly, even sultry, voice. They respond with anthropomorphic vocabulary (think, feel, apologize) and even the pronoun I—even though no LLM can think, feel, apologize, or, much less, experience regret. There is no I there. Chatbots were designed to trick people into anthropomorphizing them. And the trick works. By casting statistically related words into a language-like syntax, the machines prompt humans to weave meaning into their outputs. Warner quotes Emily Bender of the University of Washington: “A very key thing to keep in mind here is that the output of these systems [doesn’t] actually make sense. It’s that we are making sense of the output. It’s very hard to evaluate them because we have to take that distance from our own cognition to do so.”
"Academic cosplay" is an apt description. We need reference points. From my experimentation with AI, I would say it produces "writing" that is as nourishing as eating cardboard. But I suppose if you've never eaten decent food, eating cardboard would stave off hunger pangs.
Great review of the book More than Words! I have appreciated Warner's take on AI and agree that it *should* force us to think carefully about what writing means and does now. AI makes the production of text easier for students, and so a countervailing force can be to emphasize the challenge, rigor, and deep intellectual engagement of writing and human connection.