The "writer" at your side, helping ... or not
It's inevitable. The ChatGPT/AI essay. Here's mine, flavored with the thinking I'm doing about student writing in my fall seminar.
“The pace of AI is absolutely insane,” one of my seminar guests wrote in an email to me in May, “if I tried to prep for your course now it would be outdated by the fall!” That was a bit over three months before the start of the fall semester, and I had already begun tweaking my syllabus. I quickly realized how much I had to update from last year’s seminar and how difficult it was to make out what would be relevant and useful a mere three months in the future—in August 2023.
When I look back three months, I, too, am amazed by the pace of developments in large language models (LLMs) and AI—a pace no doubt supercharged by competition in the industry. I’m also surprised by the rapid shift in attitudes and concerns about AI, even among leaders in the field.
A case in point: Cade Metz wrote about Sam Altman, OpenAI’s CEO, in March 2023: “He said his company was building technology that would ‘solve some of our most pressing problems, really increase the standard of life and also figure out much better uses for human will and creativity.’” But Metz also reported that Altman worried that AI could disrupt and do “serious harm.”
Kelly Sims, a board advisor to OpenAI, commented, “In a single conversation, he [Altman] is both sides of the debate club.”
In May—just a couple of months after Metz’s article appeared in the New York Times—Altman seems to have darkened his forecasts. He testified to the US Senate subcommittee on privacy, technology, and the law: “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening.”
That is a far cry from Eliezer Yudkowsky’s frantic opinion piece in TIME magazine, where he claims that nuclear war is less threatening than large-scale AI. “Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange,” he writes, “and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.”
I’m not surprised that we have little clarity and much noise on how AI will fit into (or disrupt) human society and culture. We don’t know how it works on a deep level, we don’t know its capabilities or the “edges” of its powers, we don’t have an agreed-upon vocabulary to discuss AI, and we have few useful analogies for AI and its ilk that shed more light than confusion. The murk will persist, but the void will be filled—with hype and “criti-hype.”
No wonder I’ve held off writing about ChatGPT and AI recently, except for ruminations on empathy and our reception of thinking-like utterance, the accelerated pace of development, and “SALAMI.” The technology will indeed remake much, even though we don’t know exactly what will happen, which is all the more reason to wrestle with the topics.
Now, as I shape the fall seminar on “our complex relationships with technology,” I’m wondering how I need to remake plans in a manner that preserves AI as a subject to examine, rather than allowing AI to turn us into its subjects.
Things have changed a lot since last fall, and I’m relieved to be able to focus on a set of challenges that, for now at least, don’t include protecting the world from a ravaging AI.
Learning and teaching with AI
Among many university teachers, the curve of discussion and change follows the path you might expect. Rejection and denial at first. Diddling around, some irritation and rage. Depression. Resignation to the “inevitable.” Acceptance. A lot like the path the dying take toward acceptance of their end, I guess, and it’s a process that recurs with every new challenge and change. AI is just a recent example.
In the matter of “what to do about ChatGPT” I’m not yet at “acceptance,” but I’ve danced around the other steps. I typically devote the last class session to a thorough evaluation of the seminar and a peek at changes I am thinking about for a future run, and I did so again last December. Back then, I sensed that GPT-3 had changed the context for learning and that GPT-4, which we then expected early in 2023, would cement the changes. As has been my practice, I’ve used writing to help unlock topics and as a tool for exploration as we’ve gone through the weeks of the seminar, so I expected the challenges of GPT-3 (and following versions) would be significant.
What if GPT-3 and its successors changed what it meant for students to write? What if the technology stripped writing of its power to focus and transform “merely human” thought, or at least substantively altered that power?
In that last class session, I floated the idea of de-emphasizing the written word and, with it, the role it played in helping students (and me) think. Looking back, I see the idea was an evasion, even a cowardly way to avoid confronting GPT technology. Most students didn’t think much of the idea, even though all of them struggled at some point to keep going with their writing.
Adversarial examination, messy but (often) revealing
My stab at evasion amounted to a denial that GPT and its ilk warrant any response in a writing-intensive seminar. Worse, it shrouded the emerging technology and assumed that students taking part in the seminar would be able to avoid it in their later lives.
In the months since that last class session, it’s become clear that LLMs and AI will become as entwined in work life as the computer, the spreadsheet, and the word processor are today. Soon AI will become a component of, say, Microsoft Word, so entirely woven in that “writing” might devolve into mere “prompting” and evaluating (or not) the results coughed up by the daemon in the machine. Indeed, writing might diminish in importance for professional life. Last week Salesforce announced new AI services that automate chunks of sales and marketing activities.
Writing is more than the production of words, as John Warner (aka biblioracle) I think correctly points out.1 I’m with Warner and many others in thinking that writing isn’t entirely—or maybe even mostly—involved with the mere production of words. It’s tied to the development of thought, a process of and for thinking. It is distinguished by qualities, not evaluated by quantity—the number of words committed to a page, for example. Despite what writers sometimes say, production of words per se isn’t the main signal of success in writing, though writing as a practice requires production of many, many words. Lots of them stillborn, too.
The more you look at ChatGPT, the more you see that the technology is not “writing”—even though our perception of ChatGPT output lazily categorizes it as “writing.” Using ChatGPT has risks, and not just because ChatGPT “hallucinates,” a word that misleads, suggesting a mind that has departed from reality or is capable of conjuring an alternate one.2 The deeper and more useful truth is, as Warner puts it, “machines assemble syntax” but they don’t create sentences. They do so in industrial-sized, mechanically efficient bulk, too. Engineer and entrepreneur Rodney Brooks said of GPT that “what the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.”
We often fail to remember the distinction between seeming and being, and we can slip up and count ChatGPT output as “sentences.” Sentences are the thing in writing, aren’t they? My Latin is rusty, but I’m enough of a classicist to remember that sententia also means wisdom, an appropriate thing to echo in a word so central to writing.
Can we examine ChatGPT in relation to human writing? That is, can we look at it more skeptically—more distantly, in a sense—and without rushing to label it as a “tool” or a “co-author” or a “time-saver”? Given some serious study of the products of ChatGPT, can we learn something about writing as a product and a prodder of human thought? Can we learn something about a possible and responsible use of the technology? Can we learn about pitfalls and traps that the technology, as seductive as it is, sets in the paths of writers and thinkers?
Exploration. Analysis. Creativity. Those are things that shouldn’t be short-circuited by what today amounts to a sophisticated parlor trick. But is it possible to use the parlor trick well in service of human intentions and human development?
That’s the question, because our students will almost certainly use these tools through much of their working lives. It’s the question teachers pose when they consider using ChatGPT and other AI “assists” in the classroom.
Three, I think, fundamental questions. (There are others, I’m sure.)
Some might think I’m being too precious and nit-picky, but I think about writing circumstances that usually fall below the radar. These are assumptions that writers rely upon. They’re woven into the order, the context, and processes of writing. I think that ChatGPT and its ilk modify these circumstances, perhaps to an extent that changes the whole nature of the writing enterprise as a tool of clear thinking. Whether these changes enhance or degrade human thought is still up in the air.
Three biggies, for me:
The power of the first draft. Words can sometimes trap us, especially those of us who are learning ways to master prose and unpack language. The “shitty first draft” (Anne Lamott) is a milestone for writers for many reasons, but the two reasons that I think are most powerful are 1) the draft comprises a whole, however faulty or “shitty,” and 2) the draft shifts the writing problem, its focus, and its process. Where attention was paid to creating a product on a page ex nihilo (or at least from many sources and as a product of much thought), once a draft is to hand, the draft itself becomes the main subject. It is, literally, a central object of the writer’s attention. That makes the first draft powerful in shaping a whole writing project.3 The draft steers thought; writing shifts toward assessment, improvement, “tweaking,” re-writing.
A question: What happens when ChatGPT creates the draft? How does that change the nature of the human acts of writing? How might it improve or degrade the service that old-fashioned writing offers us — the service of clarity, definition, and, yes, sometimes even eloquence?
Small-step, directed thinking. Related to the power of first drafts is the micro-manner of my own thinking-through-writing. I don’t know that others share this experience, but it happens as I write. My prose emerges in short hops—small steps, sentence-like, built together and sometimes loosely, sometimes tightly woven. In my experience, a first draft doesn’t just happen as a complete thought, like a shining image popping into view. Rather it creeps into view imperfectly, the product of many erasures and missteps. In some ways, this incremental unveiling of thought could be a source of the frustration that most writers face in their work: they seek the whole draft, not the drip of ideas that feels like mere leakage. They see a “blockage” where they hoped a fluency might have been.
By small steps my writing helps me think. It ain’t easy. Ever.
A question: How might an AI like ChatGPT replace or transform those small-step habits of mind? Is my thinking transformed by having a 500-word “result” appear seconds after I submit a prompt?
Why do we say “please” in a prompt to a machine? Is there a requirement or at least a benefit to having another human reading and thinking about a writing project? What happens when that Other is an AI that displays some ability with language but that doesn’t know, doesn’t intend, doesn’t feel? I’ve marveled that people often prompt ChatGPT politely and use the word “please.” The manner is telling: it suggests that people feel a human connection with a seemingly polite but utterly inanimate “voice.”
A question: How might that assignment of a subjective reality to ChatGPT shape a human writer’s identity? How does it change the act of writing, especially when the AI interlocutor produces results from statistical associations that lack truth value or even intent?
So how will I change this fall’s seminar?
There is, of course, a danger that bringing ChatGPT and other LLMs into writing both unjustifiably legitimizes their use and teaches bad habits of thought and writing. But given that ChatGPT and its successors will become part of life for today’s students, ignoring them is probably irresponsible. Discovering and encouraging responsible use is more important.
So, yes, my students this fall will use ChatGPT, but they’ll also be tasked with examining it as a problematic technology, and they’ll need to identify and justify choices they make when they use it.
I’m not convinced ChatGPT is a useful tool, yet; but it will be good to enunciate the ways that it changes writing and thinking. I’m hoping that assignments in the class will give students a chance to see writing with and without the application. I think that together we’ll come upon uses that are appropriate and uses that are not. (I also suspect that ChatGPT will make us approach writing as “fact-checking” in a rather dismaying way.)
Postscript
One thing that I noticed as I put this post together: the topics of ChatGPT, LLMs, and AI seem to defy satisfying conclusions. That might not be what readers want—a settled matter is happier, perhaps—but I think the way I conclude reflects the ambiguity and the unsettled quality of the technologies.
We don’t know what to make of these fast-developing technologies, really. And it’s hard to feel okay with that right now.
Links, cited and not, some just interesting
Previous posts relate to this topic and to my experience and plans for teaching: I prefer Authentic Intelligence (April 2023), I make a numerical mistake. Then I think about two rivers (February 2023), Technological rubble (October 2022; guest post by ), Start with a talk. Draw a picture. Grow (September 2022), Seminar and silence (August 2022), Salons, Twitter, seminars, and guests (May 2022), Writers first, then students (April 2022), Learning and “sorting” (January 2022).
AI is making movies now, too. Type in a prompt at runwayml.com, and the service will return a short video. Like Midjourney, but it moves!
Metz, Cade. “The ChatGPT King Isn’t Worried, but He Knows You Might Be.” The New York Times, March 31, 2023, sec. Technology. https://www.nytimes.com/2023/03/31/technology/sam-altman-open-ai-chatgpt.html.
Kang, Cecilia. “OpenAI’s Sam Altman Urges A.I. Regulation in Senate Hearing.” The New York Times, May 16, 2023, sec. Technology. https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html.
Yudkowsky, Eliezer. “Pausing AI Development Isn’t Enough. We Need to Shut It All Down.” Time, March 29, 2023. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/.
A useful and early take on using LLMs in the college classroom. Fyfe, Paul. “How to Cheat on Your Final Paper: Assigning AI for Student Writing.” AI & Society, March 10, 2022. https://doi.org/10.1007/s00146-022-01397-z.
Rao, Venkatesh. “Life After Language.” ribbonfarm, May 4, 2023. https://www.ribbonfarm.com/2023/05/04/life-after-language/.
ChatGPT is a great source for automated and refined bullshit. Harry Frankfurt talks with his Princeton University Press publisher about bullshit, which he examines philosophically in On Bullshit (Princeton, 2005). It is amusing to see a serious philosopher and an academic editor musing about, and saying the word, bullshit. I have considered using Professor Frankfurt’s concise book as a text in class. On Bullshit Part 1. Princeton, NJ, 2005. (There’s a part two as well.)
An interview with Rodney Brooks, with comments on ChatGPT and LLMs, robotics, and self-driving cars: Zorpette, Glenn. “Just Calm Down About GPT-4 Already.” IEEE Spectrum, May 17, 2023. https://spectrum.ieee.org/gpt-4-calm-down.
I have loads of other references, and if you want them I’ll send them to you. Just email me at technocomplex@substack.com.
The circumstance of business communication may become even weirder, since humans may become a sideline or “last-mile artifact” — a diminishing target of communications. Venkatesh Rao, a self-proclaimed “AI accelerationist,” put it this way: “There is no good reason for the source and destination AIs to talk to each other in human language, compressed or otherwise, and people are already experimenting with prompts that dig into internal latent representations used by the models. It seems obvious to me that machines will communicate with each other in a much more expressive and efficient latent language, closer to a mindmeld than communication, and human language will be relegated to a ‘last-mile’ artifact used primarily for communicating with humans. And the more they talk to each other for reasons other than mediating between humans, the more the internal languages involved will evolve independently. Mediating human communication is only one reason for machines to talk to each other.”
Part of the reason we’re having so much discussion about ChatGPT and AI is that we are using language of consciousness, knowledge, and perception in that discussion, when neither consciousness, knowledge, nor perception is part of the substance of ChatGPT. Our exchanges in some manner thrash around because we are misled by the very words we find ourselves forced to use. ChatGPT doesn’t “think” and it certainly doesn’t have an identity, and yet much of our talk about it reveals — and reinforces — a view that ChatGPT has its own subjective reality. We treat it as a “subjective entity,” and that’s our fault — or at least our limitation.
That’s also why I found grant writing so stimulating, since I often created first drafts and thus had some power over the eventual implementation of projects if they got funded. Grant writing in the sciences is, of course, a collaborative project, but the first draft sets an agenda and a path; writing that follows the first draft often elaborates, expands, fortifies — and, naturally, rejects and deletes.
As an aside to this footnoted aside: I pity members of NIH “study sections” and NSF panelists. They will have to wade through much ChatGPT prose and, I would imagine, will notice it. And grants have never been a particularly exciting and engaging genre, to say the least. What is it like for a human reader to review words that didn’t have a human origin? How does that color an assessment of scientific value? What does it say about the scientific enterprise?
To have come to a stirring conclusion and/or a definitive path would have really been audacious ... I don’t see how you can do anything but recognize we’re all still floating in the figuring-it-out stage.
This was a really interesting, incisive essay, Mark, and I will need to come back to it to follow up on the references and lines of thinking. Just a few points I'd like to make:
1. I don't believe in shitty first drafts because, perhaps egotistically, I don't regard my first drafts as shitty. They are initial steps. You wouldn't say to a toddler learning to walk, "Wow, that was a shitty first effort!" The issue becomes: does this draft reflect what I want to say, or have the effect I want it to have? And it's the result of a thinking process.
2. So it seems to me that when we use ChatGPT or embedded AI to create the first draft, we become critics rather than writers, at least in the first instance. But we have bypassed the initial part of the process.
3. However, it may be useful to give a head start. I've been experimenting again with ChatGPT and embedded AI recently. (Squarespace, which I use for two of my websites, has introduced an embedded AI to write text for you in response to a prompt.) I used ChatGPT to generate personas for a target readership, and the students on my course (about blogging) and I agreed that it would give someone a few useful ideas and starting points.