Read time: about 9 minutes. This week: I break silence on ChatGPT and its ilk, extend earlier thinking about robots and machine empathy, and ponder a tribute to MEATSPACE. Next week: A review of Sally Mann’s Hold Still. It’s a memoir “with photographs,” and it’s very good!
The Boulangerie offers glimpses of what’s in a warm place rising or already in the bakery oven. This past week, the bakers took off to till their gardens. When something happens in the Boulangerie, I announce it only through my Mastodon loudspeaker: @mrdelong@mastodon.online.
If you got this from a friend, how about getting your own copy? A subscription is free, and it’s only another email.
I’ve been silent, mostly, about ChatGPT, Microsoft Bing’s chatbot, Google’s Bard, and the GPT-4 debut. It’s not that I haven’t thought about them but rather that the landscape shifts so quickly that the terms and features remain paradoxical and fluid. At least so it seems to me. What you’d write about one week may be — no, could well be — irrelevant or useless the next. Or at least in need of qualification.
You might witness the technology shift within hours if you wrestle with the burbling topics of “AI” or LLMs (large language models). (I list three of the reasons for this at the end of the post, and there are many more.)
Too many knuckles
Kyle Chayka probably noticed the quickened shift. In a newsletter post that appeared on March 27, he mentions his article “The Uncanny Failures of A.I.-Generated Hands,” published online in The New Yorker on March 10. AI has trouble with hands: “Midjourney can generate a realistic human being, but the fingers have too many knuckles.” In that article, Chayka writes, “The strange contortions of A.I. hands make me feel a sense of anticipatory nostalgia, for a future when the technology inevitably improves and we will look back on such flaws as a kitschy relic of the ‘early A.I.’ era, the way grainy digital-camera photos are redolent of the two-thousands.”
That future when the technology inevitably improves came quickly. On March 26, a day before Chayka’s post went out to his Substack readers, Pranshu Verma’s article “AI Can Draw Hands Now. That’s Bad News for Deep-Fakes” appeared in the Washington Post. Too many knuckles no more, because “in mid-March, Midjourney … released a software update that seemed to fix the problem, with artists reporting that the tool created images with flawless hands.”
The matter of hands, and machines’ growing competence in rendering them, illustrates part of the problem with saying anything about “AI”: by the time you utter something, your utterance can already be a matter of nostalgia. That’s how fast the tools are developing.
It’s not just a matter of keeping up in your writing and thinking, either; it’s a matter of being able to navigate larger, more existential changes. Even Dan Shipper, an AI promoter if anyone is, has qualms, and he has a circumstance in mind when he “might hop off the AI train.” He wrote, “People build their lives and make decisions based on a set of promises that society makes about what they’ll get if they behave a certain way. If AI progresses so quickly that it breaks all of those promises all at once and with no warning, it would be unethical and deeply unfair.”
Get more independent writing. I recommend The Sample. They send one newsletter sample a day. No obligation. You can subscribe if you like it.
By the way, Kyle Chayka is finishing up his manuscript for Filterworld: How Algorithms Flattened Culture, which will appear in early 2024. His previous book on minimalism and his other work, especially for The New Yorker, show that he’s among the top thinkers in matters of art, technology, and society. When I think of the circumstances of AI and algorithms embedded in so many commonly used apps and services, I begin to see some of the challenge that I imagine Chayka faces as he goes through his book drafts. His topic is timely and hugely important, but the technologies he considers as he moves through his task, well, they are early forms — larva-like, changing into a seemingly unrelated pupa with each molt and turn. That’s a challenge to trace and explore, much less to interpret.
I’m looking forward to reading his book.
Shape-shifting god playing with us? Without “knowing” anything, of course.
I wrote about empathy and AI in February last year. I was concerned then about the reception of artificial “empathy,” which seemed to me inherently manipulative. “Rather than looking at the technical achievements in empathetic AI, I want to look at our human reception and interpretation of machine-made emotion,” I wrote. “After all, the human being is on the other side of the empathetic relationship, and we construct judgments of AI — its trustworthiness, its ‘morality,’ its being — from the signals and behaviors we witness. What’s up with the humans when they behold a machine-made emotion?”
The term “empathetic” is weaselly when applied to an AI, since AIs have no capacity to feel empathy or, for that matter, any sort of intelligence or “inner life.” Theirs is even less than the empathy of a psychopath — displayed, but empty. No, I think the emotional attachment springs from the medium presented to and perceived by humans: “chat,” or competent (though not particularly eloquent) sentences in response to human statements.
In an “open letter” prompted in part by a “chatbot-incited suicide in Belgium,” the authors point out that
emotional manipulation can also manifest itself in more subtle forms. As soon as people get the feeling that they interact with a subjective entity, they build a bond with this “interlocutor” — even unconsciously — that exposes them to this risk and can undermine their autonomy. This is hence not an isolated incident. Other users of text-generating AI also described its manipulative effects [web links provided in the original letter].
The words used to describe interactions with AI chatbots often implicitly attribute human or intelligent qualities to them. And the interaction with chatbots is powerful and can be “deeply unsettling.” For that matter, the term “artificial intelligence” itself leans toward assigning a “subjective entity” (as the open letter writers put it) to AIs. And, as many have pointed out, “artificial intelligence” completely mislabels the kinds of things we’re seeing today, hyped as they are by their makers, who have an interest in portraying their products as “intelligent.” It’s worth noting that chat AIs use emoticons, too. That small typographical detail has rhetorical meaning and further complicates the emotional freight of an exchange with a chatbot.
Over a year ago, I wrote,
Until we become familiar with such [emotional and empathetic] behavior from machines — and whether machines themselves remain the kind of machines we have known before — it’s likely we’ll have this uncanny relationship with machine empathy, a feeling of distortion and discomfort. It’s not just the “tech,” it’s us and how we’ve evolved and how we function human-to-human.
Today the machines are a bit less uncanny, and our feeling of distortion and discomfort has diminished. But as a result, we humans may be slipping into a significantly more threatening error — namely, misinterpreting AI as a “subjective entity.”
The habits of language that we use to describe AI are partially — maybe even largely — to blame.
Emily M. Bender of the University of Washington, no shrinking violet when it comes to arguing that AI needs transparency, guidelines, and a large helping of humility, thinks a name change for “AI” might not be a bad thing. In a profile of her in New York magazine, Elizabeth Weil reports that
Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
SALAMI. Sounds about right to me. The name might just provide a certain distance, not to mention humor, and blow off some of the pretension of the term Artificial Intelligence.
Hooray for Meatspace
Actually, I do prefer authentic intelligence in most cases, especially when it comes to writing and reading. I’d be perfectly happy to leave routine and boring bits of life to blatantly obvious machine artificiality — maybe even labeled “artificial,” like a food additive. You know, the SALAMI section.
Let’s leave matters closest to human meaning and happiness to the humans.
This week’s “Shouts & Murmurs,” The New Yorker’s regular humor column, often riffing on current events, features “Upcoming Landmarks in Artificial Intelligence.” Henry Alford conjured up this one. “Novelists and poets galvanized by chatbots’ having provided new reason to embrace alcoholism,” reads one of the upcoming landmarks. The next one on the list is “Proliferation of AI-derived art causes painters and sculptors to form union, Artists in the Meatspace.”
How about “Writers in the Meatspace”? Like here on Substack? If I were a graphic designer, I’d love to come up with a “seal of approval” for my newsletter that would assure readers that the words, photographs, and drawings came forth from a human being, however flawed and limited, and were not “synthetic media” spewed forth by an algorithm exquisitely tuned to conjure up bullshit.
Anyone care to join me in a pledge to stick with meat rather than silicon chips? Anyone with a bit of artistic flair willing to come up with a seal? We could display it together — a celebration of humanity and meatitude. It’d be the “Good Housekeeping” seal for a human corner of Substack.
It would be awesome.
Got a comment?
Three of the reasons for the rapid shifts in AI-centered services and their presentation. In part it’s because the developments have left the lab … or have turned the marketplace into the lab.
The technology is quickly being developed and, to the consternation of many, quickly made available to consumers, with some warning. The quick release may serve two purposes: to exert pressure on competitors and to press consumers into service by having them run the AIs through wide-ranging, real-life challenges — a Pandora’s Black-Box form of quality control. Both of these purposes have some nasty downsides and, it seems to me, few benefits for human beings sitting in front of their displays.
The technology is being integrated into existing applications (like search or even Microsoft Word) or adapted for specific purposes, either by industry or by individuals. You can get an “AI” to write an employment application letter for you at Careered. No promises about success, though. And some have given ChatGPT, Bing, and Bard a chance at serving as their assistants. The integration of an AI along the lines of ChatGPT into word-processing programs quite obviously changes the essential nature of composing thought through writing — even more drastically than the first “word processors” did in the 1980s.
The “edges” of the technology aren’t apparent and are difficult to describe. Testing of products claiming to be “AI” appears more incomplete than is usual for software products. This is probably one reason ordinary language has such trouble describing the technology; it is easy to mislabel, say, a response from ChatGPT as “intelligent” and to reach for metaphors of thought and emotion in descriptions. Perversely, the difficulty of testing might be exactly the reason “AI” chat has been released to the public: the makers can test in real life, regardless of the consequences, since it’s only a beta, more or less.
Tags: SALAMI, AI, artificial intelligence, synthetic media, algorithm, authenticity
Links, cited and not, some just interesting
Chayka, Kyle. “The Uncanny Failures of A.I.-Generated Hands.” The New Yorker, March 10, 2023. https://www.newyorker.com/culture/rabbit-holes/the-uncanny-failures-of-ai-generated-hands.
Verma, Pranshu. “AI Can Draw Hands Now. That’s Bad News for Deep-Fakes.” Washington Post, March 26, 2023. https://www.washingtonpost.com/technology/2023/03/26/ai-generated-hands-midjourney/.
You can sign it, if you want. Smuha, Nathalie A., Mieke De Ketelaere, Mark Coeckelbergh, Pierre Dewitte, and Yves Poullet. “Open Letter: We Are Not Ready for Manipulative AI – Urgent Need for Action.” KU Leuven, March 31, 2023. https://www.law.kuleuven.be/ai-summer-school/open-brief/open-letter-manipulative-ai.
Profile of Emily Bender. Weil, Elizabeth. “You Are Not a Parrot.” Intelligencer, March 1, 2023. https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html.
A much-discussed and much-debated article. Roose, Kevin. “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled.” The New York Times, February 16, 2023, sec. Technology. https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html.
Marcus is an important voice in AGI and AI safety. He predicted that 2023 would see the first AI-incited death. Marcus, Gary. “The First Known Chatbot-Associated Death.” The Road to AI We Can Trust (Substack newsletter), April 4, 2023.
It’s worth noting that Marcus and Bender don’t see eye-to-eye. Discussions about regulating AI regularly explode, and I think perhaps to no one’s benefit. But academics like to argue, and the topic lends itself to polemics.
Even fanbois have their limits. Shipper, Dan. “Awe, Anxiety, and AI,” March 24, 2023. https://every.to/chain-of-thought/awe-anxiety-and-ai.
Alford, Henry. “Upcoming Landmarks in Artificial Intelligence.” The New Yorker, March 27, 2023. https://www.newyorker.com/magazine/2023/04/03/upcoming-landmarks-in-artificial-intelligence.