6 Comments

As usual, Mark, you manage to put these complex and multifaceted concepts into articles that read like poetry, where each word dances to the next, not based on a probability distribution, but based on deep thought and contemplation over much time.

Commenting from both a scientific perspective and a human perspective:

Scientific -

Use caution when distributing AI-generated content without attribution/citation/watermarking as AI-generated. When this content is used and attributed to humans, it will be used in training future models and can lead to model collapse (see the recent article explaining the concept by new Duke prof Emily Wenger: nature.com/articles/d41586-024-02355-z).

Human -

If you think of art and writing as merely the destination (the final product, and nothing more), why not use AI to generate content? I, however, think of art and writing as the journey - the journey of processing information over time, discovering and exploring, and, importantly, connecting with others through it. Take this piece you have written, Mark. If generated through ChatGPT, it would not allow us to connect on a deeper level about these topics. Rather, the connection is one-sided. The true beauty in writing and art is connection.


Poetry! I wish!

You're right that the evaluation of art and writing hinges on how we view their products: as artifacts or as a process. For the time being, I think we can look at artifacts and see their depth (or want of it), maybe making some judgment about a human or machine origin. But, as Isaiah says to AI Evan, what happens in "a world where the overlap is complete, and it’s impossible to distinguish" between human and machine?

For my students, I know that AI does short-circuit learning and the grasping of the kinds of connections that lead to beauty in art and writing. I guess an increasing portion of teaching is to maintain the integrity of the circuit and to help students see and value the effort, the challenge, even the fear that comes with deep learning of the human kind.

Our conversations have meant much to me, Brinnae, as I've puzzled through this maze.


Love the questions you’re asking here, Mark, especially: “could AI prod humans to reckon with inner lives of other creatures?” I’ve added Shell Game to my listening queue.


I think you'll like Shell Game. It's nuanced and evocative, so you move through it having to judge and think. All good. I'll be interested to see what Ratliff takes on for the second season.

This post was a real lingerer for me, and I felt I was taking a risk in pushing the publish button. Hacking the topic into a series of posts might make the project a bit easier to pull off, and I hope it'll give me time enough to flesh things out concretely and with less of my (signature?) nerdiness.

One thing that came out of doing this post and noodling about the follow-up pieces: I have shifted my view of AI a bit. I was always a bit conflicted, since I think the technology has some real benefits, so long as it can be approached and used as a tool. There are some exciting uses in scientific research, for example, that are worth pursuing and evaluating. We already know AI can actually work. But AI is also destructive--destructive in ways that are not readily apparent. AI needs a container that resists its corrosion of essentially human qualities, a way for us to cautiously and mindfully use the powers of AI without spilling it and corroding what we value.

Right now, I think the ways that we're talking about AI amount to some of that inadvertent spillage.

Been meaning to ask you about Patagonia....


A great essay, Mark, and I look forward to the rest of the pieces. I find myself contemplating AI quite a bit these days. I am frustrated by the adoption of marketing metaphors that deliberately warp people's perceptions of what is actually going on with this technology, and by people's instant faith in current AI.

Every time we inject machine content creation into a channel of human communication, that channel becomes debased, devalued, and disregarded. I fully expect talky AI to become omnipresent in the next few years, and for human speech to begin to mimic AI's tics -- as we teach AI to listen and talk, so it teaches us.

I do not look forward to a customer support AI asking me to prove I'm human before it will talk to me.

AI can make a pretty picture, but it's never going to be true art. (That doesn't mean people won't like it or experience feelings from it).

Humans have a powerful urge to anthropomorphize. Current AI exploits that, particularly with simple tricks to make it feel more human ("uh", "um", "like").


Evan Ratliff's Shell Game is really an interesting way of exploring the kind of compromises and debasements you mention. At one point, he even notices that he feels himself beginning to adopt some of the AI Evan's habits, and even the suspicion of that happening is disturbing to him.

On AIs detecting AIs, I did a little experiment with Shell Game. I asked for an AI transcription of episode 1, which had a rather rudimentary AI Evan. It did the job quite well, separating real Evan from AI Evan, and so on. I did the same for episode 5, and the AI transcription was confused and couldn't separate the real from the machined. Ratliff improved his AI voice over time, and the distinction in voice attributes narrowed. The cues that mattered to me in the last episode were mainly relational: how would a human being respond to the situation? The voice attributes were fairly polished; the "soul" was ... absent.

BTW, sending you an email soon. Looking forward to seeing you again!
