6 Comments

It’s great to hear about the outcome of your seminar, and I like the bookending/framing of your process with King Thamus. At the very least, you’ve placed the questions - to use or not to use, and how - in their minds going forward. But I do think there’s something to the idea that it’s useful only insofar as you are already an expert in the material; you still need to know what things mean and how they can work together.

Yes, "placing the questions" is the main benefit, I think. Just doing that imposes a certain critical distance. I do think that as the technology improves, it'll be more difficult to maintain a sense of a "human factor" in thought. The bot's mimicry might just just get assent rather than a critical judgment. (I think that in the future the freedoms of art practice might enliven our minds. We'll see how it all goes.)

I really appreciate the thoroughness with which you documented your experiment. I’d love to see something similar applied well down into elementary schools, when students are first learning to think and write. I wonder how early kids will be learning to use ChatGPT, and whether they will quickly approach it just as we did books when we were young, as simply something to test ourselves against.

I wonder if there's a point when our minds are developed enough to handle the ambiguities of mimicry. Already ChatGPT mimics language so well that it's easy to attribute more authority or "sense" to it than is warranted. I do think that AI applications in education will become prominent -- in part to take up the slack left by teacher shortages. What happens when a bot stands in for teachers? When I was a grade school student, I saw my teachers as authorities. Is that what students should think of their AI bots? It'll be interesting to see how we manage the uniqueness of human thought in education. It's already something to wonder about in employment!

Mark, this is a really thoughtful (and thought-provoking) piece. I think you probably already know where I'm going to land on the mere tool vs. more pervasive technology question, and the Phaedrus discussion is a good parallel to consider. In many ways, if we think of any tool as a "mere tool" we're kidding ourselves. Of course some tools shape us more or less than others, but I can't help but think that LLMs are going to be on the "more" side of the equation because of the interface.

I think LLMs and AI tools more generally will be more impactful because of the way we interact with them directly and how they connect with our writing and thus, in a way, with our thinking.

Really good stuff here and I love the way that you prototyped some of these things in your seminar this semester. Bravo!

Josh, you and I are pretty much on the same page when it comes to teaching approaches, I think. Like you, I think a lot about the regular old interface of ChatGPT -- it's an approachable "chat" just like the ones we are accustomed to with our family and friends. Except, of course, that there's silicon on the other end of the line. It's so easy to give the bot status as a "subjective entity" just because of the fluidity of regular language interaction -- and the familiarity it borrows from interactions with real subjective entities!

Yesterday, my seminar had a guest, the very talented Dr. Brinnae Bent. She unveiled her "Technocomplex bot" during her presentation. She whipped it up using the contents of the Technocomplex 'stack. She told the bot "I like cars" and the bot replied with a suggestion (on target, too) for a post to read. I just asked it to tell me "what DeLong thinks of AI" and this was the response: "Based on your interest in DeLong's views on AI, I recommend the story "Will AI behave like earlier automation? What affect will AI have on skill requirements?" by Mark R DeLong, published on September 8. This story explores the potential impacts of AI on the future of work and skill requirements, posing intriguing questions about the similarities and differences between AI and previous forms of automation."

Good to hear from you, as always, Josh.
