11 Comments

UX:

Humans have a strong cognitive bias toward anthropomorphism. It doesn't take much to trigger it -- one study showed that humans would anthropomorphize even a tennis ball on a stick if it moved slightly.

Wrapping current "AI" technology in a cute voice and giving it human tics simply exploits that bias. It will make the average person believe the machine is both "thinking" (more cognizant, conscious, and alive than it is) and more "correct" (or at least make us less inclined to aggressively assume it is wrong).

From a UX standpoint, this kind of superficial frosting is arguably the last thing you should worry about, and may be contrary to providing an accurate depiction of what the thing actually is...unless your primary goal is to camouflage your inaccurate, inhuman machine's failings. It strikes me as rather dystopian -- the smiling face masks strapped across authority in "Brazil".

It's a Trojan horse, and one that will eventually be specifically targeted to what the various systems and companies know about you. Think about how annoying, absurd, and accurate online advertising is, then extrapolate.

Authority:

In the short term, people will happily defer to the AI ("It's smarter than you and lacks your bias") before eventually griping all the time about how AI is inconsistent and lacks any nuanced understanding of a situation. Consider how frustrated people were with human tier-1 tech support, until everyone replaced it with those phone trees that force you through a bunch of options and utterly prevent you from speaking to a human.

Soon, our only portal into many institutions will be these chipper, unflappable, uncaring bots who refuse to deviate from their "friendly" scripts and programming.

Personally, the last thing I would want is for a hospital to wheel out one of those Segways with an iPad stuck to it, or worse, some kind of humanoid robot, all to break some bad news to me with chipper, unflappable, uncaring perfection before manipulating me into doing what its masters want.

I am pessimistic about the tech future at the moment, not because I think AI will be so amazingly good, but because it will be so amazingly bad, but appear to be "good enough" for it to be widely adopted. AI may not be "good at your job", but not being "good at the job" didn't stop Corporate America from offshoring many people's positions when doing so was substantially cheaper.

Finally, the fact that AI will be so accommodating and infinitely patient will lead some people never to learn the consequences of behaving badly, and will make it even more difficult for humans to talk to other humans (because everyone will be used to getting what they want and being pandered to and indulged all the time).

Your comment about being "good enough" lays out, I think, the path to expect: "not because I think AI will be so amazingly good, but because it will be so amazingly bad, but appear to be 'good enough' for it to be widely adopted." And that acceptably unacceptable grey area is also the most dangerous when it comes to quality. If something is just barely passable, or just barely not, people are more likely to settle for it--not without complaint, but just to get past the hassle of complaining or rectifying or improving.

I suspect that over time the limitations of AI will become apparent, as the hype now circulating around it fades even more. We might be surprised by what it can do well; we will probably put up with what it does only marginally okay. It'll be tough to compete even in that barely-okay range, but I hope humans might just press on with a preference for being with humans.

BTW, I'll be in touch with you soon.

Have you read any of the research that was conducted using Kismet? It was an earlier instantiation of a robot modeling human 'feelings' when interacting with humans, in this case using facial gestures. I need to go reread those studies to glean insights into how that research has been used to influence GPT development.

I just looked up Kismet and found the 1990s-era website in all its glory: http://www.ai.mit.edu/projects/humanoid-robotics-group/kismet/kismet.html. I vaguely remembered the look of the robotic "face."

The human response/UI/UX research is pretty interesting. One of my favorites is "'Robovie, you'll have to go into the closet now': Children's social and moral relationships with a humanoid robot" (http://doi.apa.org/getdoi.cfm?doi=10.1037/a0027033). I assign that article to students in my seminar in the fall, even though it's now over a decade old.

Good to hear from you, Darin!

Continuing to wrestle the AI beast, great stuff. (Not to get ahead of myself, but book 2?)

And the fact that doctors used AI to sound more human only proves to me that we’re living in Brazil. (Not the country, the Terry Gilliam movie.)

Book 2?! OMG, I'm just hoping to get through the one I'm working on! But the thought of doing something more with this interest has occupied a bit of my thought. There might be an opening somewhere for a resource to help students navigate some of the uses of AI/robots and the opportunities and treacheries that come with them. Lord knows I've waded through a lot of articles and research on it, some of which is particularly provocative and stimulates discussion and thought.

I was part of a small "roundtable" on AI in education and workforce development on Thursday and Friday--a collection of people mainly in healthcare and health education, in area higher ed, tech business, and federal government. It's clear that AI dwells in this oddly hopeful, threatening, promising, ambiguous, and maybe even unique (or just historically rare) experiential "space." I hope that a broad community of people (and it needs to be an inclusive and broad community!) can find ways of turning AI into a tool for people. The alternative is that people slip and fall into tool-like behaviors themselves ... and there are multiple meanings of the word "tool"!

I know, I was hesitant to utter the words 😅😅 but I think you’ve got something there.

I just saw an article retracted from a journal - ostensibly for an ethical violation, but what made it viral was the AI detritus the authors (and editors) left in the copy. I’ll have to send you the images, but suffice to say, the “publish or perish” environment of academia is primed and ready for abuses of the already questionable “tool” (by “tools” 😂).

Bravo to you for continuing to teach this incredibly challenging subject … I envy you the rationale for keeping up with all that is happening. I increasingly find that I am just watching it happen and feeling glad that I don’t need to be involved with it.

You know, it's really hard to find a place where you can be without having AI, or at least digital manipulation, touch your life. And in watching the development of AI unfold, it's hard not to be sucked into the often clownish drama that gets dished up, too. Sam Altman's transformation in the past 6-8 months is just a very visible example. We mere mortals can see the external evidence--the festering blister on the skin--and we wonder about an underlying infection.

Thank you for sharing this incredibly well researched and timely article. There are so many questions to explore and not enough time to do so given the rapid speed at which the tech is advancing. I appreciate the opportunity to slow down and think deeply about these topics, so thank you for providing the means to do that through reading this.

And, Brinnae, I'm thankful for your weekly posts on current solid AI research. (Everyone should subscribe to Spill the GPTea, BTW: https://spillthegptea.substack.com/). You're absolutely right about the speed of development and change in the field. It makes it a real challenge to plan a class, too.

But the speed is also somehow invigorating. The changes in GPT-4o, for example, were not wholly unexpected, though the integrations under the hood are no doubt impressive. Yet the change feels accelerated in experience simply because the "overlay" of empathy and human qualities changes the experience of interaction so dramatically. It really is possible to see how "Sky" and her voice could have the effect on people that was portrayed quite fictionally and fancifully in Her a decade ago.

I'll be in touch soon!
