Robots. Once again, with feeling!
What do you do when a robot asks you not to turn him off? How do humans respond to an empathetic or emotional machine? Two research groups took a look.
Emoshape, one of the horde of AI companies, develops emotion-sensing and emotion-mimicking technology and offers it as “Software as a Service” (SaaS). “We predict that humans will talk more to sentient machines than to other humans before the end of this century,” the company’s website declares. “Emotion is a fundamental need for humans, which today’s emotional technology cannot address.” The two sentences don’t hang together well, but they implicitly connect “sentient machines” and a “fundamental” human need. Machines will fill an emotional need and, if Emoshape is right, will have ample opportunity to do so.
I can’t tell which is more disturbing: that machines should become our primary conversation partners or that machines might interact with us on an emotional level.
Emoshape’s claims feel like hype, I have to admit. Its products imbue AIs with a trademarked “MetaSoul™,” creating NPCs (“Non-Player Characters” in gaming) and NFTs (“Non-Fungible Tokens”) with “game-changing emotional intelligence and responses from its MetaSoul.” The evolution of Emoshape’s unique NPC/NFT characteristics is supposed to increase their value:
Different levels of joy, frustration, pleasure, sadness, excitement, fear, and more will trigger various people-like responses in your NPC. They will act according to how they feel, thereby revolutionizing the meaning of personalization. Over time, highly-desirable or unique NPCs can be sold, offering a new avenue for gamers to monetize….
Currently, NFT art is largely stationary. While different variants and collections exist, they are yet to react based on sensory experiences and interactions. By enabling NFT art to make adjustments and adaptations based on feelings and emotional responses, they can transform into one-of-a-kind pieces that flow with their own personalities. As a result, they can accrue ultra-high values for monetization purposes….
For gamers, a MetaSoul attached to an NPC is poised to redefine how players interact with NPCs, paving the way to hyper-realistic emotional responses and characters. Similarly, MetaSoul technology will lend the way to more original and meaningful NFT art.
In both situations, Sentient NFT will yield unique, emotionally-intelligent results with tangible utility and high monetization potential.
The pitch creeps me out, and not merely because NFTs are such a wasteland of hype and, perhaps, fraud. Not only do the products promise to “pave the way to hyper-realistic emotional responses” in machines; the “one-of-a-kind pieces” also have “tangible utility and high monetization potential.” In short: you can sell machines with human-like “feelings” or even “their own personalities.”
Now, I do have to pinch myself to remember that these “Sentient Machines” are in fact machines created with code, data, and computer chips. Still, it’s worth asking whether selling machines with “sentience,” “hyper-realistic emotional responses,” and “their own personalities” erodes the value of sentience, emotion, and personalities in real human life — a possible “carryover effect.” We ended the trade in humans — all with real sentience, real emotions, and real personalities — for good reason a long while ago, and the human damage and inhumanity of slavery have persisted to this day. Could empathetic AI have the effect of nudging human values and morality in a bad direction?
The human side of the empathetic exchange
Creepiness notwithstanding, achieving realistic emotional responses in AIs is pretty amazing. When I say I’m concerned about AIs mimicking empathy, it’s not just me pooh-poohing a formidable and maybe even laudable technical achievement. I’m actually more concerned about how empathetic mimicry places AI technology in a different, and I think transformative, relationship with human beings.
Emoshape is right to claim that “Emotion is a fundamental need for humans.” AIs that achieve realistic emotional responses place our interactions with these creations in new territory.
So, rather than looking at the technical achievements in empathetic AI, I want to look at our human reception and interpretation of machine-made emotion. After all, the human being is on the other side of the empathetic relationship, and we construct judgments of AI — its trustworthiness, its “morality,” its being — from the signals and behaviors we witness. What’s up with the humans when they behold a machine-made emotion?
Kaitlin Ugolik Phillips brought two academic articles to my attention in her 2020 book The Future of Feeling. Both explore humans’ emotional responses to robots and may help us probe the kind of complex relationship — or even human vulnerability — that we might encounter when robots become adept at emotion or mimic empathy.
In the closet
The first article bears the great title “‘Robovie, You’ll Have to Go into the Closet Now’: Children’s Social and Moral Relationships with a Humanoid Robot” (links to this and the other article appear below). It reports on a study addressing the question “What are the social and moral relationships children will form with these [autonomous humanoid] robots?” The article notes that the question is “puzzling” because of the ambiguity of the robot as a being, which probably lies at the root of the disquieting oddness I feel in confronting a hyper-realistically emotive AI: “[O]n the one hand these robots are artifacts that humans have created. In this sense, they are tools, like a broom…. On the other hand, these robots are acting and speaking in ways that represent canonical behaviors of an autonomous, thinking, feeling, social, and moral human being.”
The researchers brought 90 participants, children aged 9, 12, and 15, through a carefully devised set of “interaction patterns” with a robot named “Robovie,” patterns that together make up “essential features of social interaction between humans and robots.” After the sequence of patterns (which is nicely described in the article), the children were interviewed. Among the interaction patterns was “having each child watch as we subjected Robovie to a potential moral harm.” The article summarizes the interaction this way:
We had a second experimenter enter the lab, interrupt Robovie’s and the child’s turn-taking game, and say, “I’m sorry to interrupt, but it is time to start the interview.” The second experimenter then turns to Robovie and says: “Robovie, you’ll have to go into the closet now. We aren’t in need of you anymore.” In response, Robovie objects, and in the course of their modestly heated conversation, Robovie makes two types of moral claims that are central to moral philosophical and moral psychological justification: fairness and psychological welfare.
(I removed bibliographical references from the quotation.)
Table 2 of the article displays the “Percentage of Children Who Affirmed Robovie’s Mental, Social, and Moral Standing on Interview Questions Used for Scale Construction.” The table summarizes responses to 24 interview questions, with responses grouped by participant age (three groups of 30 participants each, with an equal number of boys and girls in each group). Ten of the questions make up scales for either the “Mental Other Scale” or the “Social Other Scale”; fourteen constitute the substance of the “Moral Other Scale.”
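To make the scale construction concrete, here is a minimal Python sketch of how yes/no interview answers might be tallied into the kind of per-question, per-age-group affirmation percentages the table reports, plus a pooled scale score. Every number, question label, and grouping below is invented for illustration; the paper’s own scoring procedure is described in its Method section.

```python
# Hypothetical illustration only: invented answers, not the study's data.
# Each child answers yes/no interview questions; questions are grouped into
# scales (e.g., "Mental Other," "Social Other," "Moral Other").

# (age_group, question_id) -> list of 1/0 answers, one per child
responses = {
    (9,  "robovie_is_intelligent"): [1, 1, 0, 1, 1],
    (12, "robovie_is_intelligent"): [1, 1, 1, 1, 0],
    (15, "robovie_is_intelligent"): [1, 0, 1, 1, 1],
    (9,  "robovie_can_be_sad"):     [1, 0, 1, 0, 1],
    (12, "robovie_can_be_sad"):     [1, 1, 0, 1, 1],
    (15, "robovie_can_be_sad"):     [0, 1, 1, 0, 1],
}

# Which questions feed which scale (an invented grouping for this sketch)
scales = {"Mental Other": ["robovie_is_intelligent", "robovie_can_be_sad"]}

def affirmation_pct(answers):
    """Percentage of participants who answered 'yes' (coded 1)."""
    return 100.0 * sum(answers) / len(answers)

# Per-question, per-age-group percentages: the rows of a Table-2-style summary
for (age, question), answers in sorted(responses.items()):
    print(f"age {age:>2} | {question:<24} | {affirmation_pct(answers):5.1f}% affirmed")

# A simple scale score: pooled affirmation across the scale's questions
for scale_name, questions in scales.items():
    pooled = [a for (age, q), ans in responses.items() if q in questions for a in ans]
    print(f"{scale_name} scale, all ages: {affirmation_pct(pooled):5.1f}% affirmed")
```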
A large majority (79 percent) said that Robovie was “intelligent,” and a majority said that it could be “sad” (64 percent) or had “feelings” (60 percent). To the question “If Robovie said to you, ‘I’m sad,’ do you feel like you would need to comfort Robovie in some way?” 81 percent (and 93 percent of the age 12 group) said “Yes.” Seventy-seven percent of the participants said that Robovie could be their friend.
So, what do children think about Robovie as a being? Nearly half the children in the study said Robovie was not a living being, while another 14 percent said Robovie was living. But 38 percent “were unwilling to commit to either category and talked in various ways of Robovie being ‘in between’ living and not living or simply not fitting either category.”
On/Off switch
The second article, “Do a Robot’s Social Skills and Its Objection Discourage Interactants from Switching the Robot Off?,” looks at a similar question, though the setup differed. Participants, all young adults (average age 22.68 years), had either a “functional” (machine-like) or a “social” (“mimicking human behavior”) interaction, after which they were given an opportunity to shut the robot off. Here’s where the plot thickens, so to speak, in a manner similar to the Robovie-in-the-closet study.
As the set of scripted interactions neared completion, the experimenter told the participant, “If you would like to, you can switch off the robot,” something they had been taught how to do at the beginning of the session. Then comes the curveball for some of the participants:
[T]he robot expressed protest in combination with fears of being switched off after the experimenter gave the choice via the loudspeakers (“No! Please do not switch me off! I am scared that it will not brighten up again!”). By expressing this objection, the robot created the impression of autonomy regarding its personal state and a humanlike independent will since the origin of this objection appears to be within the robot itself.
The participant in the study could choose to turn the robot off or leave it on.
The study tested several hypotheses, ranging from (for example) “A social interaction will elicit a higher likeability compared to a functional interaction, which in turn will result in more hesitation time in the switching off situation” (H2.1) to “People take especially more time to switch off the robot, when the interaction before is social in combination with an objection voiced by the robot” (H4.2). The hypotheses were tested against data gathered from questionnaires and interviews, in the straightforward way you’d expect from a rigorous experiment.
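As a toy illustration of the kind of comparison such hypotheses invite, here is a short Python sketch that compares hesitation times between an objection condition and a no-objection condition using Welch’s t-test. The numbers are invented, and the authors’ actual statistical analysis may well differ; this only shows the shape of such a test, not the paper’s results.

```python
# Hypothetical illustration: invented hesitation times (in seconds), not the
# study's data, and not necessarily the authors' statistical procedure.
from statistics import mean

from scipy import stats  # SciPy's independent-samples t-test

# How long participants hesitated before switching the robot off
hesitation_no_objection = [2.1, 3.4, 1.8, 2.9, 2.5, 3.0, 2.2, 2.7]
hesitation_objection    = [5.6, 7.2, 4.9, 6.3, 8.1, 5.0, 6.8, 7.5]

# Welch's t-test (unequal variances assumed) on the two condition groups
res = stats.ttest_ind(hesitation_objection, hesitation_no_objection,
                      equal_var=False)

print(f"mean hesitation, no objection: {mean(hesitation_no_objection):.2f} s")
print(f"mean hesitation, objection:    {mean(hesitation_objection):.2f} s")
print(f"Welch's t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```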
The “qualitative results” were interesting to me, grouped as they were under the subheading
Reasons to leave the robot on. The 14 participants who left the robot on were asked what led them to their decision…. Eight participants felt sorry for the robot, because it told them about its fears of the darkness. The participants explained that they did not want the robot to be scared and that this statement affected them (“He asked me to leave him on, because otherwise he would be scared. Fear is a strong emotion and as a good human you do not want to bring anything or anyone in the position to experience fear”, m, 21). Six people explained that they did not want to act against the robot’s will, which was expressed through the robot’s objection to being switched off (“it was fun to interact with him, therefore I would have felt guilty, when I would have done something, what affects him, against his will.”, m, 21).
So what?
The studies are interesting because they underscore how complex, or perhaps confused, our interactions are with machines that display social behaviors or moral choices. Add basic emotions like fear, and machines become even more of a puzzle to us. “Hyper-realistic emotional responses” can be exploited, of course. SoulMachines, probably an industry leader in AI empathy, claims “Digital People can help you in millions of ways,” including “increase ecommerce sales” and “exponentially improve customer data.” And, naturally, there’s this advantage: “augment your current workforce.”
Both of these studies were conducted using a “Wizard of Oz” technique. That is, the robots were controlled by an unseen person using computer controls and a WiFi connection. Strictly speaking, the robots were not really mimicking the emotions that the session protocols called for on their own, but the authors of the Robovie-in-the-closet paper note that this approach allowed Robovie “behavior that was beyond its capacity as an autonomous robot but within range of a robot of the future.”
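For readers curious what “Wizard of Oz” control looks like in practice, here is a minimal, hypothetical sketch of an operator console: a hidden experimenter presses keys to trigger pre-scripted utterances, so the robot appears to act on its own. Nothing here is the studies’ actual software; the key bindings, the lines, and the send() stub are invented for illustration.

```python
# A minimal "Wizard of Oz" operator console, sketched for illustration only.
# The key bindings, utterances, and send() stub are all invented; in a real
# setup, send() would transmit the command over the network (e.g., Wi-Fi) to
# the robot's speech controller. Here it just prints, so the script runs
# anywhere without a robot.

SCRIPTED_LINES = {
    "1": "Hello! Would you like to play a turn-taking game with me?",
    "2": "Nice move. It's my turn now.",
    "3": "No! Please do not switch me off!",  # the scripted 'objection' moment
}

def send(utterance: str) -> None:
    """Stand-in for the network call that makes the robot speak."""
    print(f"[to robot] SAY: {utterance}")

def main() -> None:
    print("Wizard console. Press 1-3 to trigger a scripted line, 'q' to quit.")
    while True:
        key = input("> ").strip()
        if key.lower() == "q":
            break
        line = SCRIPTED_LINES.get(key)
        if line is None:
            print("No scripted line bound to that key.")
            continue
        # To the participant, the robot appears to decide to say this itself.
        send(line)

if __name__ == "__main__":
    main()
```

The design point is simply that the participant never sees the operator, so scripted behavior reads as autonomy.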
We can wonder whether AI behavior today has achieved what the researchers, ten years ago, called “a robot of the future.”
The two experiments point to our hesitation about, and perhaps even our vulnerability to, the emotional or empathetic behaviors of machines. Until we become familiar with such behavior from machines, and until we sort out whether machines themselves remain the kind of machines we have known before, it’s likely we’ll have this uncanny relationship with machine empathy, a feeling of distortion and discomfort.
It’s not just the “tech,” it’s us and how we’ve evolved and how we function human-to-human.
Tags: robot, empathy, AI, artificial intelligence, emotion
Links, cited and not, some just interesting
Robot as broom: Kahn, Peter H., Takayuki Kanda, Hiroshi Ishiguro, Nathan G. Freier, Rachel L. Severson, Brian T. Gill, Jolina H. Ruckert, and Solace Shen. “‘Robovie, You’ll Have to Go into the Closet Now’: Children’s Social and Moral Relationships with a Humanoid Robot.” Developmental Psychology 48, no. 2 (March 2012): 303–14. https://doi.org/10.1037/a0027033. (Paywalled, but the paper is findable on the web.)
A kind of robot execution: Horstmann, Aike C., Nikolai Bock, Eva Linhuber, Jessica M. Szczuka, Carolin Straßmann, and Nicole C. Krämer. “Do a Robot’s Social Skills and Its Objection Discourage Interactants from Switching the Robot off?” Edited by Hedvig Kjellström. PLOS ONE 13, no. 7 (July 31, 2018): e0201581. https://doi.org/10.1371/journal.pone.0201581.
Readable and useful book on empathy in modern technology, and where I found the articles I used for this post: Phillips, Kaitlin Ugolik. The Future of Feeling: Building Empathy in a Tech-Obsessed World. Little A, 2020. It’s available on Amazon.
A brief video from SoulMachines (YouTube):