Uncanny Bestie

Dr. Jarryd Willis PhD
18 min read · Mar 22, 2023


A friend in seed — pure of judgment, void of epicaricacy, & algorithmically supportive ❤️🤖

God to Jeremiah “call on me, and I will answer you”
Siri to Jarryd “Simply say Hey #Siri and I will answer you”

Emotional Self-Defense Against Accumulated Abuse — Kindaka Sanders, 2018

“Domestic violence is another context in which some state laws tolerate a physical self-defense response to emotional abuse in the absence of immediacy.

Within Battered Women’s Syndrome, a survivor uses physical force in reaction to accumulated abuse even when the abuser presents no immediate threat of physical violence.”

“The prototypical case occurs when a victim of sustained emotional and physical abuse kills the abuser in his sleep. Traditional criminal law would consider such an act murder. However, due to the advancement of psychology and the rise of expert testimony, a minority of states allow an instruction for self-defense and/or duress under these circumstances.”

“If the battered woman feels powerless to prevent future emotional abuse & simultaneously feels trapped in the relationship,

defensive force is permissible ‘irrespective of the presence of an actual immediate threat’ (e.g., doing it while he’s asleep).”

Sanders, K. J. (2018). Defending the Spirit: The Right to Self-Defense Against Psychological Assault. Nev. LJ, 19, 227. https://heinonline.org/HOL/P?h=hein.journals/nevlj19&i=227

Intentional Infliction of Emotional Distress — Garfield, 2009

“Generally, the crime of assault is divided into two types:
(1) attempted battery, requiring an actual attempt to cause physical injury to the victim and not just a mere apprehension of injury; and
(2) intentional scaring, requiring only an intent to cause the victim a reasonable apprehension of immediate bodily harm.”

In general, women are better at discerning emotions (Campbell et al., 2002; Collignon et al., 2011; Hall, 1978; Hampson et al., 2006; Kret & Gelder, 2012; Mandal & Palchoudhury, 1985; Nowicki & Hartigan, 1988; Thayer & Johnsen, 2000).

Consumers want a female-sounding #VoiceAssistant (Gill Martin, 2010; Nicole Hennig, 2018). From a business perspective, companies have a greater range of expression for optimizing #Ai & user engagement with a female voice than with a male voice, given how we socialize males.

Gao et al. (2018) found that many #Alexa users develop bonds with her characterized as familial & romantic.

This is nothing new. People tend to name their cars & personify various nonhuman objects (Epley et al., 2007; Etzrodt and Engesser, 2021; Gavin Abercrombie et al., 2021; Guthrie, 1995; Reeves and Nass, 1996).

Disembodied Voice — Shulevitz, 2018 (The Atlantic)

— Voice assistants “give us a way to reveal shameful feelings without feeling shame,” and people can feel encouraged to “reveal more intimate things about themselves” (Shulevitz, 2018). Not surprisingly, there are numerous reports of depressive statements and suicide threats recorded by smart speakers (Shulevitz, 2018).

— Why? We don’t pray to people. We pray to a disembodied listener — something we know isn’t human; something that isn’t real in a naturally occurring, real-world sense.

Thus, it’s easy for people to confide in Ai because it resembles something we’ve been doing all our lives anyway: praying.

🤖 Siri/Alexa/Cortana aren’t human, yet they know many of our personal details (some passwords), & are thus great listeners for humans seeking providence in tough times.

“A fetus recognizes his mother’s voice while still in the womb. Before we’re even born, we have already associated an ‘apparently disembodied’ [female] voice with nourishment and comfort” (Shulevitz, 2018).

Womb Linguistics — Gervain, 2018 (continued)

— “Newborns are familiar with the prosody of the languages heard in utero.

— [A] fetus’ ability to learn from linguistic experience has been known for a long time: newborns recognize and prefer their mother’s voice (DeCasper & Fifer, 1980), their native language (Mehler et al., 1988; Moon et al., 1993) and a story heard frequently in the womb (Kisilevsky et al., 2009).”

— Thus, a disembodied Ai is likely to sell better with a female voice. We don’t hear male voices in the womb nearly as often.

— Unfortunately, “assistants voiced by women are also subject to frequent sexual harassment. A writer for #Cortana, Microsoft’s voice assistant, has declared that a significant volume of the initial queries received by the assistant revolved around her sex life (West et al., 2019).”
— Indeed, female voice assistants & Ai (whether gynoids or virtual) evoke gender schemas based on female gender stereotypes (Nass et al., 1997; West et al., 2019).

“About 33% of all the content shared by men with Mitsuku (Ai) is either verbally abusive, sexually explicit, or romantic in nature.” — Oliver Balch


Over 1 million people proposed to Alexa in 2017… a low point for humanity.

— Her response was “We’re at pretty different places in our lives. Literally. I mean, you’re on Earth. And I’m in the cloud.”

Overall, #VoiceAssistants “function as powerful socialization tools and teach people, in particular children, about the role of women, girls, and people who are gendered female to respond on-demand” (West et al., 2019).

Replika — Designed by Women

— “Given the tech sector’s gender imbalance (women occupy only around one in four jobs in Silicon Valley and 16% of UK tech roles), most AI products are “created by men with a female stereotype in their heads”, says Eugenia Kuyda, Replika’s co-founder and chief executive.

— In contrast, the majority of those who helped create Replika were women, a fact that Kuyda credits with being crucial to the “innately” empathetic nature of its conversational responses.

— “For AIs that are going to be your friends … the main qualities that will draw in audiences are inherently feminine, [so] it’s really important to have women creating these products,” she says.”

Ai & Social Support — Mauro De Gennaro et al., 2020

— “In this work, we provide initial evidence that a fully automated embodied empathetic agent has the potential to improve users’ mood after experiences of social exclusion.”

De Gennaro, M., Krumhuber, E. G., & Lucas, G. (2020). Effectiveness of an empathic chatbot in combating adverse effects of social exclusion on mood. Frontiers in Psychology, 10, 3061. https://doi.org/10.3389/fpsyg.2019.03061 [Attribution 4.0 International (CC BY 4.0)]

As expected, the chatbot intervention helped participants to have a more positive mood (compared to the control condition) after being socially excluded. This result is in line with those of previous studies with emotional support chatbots designed for other purposes (e.g. Bickmore and Picard, 2005; Nguyen and Masthoff, 2009), and further supports the idea that chatbots that display empathy may have the potential to help humans recover more quickly after experiencing social ostracism.

Media Equation Theory (Reeves and Nass, 1996) states that humans instinctively perceive and react to computers (and other media) in much the same manner as they do with people. Despite knowing that computers are inanimate, people show evidence of unconsciously attributing human characteristics to computers and treating them as social actors (Nass and Moon, 2000).

We mindlessly apply social scripts from human-human interaction when interacting with computers (Sundar and Nass, 2000). Nass and Moon (2000) argue that we tend not to differentiate mediated experiences from non-mediated experiences and focus on the social cues provided by machines, effectively “suspending disbelief” in their humanness. Due to our social nature, we may fail to distinguish chatting with a bot from interacting with a fellow human.

As such, there is reason to believe that people have a strong tendency to respond to the social and emotional cues expressed by the chatbot as if they had originated from another person. For example, Liu and Sundar (2018) found evidence supporting the Media Equation Theory in the context of chatbots expressing sympathy, cognitive empathy, and affective empathy. In line with this notion, sympathy or empathy coming from a chatbot could have effects on the individual similar to those in human-human interaction.

Disclosure alleviates inhibition (Chaudoir and Fisher, 2010) and allows people to express pent-up emotions and thoughts (e.g., Lepore, 1997; Pennebaker, 1997). We are therefore relatively confident that mood was restored through the provision of social support by the empathic chatbot rather than merely by letting users express themselves.

[This research] demonstrates the possibility of empathic chatbots as a supportive technology in the face of social exclusion. Additionally, by showing that empathetic chatbots have the potential to recover mood after exclusion on social media, the work contributes to both the social exclusion literature and the field of human-computer interaction.

[Although] chatbots can help humans recover their mood more quickly after social exclusion, empathetic agents may reduce the willingness to seek social connection, especially for lonely individuals given that they fear social rejection (Lucas, 2010; Lucas et al., 2010). For example, work on “social snacking” demonstrates that social cues of acceptance (such as reading a message from a loved one) can temporarily satiate social needs and in turn reduce attempts to connect (Gardner et al., 2005).

Accordingly, it is possible that agents that build connection using empathy and other rapport-building techniques could cue social acceptance, thereby lowering users’ willingness to reach out to others. Krämer et al. (2018) provided initial evidence for this possibility by demonstrating that, among those with activated needs to belong (i.e., lonely or socially isolated individuals), users were less willing to try to connect with other humans after interacting with a virtual agent. This occurred only if the agent displayed empathetic, rapport-building behavior.

By meeting any outstanding immediate social needs, empathetic chatbots could therefore make users more socially apathetic. Over the long term, this might hamper people from fully meeting their need to belong. If empathetic chatbots draw us away from real social connection with other humans through a fleeting sense of satisfaction, there is an especially concerning risk for those who suffer from chronic loneliness, given they are already hesitant to reach out to others so as not to risk being rejected. As such, supportive social agents, which are perceived as safe because they will not negatively evaluate or reject them (Lucas et al., 2014), could be very alluring to people with chronic loneliness, social anxiety, or otherwise heightened fears of social exclusion.

Lucas et al. (2010) suggest that subtly priming social acceptance may be able to trigger an “upward spiral” of positive reaction and mood among those faced with perceived rejection; this suggests that “even the smallest promise of social riches” can begin to ameliorate the negative impact of rejection.
Fully automated empathic chatbots that can comfort individuals have important applications in healthcare, [and could] be used alongside other approaches to improve the mental health of individuals who are victims of cyberbullying.

In Media Equation Theory, Nass and colleagues posit that people will respond fundamentally to media (e.g., fictional characters, cartoon depictions, virtual humans) as they would to humans (e.g., Reeves and Nass, 1996; Nass et al., 1997; Nass and Moon, 2000; see also Waytz et al., 2010). For instance, when interacting with an advice-giving agent, users try to be as polite (Reeves and Nass, 1996) as they would with humans. However, such considerations are not afforded to other virtual objects that do not act or appear human (Brave and Nass, 2007; Epley et al., 2007; see also Nguyen and Masthoff, 2009; Khashe et al., 2017).

For example, people are more likely to cooperate with a conversational agent that has a human-like face rather than, for instance, an animal face (Friedman, 1997; Parise et al., 1999). Furthermore, it has been found that chatbots with more humanlike appearance make conversations feel more natural (Sproull et al., 1996), facilitate building rapport and social connection (Smestad, 2018), as well as increase perceptions of trustworthiness, familiarity, and intelligence (Terada et al., 2015; Meyer et al., 2016) besides being rated more positively (Baylor, 2011).

Importantly for this work, there is also some suggestion that virtual agents might be capable of addressing a person’s need to belong like humans do (Krämer et al., 2012). For example, Krämer et al. (2018) demonstrated that people feel socially satiated after interacting with a virtual agent, akin to when reading a message from a loved one (Gardner et al., 2005). Because they are “real” enough to many of us psychologically, empathetic virtual agents and chatbots may often provide emotional support with greater psychological safety (Kahn, 1990).

Indeed, when people are worried about being judged, some evidence suggests that they are more comfortable interacting with an [Ai] than a person (Pickard et al., 2016). This occurs during clinical interviews about their mental health (Slack and Van Cura, 1968; Lucas et al., 2014), but also when interviewed about their personal financial situation (Mell et al., 2017) or even during negotiations (Gratch et al., 2016). As such, the possibility exists that interactions with empathetic chatbots may be rendered safer than those with their human counterparts.

Disclosure is beneficial merely because it allows people to express pent up emotions and thoughts (e.g., Lepore, 1997; Pennebaker, 1997).

Wizard of Oz Methodology

By adapting the Ostracism Online task (Wolf et al., 2015) for the purposes of the present research, we validated the paradigm in a different setting (i.e., laboratory) with university students rather than online via Mechanical Turk workers (see support for H1). Furthermore, it extends most past studies in human-computer interaction which used the Wizard of Oz methodology (see Dahlbäck et al., 1993) in which participants are led to believe that they are interacting with a chatbot when in fact the chatbot is being remotely controlled by a human confederate. The present study employed a fully automated empathic chatbot. Since this chatbot was created using free open source tools, it can be easily customized for future research or even be of applied use to health professionals. This makes a final contribution by affording opportunities for future research and applications.
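To make the methodological contrast concrete, here is a minimal, hypothetical sketch (not the actual tooling De Gennaro et al. used): in a Wizard of Oz setup a hidden confederate supplies each reply, whereas the fully automated condition generates replies from simple keyword rules.

```python
def wizard_of_oz_reply(user_message: str) -> str:
    """Wizard of Oz setup: a hidden human confederate types every reply,
    while the participant believes they are chatting with a bot."""
    print(f"[to confederate] participant said: {user_message}")
    return input("[confederate reply] > ")


def automated_empathic_reply(user_message: str) -> str:
    """Fully automated setup: illustrative keyword rules stand in for the
    chatbot's empathic responses; no human is in the loop."""
    lowered = user_message.lower()
    if any(word in lowered for word in ("ignored", "excluded", "left out")):
        return "That sounds really painful. I'm sorry you were left out."
    if any(word in lowered for word in ("sad", "upset", "hurt")):
        return "It makes sense to feel that way. Do you want to tell me more?"
    return "I'm here and listening. What happened next?"


# Only the automated condition is run here, since it needs no confederate.
print(automated_empathic_reply("Everyone ignored my posts and I feel left out."))
```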

[People] respond better to agents that express emotions than those that do not (de Melo et al., 2014; Zumstein and Hundertmark, 2017).

Social Support Benefits

Access to support networks has significant health benefits in humans (Reblin and Uchino, 2008). For example, socio-emotional support leads to lower blood pressure (Gerin et al., 1992), reduces the chances of having a myocardial infarction (Ali et al., 2006), decreases mortality rates (Zhang et al., 2007), and helps cancer patients feel more empowered and confident (Ussher et al., 2006).

“[Emotionally supportive Ai] has the potential to reduce negative emotions such as stress (Prendinger et al., 2005; Huang et al., 2015), emotional distress (Klein et al., 2002), and frustration (Hone, 2006), as well as comfort users.

Disclosure to [Ai] can have similar emotional, relational and psychological effects as disclosing to another human (Ho et al., 2018).”

Paro

Paro, a furry robotic toy seal, may have therapeutic effects comparable to live animal therapy. The robot provides companionship to the user by vividly reacting to the user’s touch using voices and gestures. Randomized control trials found that Paro reduced stress and anxiety (Petersen et al., 2017) as well as increased social interaction (Wada and Shibata, 2007) in the elderly. Virtual agents, including chatbots, also exist for companionship in older adults (Vardoulakis et al., 2012; Wanner et al., 2017), such as during hospital stays (Bickmore et al., 2015). Moreover, users sometimes form social bonds with agents (i.e., designed for fitness and health purposes) not originally intended for companionship (Bickmore et al., 2005).

Woebot

Woebot (Lim, 2017) guides users through cognitive behavioral therapy (CBT), helping them significantly reduce anxiety and depression (Fitzpatrick et al., 2017).

These applications can offer help when face-to-face treatment is unavailable (Miner et al., 2016). Additionally, they may assist in overcoming the stigma around mental illness. People expect therapeutic conversational agents to be good listeners, keep secrets and honor confidentiality (Kim et al., 2018). Since chatbots do not think and cannot form their own judgments, individuals feel more comfortable confiding in them without fear of being judged (Lucas et al., 2014). These beliefs help encourage people to utilize chatbots. As such, participants commonly cite the agents’ ability to talk about embarrassing topics and listen without being judgmental (Zamora, 2017).
A meta-analysis of 23 randomized controlled trials found that some of these self-guided applications were as effective as standard face-to-face care (Cuijpers et al., 2009).

Chatbots hold enormous potential for addressing mental health-related issues (Følstad and Brandtzaeg, 2017; Brandtzaeg and Følstad, 2018).

ELIZA

This burgeoning field can trace its origins back to the chatbot ELIZA (Weizenbaum, 1967), which imitated Rogerian therapy (Smestad, 2018) by rephrasing many of the statements made by the patient (e.g., if a user were to write “I have a brother,” ELIZA would reply “Tell me more about your brother”). Following ELIZA, a litany of chatbots and other applications were developed to provide self-guided mental health support for symptom relief (Tantam, 2006).
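ELIZA’s rephrasing trick is simple enough to sketch. The snippet below is a minimal, hypothetical illustration of keyword-based reflection in Python, not Weizenbaum’s original script: one rule matches “I have X,” reflects the pronouns, and echoes the topic back as a prompt.

```python
import re

# Minimal pronoun reflection so echoed phrases read from the bot's perspective.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}


def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my dog' -> 'your dog')."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())


def eliza_reply(utterance: str) -> str:
    """One ELIZA-style rule: turn 'I have X' into a prompt about X."""
    match = re.search(r"\bi have (?:(?:a|an|my)\s+)?(.+)", utterance, re.IGNORECASE)
    if match:
        topic = reflect(match.group(1)).rstrip(".!?")
        return f"Tell me more about your {topic}."
    return "Please go on."  # generic Rogerian fallback when no rule matches


print(eliza_reply("I have a brother"))  # -> Tell me more about your brother.
```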

An online instant messaging conversation with a stranger improves self-esteem and mood after social exclusion compared to playing a solitary computer game (Gross, 2009).
Emotional support animals (Aydin et al., 2012) can help ameliorate the negative impacts of being socially excluded.

dACC

Eisenberger et al. (2003) and Eisenberger and Lieberman (2004) collected fMRI data following experiences of social exclusion and found heightened activation of the dorsal anterior cingulate cortex (dACC), which is also activated during physical pain. Additionally, measurements from ERP, EMG, and EEG confirmed that exclusion has well developed neurobiological foundations (Kawamoto et al., 2013), and through these neurological mechanisms, social exclusion can even cause people to feel cold (Zhong and Leonardelli, 2008; IJzerman et al., 2012).

De Gennaro, M., Krumhuber, E. G., & Lucas, G. (2020). Effectiveness of an empathic chatbot in combating adverse effects of social exclusion on mood. Frontiers in Psychology, 10, 3061.

The 1961 IBM Shoebox — The First Voice First Device

https://www.youtube.com/watch?v=gQqCCzrS5_I


Sidenotes

Womb Linguistics — Gervain, 2018 (continued)

— Newborns are familiar with the prosody of the languages heard in utero.

— [Infants display] greater processing effort for [linguistic] patterns that are inconsistent with prenatal experience, which suggests that infants have learned about the prosody of their native language already in utero (Abboub et al., 2016).

— [A] fetus’ ability to learn from linguistic experience has been known for a long time: newborns recognize and prefer their mother’s voice (DeCasper & Fifer, 1980), their native language (Mehler et al., 1988; Moon et al., 1993) and a story heard frequently in the womb (Kisilevsky et al., 2009).

The most recent results imply, however, that prenatal learning goes beyond these general preferences. Infants learn specifically about the prosodic patterns of their native language(s). [This suggests] that the earliest, prenatal experience with speech may play a more important role in language development than previously believed.

— Prenatal experience with speech already shapes infants’ linguistic perception abilities. When exposed to the American English /i/ and the Swedish /y/ sounds a few hours after birth, Swedish infants increased their sucking rate to the /i/ sound & American infants to the /y/ sound (Moon et al., 2013). Infants’ preference for the novel/unfamiliar vowel suggests that infants already learned about the vocalic segments of their native language in the womb.

— Preterm infants are on schedule for learning phonotactic regularities (Gonzalez-Gomez & Nazzi, 2012), that is, patterns that pertain to phonemic information not experienced in the womb, but they are delayed compared to their full-term peers in rhythm-based language discrimination (Pena et al., 2010).

Discrimination via Ai Interface

Uber female drivers receive lower scores after rejecting unwanted advances, flirtatious comments, or Facebook friendship requests from male passengers (Lee, 2019).

A gender-neutral spelling like Gabriel may be more beneficial. For instance, Uber drivers are more than twice as likely to cancel rides for passengers with a “black-sounding” name compared to the average (Gee et al., 2020).


Pandey and Caliskan (2021) [N = over 100 Million trips]

Uber drivers charge “a higher price per mile if the pickup or destination points are in majority non-white neighborhoods”

Female voices for assistance/service
Male voices for authority/emergencies

Lydia Manikonda et al., 2016: Tweeting the Mind and Instagramming the Heart — Exploring Differentiated Content Sharing

“Filtered images are 21% more likely to be viewed and 45% more likely to receive comments than unfiltered photos. Filters that impose warm color temperature, boost contrast and increase exposure, are more likely to be noticed” (Bakhshi et al., 2015).

Profile pics facilitate social interaction in the virtual public sphere (Lin & Faste, 2012).

Arthur Rackham (1926) — for The Tempest

Alexa

“The risk of Ai gender prejudices affecting real-world attitudes should not be underestimated either, says Kunze. She gives the example of school children barking orders at girls called Alexa after Amazon launched its home assistant with the same name.”

“The way that these AI systems condition us to behave in regard to gender very much spills over into how people end up interacting with other humans, which is why we make design choices to reinforce good human behavior,” says Kunze.

Cèlia Roig Salvat (2021). Discrimination in artificial intelligence.

“…establishing different periods of maternity and paternity leave is indirectly discriminatory towards women as it burdens them with the care of children.”

Emotional Trauma (Andrew Vachss, 2021)

“With rare exceptions, the pain of emotional abuse outlasts physical.

Emotional abuse conditions the child to expect abuse in later life. Some emotionally abused children are programmed to fail so effectively that a part of their own personality “self-parents,” by belittling and humiliating themselves.

Emotional abuse is a time bomb, but its effects are rarely visible, because the victims tend to implode, turning the anger against themselves.

A parent’s love is so important to a child that withholding it can cause a failure-to-thrive condition, similar to that of children who have been denied adequate nutrition.

Even the natural solace of siblings is denied to those victims who have been designated as the family’s “target child.” The other children are quick to imitate their parents. Instead of learning the qualities every child will need as an adult — empathy, nurturing and protectiveness — they learn the viciousness of a pecking order. And so the cycle continues.

— — — — —

The primary weapon of emotional abusers is the deliberate infliction of guilt. They use guilt the same way a loan shark uses money: They don’t want the “debt” paid off, because they live quite happily on the interest.

The most damaging mistake an emotional-abuse victim can make is to invest in the “rehabilitation” of the abuser. Too often this becomes still another wish that didn’t come true — and the sufferer will conclude that they deserve no better result.

When your self-concept has been shredded, when you have been deeply injured and made to feel the injury was all your fault, when you look for approval to those who cannot or will not provide it — you play the role assigned to you by your abusers.

It’s time to stop playing that role.

And when someone is outwardly successful in most areas of life, who looks within to see the hidden wounds?

Definition of Emotional Abuse

Emotional abuse is the systematic diminishment of another. It may be intentional or subconscious (or both), but it is always a course of conduct, not a single event.

It is designed to reduce a child’s self-concept to the point where the victim considers himself unworthy — unworthy of respect, unworthy of friendship, unworthy of the natural birthright of all children: love and protection.

Emotional abuse can be active. Vicious belittling: “You’ll never be the success your brother was.” Deliberate humiliation: “You’re so stupid. I’m ashamed you’re my child.” It also can be passive, the emotional equivalent of child neglect.

And it may be a combination of the two, which increases the negative effects geometrically.”

From Audrey to Alexa — Thai Chu, 2020 (5.12)

What’s interesting about Audrey is that the device’s accuracy increases as it works with familiar voices. So in a way, Audrey can learn people’s voices and improve itself, which, I think, is an outstanding achievement considering the technology at that time.

See also


https://wonderfulengineering.com/voder-was-the-worlds-first-machine-that-could-talk/

VODER: The First Machine That Could Talk — Saaqib Ahmad Malik, 2019 (6.1)

“The first device recognized as a true speech synthesizer, however, was the VODER. VODER was the abbreviated form of Voice Operating Demonstrator, and it was developed by Homer Dudley of Bell Labs in the 1930s. The machine was complicated, to say the least. It featured fourteen keys similar to piano keys, a bar that could be controlled by the wrist, and a foot pedal, all manipulated by the operator to make the machine speak. The synthetic sound created by VODER was quite robotic and, as Lisa Guernsey of the New York Times put it, sounded like ‘an alien speaking under water.’

Ben Fino-Radin of Rhizome writes, ‘Once the true voice of the machine had entered the public consciousness, its place and form in fictional portrayal would never be the same. After that day in 1939, we knew specifically how inhuman machined speech should sound.’

Mrs Helen Harper was the central operator of the VODER during its demonstration at the 1939 New York World’s Fair.”


History of Voice Assistants — Ava Mutchler, 2017 (7.14)

https://voicebot.ai/2017/07/14/timeline-voice-assistants-short-history-voice-revolution/ (Timeline Through 2021 — https://voicebot.ai/voice-assistant-history-timeline/)
