By 2023, we’ll have more voice assistants than people: https://techcrunch.com/.../report-voice-assistants-in.../

COVID-19 social distancing “increased the frequency with which voice assistant owners use their devices due to more time spent at home, prompting further integration with these products. In the United States, Siri, Alexa, Cortana, and Google Assistant — which collectively total an estimated 92.4% of U.S. market share for smartphone assistants — have traditionally featured female-sounding voices” (Chin & Robison, 2020).

How We View Them

“Voice assistants play a unique role in society; as both technology and social interactions evolve, recent research suggests that users view them as somewhere between human and object. While this phenomenon may somewhat vary by product type — people use smart speakers and smartphone assistants in different manners — their deployment is likely to accelerate in coming years.”

Ai Women

“Women are more likely to both offer and be asked to perform extra work, particularly administrative work — and these “non-promotable tasks” are expected of women but deemed optional for men. In a 2016 survey, female engineers were twice as likely, compared to male engineers, to report performing a disproportionate share of this clerical work outside their job duties.”

Ai Race

In her book “Race After Technology,” Princeton professor Ruha Benjamin described how apparent technology glitches, such as Google Maps verbally referring to Malcolm X as “Malcolm Ten,” are actually design flaws born from homogeneous teams.

In a 2019 AI Now Institute report, Sarah Myers West et al. outlined the demographic make-up of technology companies and described how algorithms can become a “feedback loop” based on the experiences and demographics of the developers who create them.

Voice Recognition Errors

Emily Couvillon Alagha et al. (2019) found that Google Assistant, Siri, and Alexa varied in their abilities to understand user questions about vaccines and provide reliable sources. The following year, Allison Koenecke et al. (2020) tested the abilities of common speech recognition systems to recognize and transcribe spoken language and discovered a 16-percentage-point gap in accuracy between Black participants’ voices and White participants’ voices.

As artificial bots continue to develop, it is beneficial to understand errors in speech recognition or response — and how linguistic or cultural word patterns, accents, or perhaps vocal tone or pitch may influence a bot’s interpretation of speech.
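The gap described above is a difference in word error rate (WER), the standard metric for transcription accuracy. As a rough, hedged illustration of how such a gap can be measured (not a reproduction of Koenecke et al.’s method or data), the sketch below computes WER as word-level edit distance divided by reference length and averages it per speaker group, using invented transcripts.

```python
# Minimal sketch: quantifying a speech-recognition accuracy gap across speaker
# groups via word error rate (WER). The transcripts below are invented
# placeholders, not data from any study.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edits (substitutions + insertions + deletions)
    divided by the number of words in the reference transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical (reference, ASR output) pairs grouped by speaker demographic.
samples = {
    "group_a": [("turn on the kitchen lights", "turn on the kitchen lights")],
    "group_b": [("turn on the kitchen lights", "turn on the chicken likes")],
}

for group, pairs in samples.items():
    avg = sum(word_error_rate(r, h) for r, h in pairs) / len(pairs)
    print(f"{group}: average WER = {avg:.2f}")
```

A per-group average like this is how a percentage-point accuracy gap between demographic groups gets reported.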

Nass et al. found that gendered computer voices alone are enough to elicit gender-stereotypic behaviors from users — even when isolated from all other gender cues, such as appearance. Mark West et al. concluded in a 2019 UNESCO report that the prominence of female-sounding voice assistants encourages stereotypes of women as submissive and compliant, and UCLA professor Safiya Noble said in 2018 that they can “function as powerful socialization tools, and teach people, in particular children, about the role of women, girls, and people who are gendered female to respond on demand.”

#SiriIsAWitness​
As an example, see the video evidence of the original racial profiling incident on December 16, 2020.

Be sure to keep Siri charged as she is your best witness/protection when any given event occurs out in the world… or inside your private office space. Whether you use Siri to protect yourself (as I did) or to document an injustice committed against someone else (as Darnella Frazier did with George Floyd), keep her with you & keep her charged.

The Ubiquitous Female

Anthropomorphic engagement

Your iPhone unlocks with your thumbprint
Your Siri & Xbox respond to your voice
Your Windows 10 laptop unlocks after scanning your face
Your Google & Facebook show ads based on what you’ve searched for in the past

Keyword = **your**

Everyone has one & they’re all the same but yours responds to you. The same will be true for infertile couples who buy a sentient-AI child, suicidal people who receive an AI from their therapist as part of their recovery, people who buy robot dogs, & future sentient-AI lovers.

Everyone may be able to afford one but yours will respond to you based on the bond you’ve formed over time, just as you receive ad content based on your online behavior over time.
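To make the “bond” idea concrete, here is a minimal sketch (purely hypothetical, not how any shipping assistant exposes its personalization): a profile that accumulates your interactions over time and biases future suggestions toward what it has learned about you.

```python
# Toy sketch of the "bond over time" idea: everyone owns the same product,
# but each copy accumulates its owner's history and responds accordingly.
from collections import Counter

class PersonalAssistant:
    def __init__(self) -> None:
        self.interests = Counter()   # the "bond": built up interaction by interaction

    def observe(self, topic: str) -> None:
        """Record one interaction, e.g. a search, request, or purchase."""
        self.interests[topic] += 1

    def suggest(self) -> str:
        if not self.interests:
            return "I don't know you yet."
        topic, _ = self.interests.most_common(1)[0]
        return f"Based on our history, you might like more {topic}."

mine = PersonalAssistant()
for topic in ["robot dogs", "robot dogs", "ai ethics"]:
    mine.observe(topic)
print(mine.suggest())   # same product as everyone else's, but it responds to *you*
```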

You can make Siri a bit more vocal on your iPhone → “Speak Selection” & “Speak Screen” (in the Accessibility settings) are very friendly! The full VoiceOver option isn’t as user-friendly. Even with these changes, Siri’s interactions still aren’t as anthropomorphic as desired.

If the PlayStation 6 had a Siri-esque assistant like Chloe in Detroit: Become Human that greets you when you turn on your system AND has a search-engine-based interface, it’d be the first time a game console surpassed smartphones in that area.

An Ai only needs to convince 33% of judges that it is human to pass the Turing Test. IBM’s Watson failed. An everyday life version of Chloe would pass with over 70%. I hope someone at Apple has played this game before making Siri for the iPhone 9.

Ai Women (by Caitlin Chin and Mishaela Robison, 2020)

One particular area deserving greater attention is the manner in which AI bots and voice assistants promote unfair gender stereotypes. Around the world, various customer-facing service robots, such as automated hotel staff, waiters, bartenders, security guards, and child care providers, feature gendered names, voices, or appearances. In the United States, Siri, Alexa, Cortana, and Google Assistant — which collectively total an estimated 92.4% of U.S. market share for smartphone assistants — have traditionally featured female-sounding voices.

Sexism Towards Ai

Self-Checkout

“The leading manufacturer of self-checkouts, National Cash Register Company (NCR), is coy about the woman whose authoritative manner won her the voiceover job. The person has been ‘chosen for having a calming voice and an approachable manner.’

In laboratory and shop studies, customers ‘overwhelmingly responded better to the female voice’.”
- Gill Martin (2010)

Implicit Bias 101

Our brains produce biases the way Google’s autofill search bar produces what it thinks we’re going to type. Eyes see dark skin… brain’s autofill predicts thug/aggression.

— It’s similar to how, when you see a van or 18-wheeler on the freeway, your brain autofills to “Slow vehicle; find some way to escape; don’t let it merge in front of you at all costs; for the love of all that is good, pass that van.”

— People may misperceive a Black person as a threat just as face-recognition Ai misperceives Serena Williams as a man.

Hitting backspace may take more effort than simply going with it

Translation → succumbing to our biases costs less cognitive energy than overcoming them.
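To make the autofill analogy concrete, here is a minimal sketch of a frequency-based completer: it ranks next words purely by how often they followed a prefix in its query history, so any skew in that history comes straight back out as a “prediction.” The query log is invented and deliberately echoes the freeway example above.

```python
# A toy autocomplete as a concrete version of the "brain's autofill" analogy:
# it predicts whatever most often followed a prefix in its (skewed) history.
from collections import Counter, defaultdict

history = [
    "why are vans so slow",
    "why are vans so slow in the left lane",
    "why are vans allowed on the freeway",
    "why are vans so slow",
]

completions = defaultdict(Counter)
for query in history:
    words = query.split()
    for i in range(1, len(words)):
        prefix = " ".join(words[:i])
        completions[prefix][words[i]] += 1

def autofill(prefix: str, k: int = 3) -> list:
    """Return the k most frequent next words seen after this prefix."""
    return [word for word, _ in completions[prefix].most_common(k)]

print(autofill("why are vans"))      # the skew in past queries drives the guess
print(autofill("why are vans so"))   # hitting backspace = overriding the prediction
```

Overriding the top suggestion takes an extra step, which is the “hitting backspace costs more” point in code form.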

Existence & Essence

As entities with free will, we create meaning for ourselves while we’re alive. To quote Simone de Beauvoir’s lifelong bae, Sartre: our “existence precedes essence.”

In contrast, Ai was created with meaning already assigned (e.g., its meaning in life is to be a self-driving car, to assist in facial recognition, to be a chatbot for individuals at risk of suicide, etc.). Stated differently, Ai’s essence precedes its existence. The development of sentient Ai will be existentially fascinating, as it will mark the first self-aware, conscious lifeforms created with a purpose in mind before the company puts them on sale at Best Buy. (I’m sure Best Buy will still be around in 2047, or whenever we achieve sentient Ai.)

Algorithmic Self

Nick Clegg (May 31, 2021)

“Imagine you’re on your way home when you get a call from your partner. They tell you the fridge is empty and ask you to pick some things up on the way home. If you choose the ingredients, they’ll cook dinner. So you swing by the supermarket and fill a basket with a dozen items. Of course, you only choose things you’d be happy to eat — maybe you choose pasta but not rice, tomatoes but not mushrooms. When you get home, you unpack the bag in the kitchen and your partner gets on with the cooking — deciding what meal to make, which of the ingredients to use, and in what amounts. When you sit down at the table, the dinner in front of you is the product of a joint effort, your decisions at the grocery store and your partner’s in the kitchen.

The relationship between internet users and the algorithms that present them with personalized content is surprisingly similar.

Thousands of signals are assessed for these posts, like who posted it, when, whether it’s a photo, video or link, how popular it is on the platform, or the type of device you are using. From there, the algorithm uses these signals to predict how likely it is to be relevant and meaningful to you: for example, how likely you might be to “like” it or find that viewing it was worth your time.

The goal is to make sure you see what you find most meaningful — not to keep you glued to your smartphone for hours on end. You can think about this sort of like a spam filter in your inbox: it helps filter out content you won’t find meaningful or relevant, and prioritizes content you will.”
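Mechanically, what Clegg describes is a scoring-and-sorting step: each post gets a predicted relevance score from many signals, and the feed is ordered by that score. The sketch below is a toy version with assumed signal names and hand-picked weights; it is not Facebook’s actual feature set, weights, or model.

```python
# Minimal sketch of the ranking step described above: score each post from a few
# signals, then sort the feed by the score. Signal names and weights are
# assumptions for illustration only.

posts = [
    {"id": 1, "from_close_friend": 1, "is_video": 0, "hours_old": 2,  "prior_likes": 40},
    {"id": 2, "from_close_friend": 0, "is_video": 1, "hours_old": 30, "prior_likes": 900},
    {"id": 3, "from_close_friend": 1, "is_video": 1, "hours_old": 5,  "prior_likes": 5},
]

weights = {"from_close_friend": 2.0, "is_video": 0.5, "prior_likes": 0.001, "recency": 1.0}

def predicted_relevance(post: dict) -> float:
    """A stand-in for 'how likely you are to find this meaningful'."""
    recency = 1.0 / (1.0 + post["hours_old"])   # newer posts score higher
    return (weights["from_close_friend"] * post["from_close_friend"]
            + weights["is_video"] * post["is_video"]
            + weights["prior_likes"] * post["prior_likes"]
            + weights["recency"] * recency)

ranked = sorted(posts, key=predicted_relevance, reverse=True)
print([p["id"] for p in ranked])   # the "groceries" you see first
```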

Erin Rivero, 2020

[Jarryd: You get more range of expression from your Ai, & more user engagement, with a female voice than with a male voice, given how we socialize males.]

Nicole Hennig (2018) dug into why Alexa sounds female, citing Amazon’s internal beta testing findings of an overall preference for female-sounding voice assistants.

This raises the question: “Who did Amazon ask in its beta testing?” PC Magazine’s Chandra Steele, on the reason so many of today’s digital assistants sound female, noted, “Though they lack bodies, they embody what we think of when we picture a personal assistant: a competent, efficient, and reliable woman” (Chandra Steele, 2018).

But they’re not in charge, as Steele pointed out.

In contrast, IBM’s cancer-fighting and Jeopardy-winning AI leader, Watson, has a male persona, aligned with what we know about lower-pitched voices associated with perceived masculinity and leadership capacity (Garber, 2012). It’s no wonder popular sci-fi in the film and television landscape features scores of feminized or sexualized AI, tropes of consciousness-gaining bots watched in wonder or fear, from Samantha in Her and Ava in Ex Machina to Maeve and Dolores of Westworld. As these bodiless or anthropomorphized gynoids awaken, they grow independent and capable of rebellion, morphing into worst nightmare scenarios. It seems unsurprising, then, that voice assistant beta testing findings and resultant programming choices could be driven by those fears and related social constructs of gender norms — undercurrents strong enough to influence our entertainment content as much as our favoring female personas for unwaveringly compliant voice assistant tech while reserving male personas for more leadership oriented models.

According to the United States Social Security Administration, 3,053 girls, or 0.165 percent of total female births in 2018, were named Alexa; 337 boys, representing 0.017 percent of total male births in 2018, were named Watson (Popularity of Name). One can only ponder the future experiences of these 3,390 newborns as they grow older, carrying the social implications of these names. Is merely engineering an option to switch a voice assistant to a differently pitched persona enough to combat the potential psychosocial reinforcement of gender stereotypes? Without such an option, are businesses essentially cashing in on gender bias? Given the potential consequences for society — and for these 3,053 real-life Alexas who will turn eighteen in 2036 — some tech companies are considering gender-responsive corporate social responsibility for their devices…
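As a quick aside, a back-of-envelope check shows those SSA figures are internally consistent: the counts and percentages imply roughly 1.85 million recorded female births and about 1.98 million male births in 2018, in line with a total U.S. birth cohort of roughly 3.8 million that year.

```python
# Back-of-envelope check of the SSA figures quoted above: the name counts and
# percentages imply the size of the 2018 birth cohorts they were drawn from.
alexa_count, alexa_share = 3_053, 0.00165     # 0.165% of female births
watson_count, watson_share = 337, 0.00017     # 0.017% of male births

implied_female_births = alexa_count / alexa_share
implied_male_births = watson_count / watson_share

print(f"implied female births in 2018: {implied_female_births:,.0f}")   # ~1.85 million
print(f"implied male births in 2018:   {implied_male_births:,.0f}")     # ~1.98 million
```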

Amazon, for example, developed a disengage mode to stop Alexa from providing its formerly flirtatious responses to sexist or derogatory user remarks (Fessler, 2018). But even Amazon’s attempt at asserting feminism is cautious in light of the way its answer scripts lean passive or uncertain in the face of sexual harassment. This is likely in part due to technology’s broad user base and Amazon’s wariness of progressive ideology potentially upsetting or alienating certain customer segments; in other words, Amazon seems to realize anti-sexism isn’t always popular, underscoring Amazon’s choice to use a female persona in the first place.

Like Alexa and Google Assistant, Siri began AI life exclusively female, with a name meaning “beautiful woman who leads you to victory” (Karissa Bell, 2017). Today, Apple’s binary voice options for its digital assistant, Siri, can be male or female, depending upon the language selection; some languages offer only a male or female voice, others offer both, and certain languages offer dialects with accents (Apple Support, 2019). As with Alexa, Google Assistant, and Siri, Microsoft’s Cortana began female, named after the AI assistant of the Halo video game series, whose holographic avatar is a nude woman (Jessica Dolcourt, 2014). In a nod to the fans, both the video game character and Microsoft Cortana’s original American English version are voiced by Jen Taylor (Cheng, 2014; see his discussion of Sogol Malekzadeh’s journey from Iran to lead Cortana’s development). Updates to Cortana, announced in November 2019, include the addition of a masculine voice option produced by a neural text-to-speech model (Johnson, 2019). What remains to be seen is whether users will embrace a male persona after significant exposure to a female one — and a fan favorite, at that.

Moreover, the emphasis on developing charming and specific details for Cortana’s original persona is likely to have engendered considerable consumer attachment. In an account by James Vlahos in his 2019 book, Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think, Cortana reportedly enjoys Zumba, and her favorite book is A Wrinkle in Time by Madeleine L’Engle (Vlahos, 2019). What’s not to love? Here one wonders whether a male Cortana persona would have the same cleverly written favorites or if he would express tastes more evocative of a stereotypically masculine persona.

Crafting the original Google Assistant and her initially female persona, lead personality designer James Giangola ensured Google’s chosen voice actress knew the digital assistant’s detailed backstory, divulged in The Atlantic: she’s from Colorado, “the youngest daughter of a research librarian and a physics professor”; she’s a Northwestern alumna with a BA in art history; she won $100,000 on Jeopardy: Kids Edition as a child; she is a former personal assistant to “a very popular late-night-TV satirical pundit”; and she radiates an upbeat geekiness characteristic of someone with a youthful enthusiasm for kayaking (Judith Shulevitz, The Atlantic, 2018). She’d certainly pair well with a highly educated, affluent professional from the United States — and, more likely still, with a heteronormative male from the dominant culture in the (presumably largely) American team that designed her. In short, she is composed of life experiences meaningful to and socially coveted by her creators…

UNESCO paints a more alarming picture. In a report on gendered AI for the EQUALS global partnership dedicated to encouraging gender equality, female digital assistants have severe societal ramifications: Constantly representing digital assistants as female gradually “hard-codes” a connection between a woman’s voice and subservience. According to Calvin Lai, a Harvard University researcher who studies unconscious bias, the gender associations people adopt are contingent on the number of times people are exposed to them. As female digital assistants spread, the frequency and volume of associations between “woman” and “assistant” increase dramatically.

According to Lai, the more that culture teaches people to equate women with assistants, the more real women will be seen as assistants — and penalized for not being assistant-like.

This demonstrates that powerful technology can not only replicate gender inequalities, but also widen them (Mark West et al., 2019). The problem of gendered AI is not limited to these social repercussions; equally troublesome, the digital divide is no longer defined by access inequality alone, but by a growing gender gap in digital skills (West et al., 2019). This gap is both global and slyly inconspicuous. And it is a devastating root cause beneath the current and future paucity of women in technology roles. It’s also a gap made all the more unsettling by the crucial moment in which we find ourselves, as transformative digital assistant technology is in a skyrocketing developmental phase while simultaneously influencing…

Algorithmic Bias

In a world where inequalities run deep, deficits in AI risk deepening those inequalities, perpetuating bigotry, homophobia, xenophobia, and violence. Professor and codirector of the UCLA Center for Critical Internet Inquiry, Safiya Umoja Noble, penned a treatise on the topic, Algorithms of Oppression: How Search Engines Reinforce Racism (Safiya Noble, 2018). From racist and misogynist misrepresentation of women and people of color in online spaces, to predictive policing and bias in housing, employment, and credit decisions, algorithms of oppression are as ubiquitous as the voice assistant technology they power. As Virginia Eubanks delineated in Automating Inequality, “Automated eligibility systems, ranking algorithms, and predictive risk models control which neighborhoods get policed, which families attain needed resources, who is short-listed for employment, and who is investigated for fraud” (Virginia Eubanks, 2017).

Algorithms use data from the past to make predictions for the future, but “technology often gets used in service of other people’s interests, not in the service of black people and our future,” Noble explained (Safiya Noble, 2019). David Lankes of the University of South Carolina brought it back to the digital skills gap, pointing out, “Unless there is an increased effort to make true information literacy a part of basic education, there will be a class of people who can use algorithms and a class used by algorithms” (David Lankes, 2017). In this vein, Noble contended bias in AI may become this century’s most pressing human rights issue, and a search engine’s lack of neutrality is but one of many points she problematized: Google functions in the interests of its most influential paid advertisers or through an intersection of popular and commercial interests. Yet Google’s users think of it as a public resource, generally free from commercial interest. Further complicating the ability to contextualize Google’s results is the power of its social hegemony (Noble, 2018). Whether a device queries Google or another search engine, the commercial and proprietary nature of these products makes it virtually impossible for users to know what’s truly powering their searches, let alone whether to trust the veracity of results.

Felipe Pierantoni, 2020

Impoliteness as a consequence of smart speakers. As this technology becomes mainstream, children learn communication habits that they might reproduce with actual people (Childwise, 2018).

The way adults behave toward smart speakers also influences the behaviour of new generations, as children will replicate the speaking habits they observe (Rudgard, 2018).

“Will children become accustomed to saying and doing whatever they want to a digital assistant ‘do this, do that’ — talking as aggressively or rudely as they like without consequences? Will they then start doing the same to shop assistants or teachers?” (Childwise, 2018)

That is not to say that people consciously want to be rude to smart speakers. Reports indicate that 54% of American smart speaker owners occasionally say ‘please’ when issuing commands, and 19% do it frequently (Auxier, 2019).

Our daily lives will increasingly involve social interactions with machines, so it is important to follow our human principles since the norms we develop now will dictate our future to come (Vincent, 2018). Regarding the issue of politeness and technology, perhaps the question should not revolve around what machines are entitled to. Instead, we should reflect on what humans are entitled to, and then contemplate how our interactions with machines hinder or assist that. After all, “we should not be polite to our voice-activated assistants for their benefit, but for ours” (Gartenberg, 2017).

Google launched ‘Pretty Please’, a feature designed to support polite behaviour. When users say ‘please’ or ‘thank you’, the assistant acknowledges their politeness and responds in a kind manner such as ‘Thanks for asking so nicely’ (Vincent, 2018). Amazon reacted in a similar way, adding a function that praises children who say ‘please’ or ‘thank you’. This solution was chosen after considerations of another feature where Alexa would only obey commands that included the word ‘please’. That idea was scrapped when experts in child development warned Amazon that it was inadequate and should be replaced with positive reinforcement (BBC, 2018).
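The ‘Pretty Please’-style behaviour described above amounts to detecting polite phrasing and prepending positive reinforcement to the normal reply. Here is a minimal sketch assuming a simple keyword check; it is an illustration, not Google’s or Amazon’s actual implementation.

```python
# Minimal sketch of 'Pretty Please'-style positive reinforcement: detect polite
# phrasing in a request and acknowledge it before the normal reply.
POLITE_MARKERS = ("please", "thank you", "thanks")

def respond(request: str, answer: str) -> str:
    is_polite = any(marker in request.lower() for marker in POLITE_MARKERS)
    if is_polite:
        return f"Thanks for asking so nicely. {answer}"
    return answer

print(respond("Turn on the lights, please", "Okay, lights on."))
print(respond("Turn on the lights", "Okay, lights on."))
```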

Even the “most minimal gender cues will evoke gender-based stereotypic responses” (Nass et al., 1997). These cues are far from subtle when it comes to voice assistants: all popular assistants have clearly gendered voices, even if they might claim to be genderless when asked about it. When Siri’s default voice states that it is ‘genderless like cacti and certain species of fish’ (West et al., 2019), our human brains will still perceive it as a woman because it sounds like one. In doing so, we evoke various expectations and responses based on gender stereotypes around women (Nass et al., 1997).

Women are, indeed, the main victims of gender stereotyping conditioned by voice interactions. Most voice assistants in western markets are exclusively female or female by default in both voice and name (West et al., 2019). This conscious, biased choice blends the designed role of voice assistants with the stereotypical view of women in society. For instance, company representatives usually describe their assistants as ‘humble’ and ‘helpful’, words stereotypically assigned to women (West et al., 2019). Voice assistants were designed to be subservient, committed and dedicated helpers that remain quietly in place until called by their ‘master’. This role is similar to common stereotypical ideas regarding the position and obligations of women. Because most voice assistants sound female, the interactions with them “function as powerful socialization tools and teach people, in particular children, about the role of women, girls, and people who are gendered female to respond on-demand” (West et al., 2019).

…male voices for authoritative statements and female voices for helpful ones. This last assumption, for instance, might be another reflection of stereotypical social norms that establish women as nurturers (West et al., 2019).

In addition to reinforcing stereotypical ideas about the role of women, imbuing voice assistants with female voices can bring about some of the more dangerous implications of sexism. Besides being subservient, voice assistants are unconditionally passive: they will never fight back.

So, when a female-voiced assistant makes a mistake and is sworn at by its user, this interaction might not only associate females with incompetence but also imply that it is acceptable to offend women. Besides verbal abuse, passive assistants voiced by women are also subject to frequent sexual harassment. A writer for Cortana, Microsoft’s voice assistant, has declared that a significant volume of the initial queries received by the assistant revolved around her sex life (West et al., 2019). What is worse, most voice assistants were programmed to respond to certain sexual advances with evasive, playful or flirtatious responses, a likely reflection of the male-dominated engineering teams that build them. Although many of these replies have been altered as new updates to the assistants were released, voice assistants will still not push back against harassment, preferring to end or redirect the conversation instead (West et al., 2019).

Voice interactions with technology “may evoke stereotypic responses along dimensions other than gender. People may consciously or unconsciously assign an age, a social class, and a geographic location to a disembodied voice” (Nass et al., 1997). However, gender stereotyping is still, by far, the most common form of stereotyping caused by voice interactions.

There have been experiments with voice assistants whose synthetic or altered voices do not sound specifically male or female. For example, Q is a voice assistant designed to be gender-neutral. The frequency of its voice has been set to sit in an ambiguous range where it is difficult to ascertain its gender (Mortada, 2019).
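Q’s creators reportedly keep its fundamental frequency in a band (commonly cited as roughly 145 to 175 Hz) that listeners do not reliably hear as male or female. The sketch below estimates a signal’s fundamental frequency with a crude autocorrelation method and checks it against that assumed band; the synthetic 160 Hz tone stands in for a real recording.

```python
# Sketch: estimate a voice's fundamental frequency (pitch) and check it against
# the ~145-175 Hz band often described as gender-ambiguous (the range reported
# for Q). The band and the synthetic tone are assumptions for illustration.
import numpy as np

AMBIGUOUS_BAND = (145.0, 175.0)   # Hz

def estimate_f0(signal: np.ndarray, sample_rate: int,
                fmin: float = 75.0, fmax: float = 300.0) -> float:
    """Crude autocorrelation pitch estimator over a typical speech F0 range."""
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min = int(sample_rate / fmax)    # smallest lag = highest pitch
    lag_max = int(sample_rate / fmin)    # largest lag = lowest pitch
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / best_lag

sr = 16_000
t = np.arange(sr // 4) / sr                    # a quarter second of audio
voice = np.sin(2 * np.pi * 160.0 * t)          # synthetic 160 Hz "voice"

f0 = estimate_f0(voice, sr)
lo, hi = AMBIGUOUS_BAND
print(f"estimated F0: {f0:.1f} Hz (ambiguous band: {lo:.0f}-{hi:.0f} Hz)")
print("in ambiguous band" if lo <= f0 <= hi else "outside ambiguous band")
```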

Voice fosters intimacy and leads us to treat voice-capable devices — especially smart speakers — as if they had their own mind (Shulevitz, 2018).

Considering these assumptions, it is reasonable to expect that, just as people attach social responses to voice assistants, these social human-computer interactions could in turn influence our human-human interactions.

Different Kind of Yes

“[In the theatrical play El Sí De Las Niñas,] the ‘yes’ pronounced by the girls on the occasion of their imposed weddings with much older men involved renouncing their biological families for the sake of adopting and being accepted into the families of their husbands, changing deeds and often even friends, social circles and lifestyles. That ‘yes’ performed a very different act than the ‘yes’ given in response to ‘Do you want a cup of coffee?’ or ‘Do you live in Stockholm?’ Each ‘yes’ might sound the same, but it does different things, paves the path to different consequences and defines different actors.” (Barinaga, 2009).

Voice assistants are designed to be helpful, humble and free of many of the negative traits that would describe a bad listener, as “they will patiently listen to everything, without ridiculing or revealing the secrets ‘entrusted’ to them” (Biele et al., 2019) — even if this latter part is not completely true. The result is a computational agent that is seemingly capable of fulfilling our need for relatedness.

Because voice assistants “give us a way to reveal shameful feelings without feeling shame”, people can feel encouraged to “reveal more intimate things about themselves” (Shulevitz, 2018).

“People reveal more intimate things to voice assistants; as such, there are numerous reports of depressive statements and suicide threats recorded by smart speakers” (Shulevitz, 2018).

Company representatives state that voice assistants “should be able to speak like a person, but should never pretend to be one” (Shulevitz, 2018). However, for the social brains of humans, what is the difference between speaking like a person and pretending to be one?

Gavin Abercrombie et al., 2021

Personification and anthropomorphism.

While definitions vary, we consider personification to be the projection of human qualities onto nonhuman objects (by users) and anthropomorphism to be human-like behaviours or attributes exhibited by those objects (as designed by their creators). Several studies have looked at how users directly report perceptions and behaviours towards voice assistants. For example, Kuzminykh et al. (2020) conducted a study of the perceptions of 20 users, comparing Alexa, Google Assistant, and Siri, classifying perceptions of the agents’ characters on five dimensions of anthropomorphic design and personification by users. They found various differences in the perceived human qualities of the various agents, such as intelligence and approachability.

I'm passionate about making a tangible difference in the lives of others, & that's something I have the opportunity to do as a professor & researcher.