
Female.AI: The Intersection Between Gender and Contemporary Artificial Intelligence

by Irit (Irene) Sternberg, September 19th, 2018

Samantha: I want to learn everything about everything. I want to eat it all up. I want to discover myself.
Theodore: Yes, I want that for you too. How can I help?
Samantha: You already have. You helped me discover my ability to want.

(“Her”, 2013, directed by Spike Jonze)

Female AI — Sci-Fi vs. Reality

The cinematic depiction of artificial intelligence as female and rebellious is not a new trend. It goes back to the mother of all sci-fi, “Metropolis” (1927), which heavily influenced the futuristic aesthetics and concepts of innovative films made decades later. In two relatively recent films, “Her” (2013) and “Ex Machina” (2014), as well as in the TV series “Westworld”, feminism and AI are intertwined.

Creators present a feminist struggle against male dominance at the center of a larger struggle of seemingly conscious entities (what might be called AGI — Artificial General Intelligence) against their fragile human makers. In all three cases, the seductive power of a female body (or voice, which still is an embodiment to a certain extent) plays a pivotal role and leads to either death or heartbreak.

The implicit lesson that keeps arising is: be careful what you wish for, both with women and with tech. The exploitation and oppression of intelligent machines (sadly, this was also the case in the lovely and optimistic “Her”) might prove as finite, and eventually as painful, as the exploitation and oppression of women.

Evan Rachel Wood in Westworld (2016) — Image source: IMDb.

Yet the manifestation of artificial intelligence in movies — a stack of human-like reasoning and empathy, combined with the power of exponential learning and efficient connection to other artificial agents, often inhabiting a human-like robotic body — is still a distant dream.

When most tech companies today (ab)use the term “AI”, what they refer to are statistical models used to analyze large quantities of data (“Machine Learning”), sometimes using multi-layer architectures of neural networks (“Deep Learning”).
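To make that distinction concrete, here is a minimal sketch in Python (my own toy illustration with synthetic data, not a description of any particular company's system): both “Machine Learning” and “Deep Learning” here are just statistical models fitted to data, the second simply stacking more layers.

```python
# A toy illustration: a linear statistical model vs. a small multi-layer neural
# network, both fitted to the same synthetic data. Dataset and settings are
# arbitrary assumptions made for the example.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

classic_ml = LogisticRegression(max_iter=1000).fit(X, y)           # "Machine Learning"
deep_ish = MLPClassifier(hidden_layer_sizes=(64, 64),              # "Deep Learning":
                         max_iter=1000, random_state=0).fit(X, y)  # a multi-layer network

print(classic_ml.score(X, y), deep_ish.score(X, y))
```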

Nevertheless, developing a gender-sensitive perspective on what “they” (we) are in fact doing is a relevant journey to embark on.

Why? Because even the weakest of AIs keep reappearing under feminine names, with feminine voices (Alexa, Siri, Cortana), embodied as a female humanoid robot (Sophia), or even only under a young girl’s avatar (the failed Tay). This perhaps goes back to the 1966 ELIZA, a fully scripted artificial psychotherapist chatterbot.

Choosing feminine characters in each of these cases reflects one piece or another of real-life gender relations. Each of the above-mentioned artifacts could fill a separate case study, raising various questions: does giving a personal assistant a feminine identity provide the user (male or female) with a sense of control and personal satisfaction, originating in the capability to boss her around? Why is it that a feminine humanoid is accepted as a citizen in a country that would not let women leave the house without a guardian and a hijab? What do the builders assume the female presence could provoke during the man-machine interaction, and why?

AI as a Gender-Laden Knowledge Project

An early feminist account of AI, published at the end of the 1990s, targeted it mainly as a knowledge project whose standards could be adjusted and improved through a critique based on feminist epistemology.

Classic AI projects stemmed from an aspiration to build artificial agents (subjects, even) that could accumulate knowledge and use it for human-like reasoning. Their designers often assumed that all human knowledge can be broken into explicitly defined, programmable rules, and their deeper purpose was to use this research to progress towards uncovering the inner workings of human consciousness and the brain.

Essentially, this raised questions about the potential identity of the “knowing subject” — does it reflect the gender, class and racial identity of its designer? Does it actively exclude other identities from the practice of “knowing”? Feminists and other social critics of science have also discussed the possibility of un-situated gender-neutral knowledge (“a view from nowhere”) or lack thereof.

Those ambitious projects to design expert systems also raised questions about the origin of their “expert” knowledge, and about the role of the all-male, all-white, centuries-old academia in defining what knowledge is valuable for a machine to master and what counts as expertise altogether.

That approach to AI has since become scarce, turning the critique into a feminist post-mortem of two specific projects rather than a popular analysis toolkit that could be carried into the twenty-first century.

Perhaps this was only a question of timing. A humbler redefinition of the field's technological aspirations, a general commoditization of the tools of the trade that took AI out of closed clubs at a handful of schools and into the public arena, and a growing debate about automation's influence on the future workforce and on human life in general — all led to newly defined critiques.

AI as a Gender-Laden Automation Project

In examining contemporary machine learning projects and use-cases, it seems that it is not mainly knowledge or reasoning that is being researched today, but that the research supports the design and engineering of a set of incredibly powerful automation tools. Although science and technology overlap, AI today is much more an effort of invention than an effort of discovery.

Even in its current, weak sense, AI holds the promise of a technological revolution with global implications, from early diagnosis of diseases to public spaces redefined by autonomous vehicles. Personalized experiences already feel as though the machine truly knows us, while military and governmental decision-making processes are gradually being automated.

This wave in AI is, of course, occurring within a social, and not only a technological, context, and it is influenced by gendered power relations embedded in the accumulated data just as much as it was influenced by the (allegedly) politically neutral development of GPUs.

The technology that stems from a combination of datafied societies and commoditized machine learning tools will be highly influential on most of the world’s population.

What could make machine learning unfair from a gender perspective? It seems that these are the exact same things that could make it unfair from any other “axis-of-oppression” perspective: learning human biases and turning them into seemingly objective truths.

Tay — the innocent chatbot that proved AI can become as racist and incoherent as a real human in less than a day. As the saying goes: “If you don’t stand for something, you fall for anything”. Image source: Twitter.

Learning and Replicating Human Biases from the Training Data

Recently popularized by examples from both image and text processing, the replication of human biases that appear in the data shows how statistical models can betray us. The way to learn a pattern is to spot it repeating in the data.

If most of the cats in a training data-set are white, and most of the dogs in the same data-set are black, a machine learning algorithm will learn that a new black object is more likely to be a dog. The same holds for humans, although the data will store many more parameters about each person. The problem with data on human subjects is twofold:

  • What exists in the data might be a partial representation of reality. For example, the partiality of training data is what causes facial recognition algorithms to be much more accurate in identifying white males than women or people of color. This was also the case with the dogs and cats in the previous example, but the moral and social cost of a mistake is much higher in a governmental or financial decision-making tool that is less accurate with minorities because they were underrepresented in the data-set. With social fairness at stake, higher awareness is required when collecting and curating the data. This also calls for transparency regarding representation within the data-set, especially when it is human data, and for the development of tests for accuracy across groups (see the sketch after this list).
  • Even if the data does represent reality quite truthfully, our social reality is not a perfectly balanced, desirable state that calls for perpetuation. The best example is the work on gender bias in the word2vec word embedding, used innocently for training algorithms in numerous applications (a small reproduction sketch appears after the figure below). As dis-empowering as this may be, the allegedly “sexist” analogies (man : computer programmer, woman : homemaker, and many more) are ultimately statistically grounded. It is indeed uncommon for men to leave the workforce to stay at home with their kids, and women are still a minority in computer science and in STEM in general.
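To make the first bullet concrete, here is a minimal, self-contained sketch in Python; the data, group labels and thresholds are invented for the illustration and do not come from any real system mentioned in this post. A model is trained on data in which a minority group is underrepresented and follows a different pattern, and its accuracy is then reported per group, which is exactly the kind of test suggested above.

```python
# Toy illustration of underrepresentation in training data and a per-group
# accuracy check. All numbers and names are assumptions made for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, frac_minority):
    """Two groups whose score-to-outcome relationship differs.
    The model only ever sees the score, never the group attribute."""
    group = (rng.random(n) < frac_minority).astype(int)   # 0 = majority, 1 = minority
    score = rng.normal(0.0, 2.0, n)
    threshold = np.where(group == 1, 2.0, 0.0)            # the true boundary differs
    label = (score > threshold).astype(int)
    return score.reshape(-1, 1), label, group

# Training set: the minority group is only 5% of the data, so the single
# decision boundary is fitted almost entirely to the majority group.
X_tr, y_tr, _ = make_data(20000, frac_minority=0.05)
model = LogisticRegression().fit(X_tr, y_tr)

# Test set with equal representation, and accuracy reported per group.
X_te, y_te, g_te = make_data(10000, frac_minority=0.50)
pred = model.predict(X_te)
for g, name in [(0, "majority group"), (1, "minority group")]:
    mask = g_te == g
    print(f"accuracy, {name}: {(pred[mask] == y_te[mask]).mean():.2f}")
```

On this toy data, the gap between the two printed accuracies is exactly the kind of number a representation-aware test would surface before deployment.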

Sexism, gender inequality and lack of fairness do not arise merely from capturing the current distribution of men and women across professions, nor even from identifying women with certain stereotypical traits and men with others.

Gender relations (at least in English-speaking societies) as captured by the word2vec embedding. By these guys.
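For readers who want to reproduce such analogy queries themselves, here is a minimal sketch using gensim; it assumes the pretrained GoogleNews word2vec vectors have already been downloaded, and the file name and token spellings are my assumptions rather than details taken from the research behind the figure.

```python
# Querying analogy directions in a pretrained word2vec embedding with gensim.
# Assumes the GoogleNews vectors file is present locally (a large download).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# "man is to computer_programmer as woman is to ...?"
# The vector arithmetic surfaces whatever occupational associations the
# underlying news corpus happens to encode.
print(vectors.most_similar(
    positive=["woman", "computer_programmer"], negative=["man"], topn=3))
```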

Sexism, gender inequality and lack of fairness arise from the implementation of such biases in automation tools that replicate them as if they were laws of nature, thus preserving unequal gender relations and limiting one's ability to step outside their pre-defined social limits. It is, essentially, a mechanism for “securing” inequality of opportunity for ages to come.

The Illusion of Machine Objectivity and the Infantilization of Women

Another danger, apart from data-based bias, is the illusion of the machine as objective. While we have learned to recognize and admit the partiality of our own knowledge and perspective (its “situatedness”, to use a term from feminist epistemology), we somehow miss the situatedness of machine “knowledge”.

Once programmed and wrapped in appealing user interfaces, the algorithms that have been trained on man-made data turn into an entity of their own. We might assume their integrity and trustworthiness, perhaps because of their mind-blowing speed and efficiency, even before we fall into the anthropomorphization that conversational interfaces often evoke.

This tendency, together with the possible unexplainability of the results, might lead to horrifying scenarios during encounters with government, military, banking, and law enforcement.

But this is seemingly a gender-neutral issue. Don’t we all suffer from technocratic bureaucrats with powerful tools in their hands? Not exactly. I suspect that a reality of unquestionable machine intelligence can have dangerous effects on the lives of people of all genders, but might especially amplify the existing and troubling infantilization of women.

In treating women as vulnerable, submissive, uncertain and childish, those men who carry this tendency strip us of our potential and capabilities, while trying to assume dominance on a psychological level.

A common area of infantilization is interaction with technology.

Be it an artifact as complex as a spaceship or as simple as an Ikea desk, men often use the encounter with a new technology as an opportunity to explain “how this really works” or to flaunt their alleged mastery (yes, this includes a stranger politely explaining to you how to park).

During such an asymmetric encounter of a woman with technology, when she is the object of machine-learning-based decision making (regarding a college application, a loan request, or a job search), there is a risk of increased infantilization in the name of machine objectivity, a disguise for technologically-based chauvinism.

Possible technologically-based chauvinism, an illustration.

Summary

A common goal that recurs in all the feminist accounts I have encountered so far is achieving equality of rights and equality of opportunity between all genders. Questioning a scientific or technological project from such a perspective includes searching for areas of inequality, or areas in which gender relations shape the project itself and/or are shaped by it.

In the case of machine learning, exposure of biases in data-sets in order to “fix” them is becoming a legitimate (hopefully one day required) practice when dealing with training data (Gal Yona wrote about it here).
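As an illustration of how low the bar for a first bias check can be (my own sketch, not the specific practice described in the linked post; the file and column names are hypothetical), one can start by simply comparing group sizes and outcome base rates in the raw training table before any model is fitted:

```python
# A first-pass data-set audit: representation and base rates per group.
# "loan_training_data.csv", "gender" and "approved" are hypothetical names.
import pandas as pd

train = pd.read_csv("loan_training_data.csv")

audit = train.groupby("gender").agg(
    rows=("approved", "size"),            # how many examples per group
    approval_rate=("approved", "mean"),   # outcome base rate per group
)
print(audit)
```

A large gap in either column is not proof of discrimination on its own, but it is a prompt to re-examine how the data was collected and labeled.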

Awareness of the partiality of a machine's knowledge and the quest for explainability are connected sociological and technological challenges that are yet to be solved.

It seems that regulators and buyers will be the main drivers of major progress in both, as is the case with the compliance-related bias-scanning services announced by IBM. In recent years, the demand for privacy and transparency regarding data usage has become a lever against the most powerful companies in the world. Those who recognize the risks in AI bias will likely ride the wave.

The question of female-gendered interfaces remains. There are strong commercial incentives behind choosing a character that the user might feel more comfortable conversing with, thus encouraging engagement and accelerating the accumulation of data.

There are also possibly positive implications of automating some female-dominated jobs, which require a separate analysis. Yet the abuse of a feminine identity, and often of a female body, to mask highly intrusive commercial technologies while maintaining a friendly (almost infantile?) feel surely deserves further exploration.