Artificial intelligence (AI) is currently playing a central role in the digitization and modernization strategies of public administrations and companies in Europe, the United States and China. The potential improvements and efficiencies that the incorporation of AI can offer to strategic sectors in different countries have made it indispensable in a new era of technological transformation. And while no one wants to be left behind, the main players in this new digital age have from the start approached these technologies in significantly different ways.
While the United States and China have already adopted AI as one more component of their geopolitical strategies, the European Union (EU) is positioning itself as a world leader in its ethical use. According to the EU, to be considered ethical, any AI technology used on its territory must guarantee respect for the fundamental rights of EU citizens. In this way, the EU hopes to avoid the potential harm that the misuse of AI can cause to its citizens and to find solutions to the main ethical concerns (bias, discrimination, algorithmic opacity, lack of transparency, privacy issues, technological determinism, etc.) that these emerging technologies bring with them.
But despite the best efforts of the EU and others to mitigate the adverse effects of AI, some of the inherent flaws in the technology have yet to be properly addressed. One of these flaws is gender inequality.
Prejudices and habits integrated from the design phase
Since technology is a socially and culturally conditioned human construct, all biases, habits and ideas that are not regularly and rigorously examined are destined to find their way into the design and use of new technologies. AI is no exception: if it is not approached from a gender perspective capable of taking these circumstances into account, analyzing the different aspects involved and correcting them if necessary, all the prejudices, biases and discrimination present in our society are likely to be reproduced and even amplified.
Gender bias has been present in AI since its inception. This is partly due to the fact that for decades it was almost exclusively the domain of men. This is illustrated by the choice of the term “intelligence” to designate this new group of technologies.
Although the term is presented as universal, “intelligence” in AI has actually always referred to the reproduction of human capacities associated with logical-mathematical thinking and, therefore, traditional male rationality.
At the same time, other qualities such as feelings and attentiveness, historically attributed to women, have been excluded from its scope. Yet as long as AI could replicate the sort of skills traditionally associated with male thinking, that was enough to earn it the “intelligent” label. This does not mean that women are not capable of this type of intelligence, far from it. Rather, it sheds light on how certain qualities traditionally associated with masculinity are immediately equated with universal intelligence – without even seriously considering whether a machine that is only capable of calculating data can really be considered intelligent.
At the same time, the very impossibility of reproducing characteristics such as sensitivity, feeling and intuition, until recently relegated to the background because of their association with femininity, has led to a new appreciation of this type of intelligence. These traits are increasingly seen as unique and defining features of human beings, even as machines have proven capable of replicating logical thought. Whether logical thinking can really be separated from feeling and intuition is another matter of debate.
Biases in the data that fuel AI
Along with the gender biases that have been present in AI since its inception, the data that powers it and the algorithms that drive its operation present their own set of issues that negatively impact women in particular. One reason is that the data typically used for AI is obtained from the internet or from databases where men tend to be overrepresented.
Worldwide, 55% of men have access to the internet compared with 48% of women, and the gap is much greater in regions where equality remains a distant reality. In Africa, only 20% of women have internet access, compared to 37% of men. This phenomenon is known as the digital gender divide.
This divide makes women’s real lives less visible, while their online representations tend to be more stereotypical and presented through a very masculinized filter. Various studies dealing with this problem have revealed that women are frequently portrayed on the internet as heavily infantilized, sexualized and insecure. Particularly well-known examples are Microsoft’s AI chatbot Tay, which developed xenophobic, sexist and homophobic behavior in its interactions with Twitter users, and Amazon, which in 2015 discovered that its AI system used for staff selection discriminated against women.
Such complex issues require a multifaceted and holistic response that addresses their root causes. In the short term, databases that power AI need to be audited to ensure that women are represented fairly and that the data is free from gender or other biases.
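One simple starting point for such an audit is to measure how each gender is represented in the data before it is used for training. The sketch below is a minimal, hypothetical illustration; the field name `gender`, the record structure and the sample data are all assumptions, not a real auditing tool.

```python
from collections import Counter

def gender_representation(records, field="gender"):
    """Count how often each gender label appears in a dataset.

    `records` is a list of dicts; `field` names the (hypothetical)
    gender attribute. Returns each label's share of the total,
    so over- or under-representation is immediately visible.
    """
    counts = Counter(r.get(field, "unknown") for r in records)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Toy example: a dataset where men are overrepresented.
sample = [{"gender": "male"}] * 70 + [{"gender": "female"}] * 30
shares = gender_representation(sample)
print(shares)  # {'male': 0.7, 'female': 0.3}
```

A real audit would go further, checking not just headcounts but also how each group is described and labeled in the data.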
In the longer term, ensuring women have equal access to the internet, digital services and e-government is crucial to bridging the gender digital divide.
This requires promoting women’s education at all levels, but especially in the area of digital skills, increasing the presence of women in key political decision-making and relevant areas of the technology sector, and actively fighting against sexist stereotypes and the objectification of women. Particular attention must be paid to the particular vulnerability and discrimination faced by racialized, non-Western, non-urban and precarious women.
On the other hand, if algorithms are not designed to take gender perspectives into account, they risk reproducing trends that negatively impact women, just as poorly designed public policies do. Mechanisms to put women on an equal footing (such as affirmative action quotas) should be required for algorithms, just as they are for other policy spaces.
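One common way to check whether an algorithm treats groups on an equal footing is demographic parity: comparing selection rates across groups. The sketch below is a hypothetical illustration of that check, not a prescribed method; the function name, group labels and toy decisions are all assumptions.

```python
def demographic_parity_gap(outcomes):
    """Difference in positive-outcome rates between groups.

    `outcomes` maps a group name to a list of 0/1 decisions made by
    some (hypothetical) selection algorithm. A gap near 0 means the
    algorithm selects both groups at similar rates.
    """
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    return max(rates.values()) - min(rates.values()), rates

# Toy decisions from an imagined hiring algorithm.
gap, rates = demographic_parity_gap({
    "women": [1, 0, 0, 0],   # 25% selected
    "men":   [1, 1, 1, 0],   # 75% selected
})
print(round(gap, 2))  # 0.5
```

A large gap like this would flag the system for review; quota-style mechanisms would then constrain the algorithm so that selection rates stay within an acceptable range.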
Ending the feminization of assistive technologies
Finally, the inequality between men and women in the field of AI is also reflected in the very physical characteristics of its hardware. Various studies have highlighted the overwhelming presence of feminine traits in chatbots and assistive AI technologies. The use of voices and names traditionally associated with women, such as Alexa and Siri among others, replicates the long-standing association between women and servitude. However, this transposition of roles between the analog and digital spheres goes beyond characteristics such as name or voice. Robots with humanoid characteristics (which are becoming increasingly common) are often distinctly feminine in their appearance.
The feminine appearance of these robots most often reflects established beauty canons where non-normative bodies and racialized women, among others, have no place.
This can lead to even more complex and harmful issues, such as sex robots that perpetuate violence against women. All these questions highlight the pervasive discrimination suffered by women and reproduced in new technologies. If our goal is to create egalitarian societies, all countries, companies and public administrations must approach the study and use of AI from a gender perspective before deploying these technologies.
This article has been translated from Spanish.