Allison Graham

AI: By Men, For Men (Only)--Part 2

The sheer abyss of whiteness and maleness that is the human face of artificial intelligence is not just about numbers or surface-level representation for the sake of buzzword diversity. It is about power. It affects how AI companies work, what products get built, who those products are designed to serve, and who benefits from their development.

Amazon’s Rekognition facial analysis service failed outright to see dark-skinned women while performing best at detecting light-skinned men (Raji & Buolamwini, 2019). Transgender Uber drivers had their accounts suspended because the facial recognition software tied to those accounts could not identify them as they transitioned (Urbi, 2018). Facebook advertisements for jobs in the lumber industry appeared disproportionately in the sidebars of White male users, while ads for taxi driving appeared more frequently for Black users (Ali et al., 2019). Search engines disproportionately ran ads for arrest records against names racially associated with non-White groups (Sweeney, 2013). Algorithms used to determine patient enrollment in care management programs disproportionately selected White applicants (Obermeyer & Mullainathan, 2019). A database of human faces frequently used to train AI is only 7% Black (Han & Jain, 2014), which helps explain why AI error rates are highest for dark-skinned women. And the vast majority of systems take a binary view of gender, operating under the assumption that machines can “detect” and affirm gender through a commercialized technical lens (Keyes, 2018).


As AI systems play a growing role in our social and political institutions, including education, healthcare, employment, and criminal justice, these oversights have astronomical implications. Algorithms and code are littered with gendered and racialized biases, born of the gendered and racialized biases held by those who created them. While AI and automation have the potential to be consistent and precise, they are just as susceptible to racial socialization as a human infant.


Why Things Aren’t Going to Change

In Discipline and Punish (1995), Michel Foucault takes Jeremy Bentham’s panopticon, an architectural structure of discipline, as a metaphor for the modern exercise of power. Just as the guards and guns of a prison’s towering watchtower are invisible to the inmates below, leaders are placed on a pedestal, ascending so far into an obscuring system of clouds that the plebeians below can no longer see or name those who control these institutions, making leaders untouchable and disconnected. The power of these faceless, nameless, unidentifiable leaders lies in their anonymity: who can name the individual(s) who ultimately control “technology”? In this sense, nobody can be held accountable. The state of artificial intelligence in America is a category five Foucauldian nightmare.


The power elite, the dominant group situated at the top of Foucault’s pyramid, has the ability to make decisions and policies that continue to benefit its own members while disadvantaging subordinate groups, workers and users alike (Michels, 1915). This concentration of power, and the ability of the powerful to remain at the top, is known as the iron law of oligarchy (Michels, 1915). The iron law of oligarchy is a discriminatory practice by those seated at the throne of power, who eschew their responsibility to create policies for the betterment of all and instead ensure that those at the top maintain their position while disadvantaging those lower down on the social stratification pyramid (Michels, 1915). In sum, Foucault’s panopticon allows those who sit at the top to carry out the iron law of oligarchy in peace, obscured behind the curtain of the hidden tower.


Past The Point Of No Return

Solutions. Hope. It appears to be a human instinct to try to tie dismal prospects up in a nice, neat bow called “Recommendations” or “Future Research” or “Implications.” And people have tried. West et al. (2019) offer twelve recommendations for improving workplace diversity and addressing bias and discrimination in AI systems, ranging from ending pay disparities, to structured systems for hiring and retaining underrepresented groups, to pre-release trials and ongoing monitoring to test AI systems for bias. Jackson-High (2018) recommends re-evaluating the school-to-tech pipeline in another decade.


But I am not about that. Yes, hope is an act of political resistance. But in this case, I choose not to be hopeful. I choose to be painfully practical, distressingly dismal, and rebelliously real. The system is in place, swaddled in fortresses of armor whose sole function is to maintain the status quo. Those who have the power to implement change see no need for such an evolution and are unwilling to commit more than lip service to a systemic overhaul, because it would implicate them as part of what must be dislodged. Moreover, the systems of automation are already in place, running of their own accord and often outstripping their human creators. Just as science fiction has fantasized for generations, machines are now so advanced that, fundamentally, there is no turning back, no off switch, no drawing board to return to. More than half a century of massive technological growth in the arena of artificial intelligence has been, for the most part, at the creation of, and to the benefit of, White men. At this point, biases are so deeply embedded in the economic and political systems that maintain and advance the production of artificial intelligence that no solution will come in time. By our own creation, AI is too (un)intelligent for human intelligence to fix.
