Allison Graham

AI: By Men For Men. Only.

Who Created AI?

A diversity crisis in artificial intelligence (AI) perpetuates long-held biases regarding gender and people of color (POC) within the high-tech sector. Specifically, privileged White men maintain the status quo by perpetuating systemic inequalities. The weaponization of the language of diversity, throughout both industry and academia, highlights the disparities at the intersection of race, gender, and power. Workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization push out people of color, women, and gender non-conforming identities, replicating patterns of racial and gender bias in ways that deepen and justify historical inequalities.


#WomenInSTEM. That has been the phrase, the goal statement, the hot topic for the past decade. And it is true: there is a statistically backed representation crisis in the artificial intelligence (AI) sector when it comes to gender. In 2018, only 18% of authors at leading AI conferences were women (Element AI, 2019). In universities, more than 80% of AI professors are men (AI Index, 2018). Women comprise only 15% of AI research staff at Facebook and 10% at Google (Simonite, 2018). Just under one third of Google's total hires were women in both 2019 and 2020 (Google, 2020). By defining the category of women as cis women, and by omitting an intersectional overlay to account for race, Google creates an overly narrow lens of diversity that is likely to privilege White women over others. Glaringly, there is no public data on transgender or gender non-binary (NB) employees (Simonite, 2018).


#POCinSTEM is a familiar phrase, goal statement, and hot topic of the past decade. As with gender, the racial breakdown of tech employees is exceedingly bleak, or rather pale. Google is one of the largest tech monopolies in existence. Yet, in 2018, only 2.5% of Google's workforce was Black and 3.6% was Latinx, with Black workers having the highest attrition rates of all racial categories (Google, 2018). By 2020, these numbers had not changed much. Among BIPOC groups, Native American employees experience the highest rate of attrition (Google, 2020). Their attrition rate is alarming because Native Americans comprise only 0.8% of Google's workforce to begin with, the smallest of all racial groupings (Google, 2020). Thus, with the smallest percentage and the highest attrition rate, the underrepresentation of Indigenous voices is compounded.


Numerical data is critical because it is a tangible, quantifiable representation of who is in the room, or in the virtual room. The lack of any publicly accessible data on transgender and NB employees at top tech companies is deeply concerning. To literally not exist in the data is categorical erasure on the terms of cisnormativity, with implications not only for representation but also for the relative distribution of power. With no reliable public data, it is easy to overlook the existence of trans and NB workers, and thus to fail to advocate for their rights, needs, and voices. Even the so-called diversity reports published annually since 2014 by Apple, Facebook, and Google do not show the entire picture. For those fortunate enough to appear in the charts, the numbers still do not tell the whole story. For women who are lucky enough to appear in the numbers, quantitative data is insufficient to expose how dozens of women were passed over for promotion, side-lined, harassed, or demeaned (Gershgorn, 2019). Moreover, the data presented in these company-issued reports is, well, company-issued, and therefore company-approved. There is a clear line of institutional bias and an interest in obscuring the data to appear more diverse than the reality. The self-reported numbers have been massaged, manipulated, and corrupted to the point where they are highly presentable, highly inflated, and highly pasteurized, standing in complete juxtaposition to the organic experiences of reality.


According to a 2018 exposé, Google's diversity report was "designed to artificially inflate the numbers of women and the people of color employed by the company by only accounting for 80% of the company's full-time workforce" (Lee, 2018, as cited by West et al., 2019). This blatant data manipulation suggests an intent to minimize both the problems within the company and the public's perception of them. Unsurprisingly, an analysis of data from 2010-2012 by the American Institute for Economic Research found substantial pay inequities among high-tech workers (American Institute for Economic Research, 2014). On average, women of color software developers earn less than White, Black, and Asian men, as well as White women (American Institute for Economic Research, 2014). In other words, women of color are the lowest-paid stratum of software developers compared to any other intersectional classification. For example, Latina developers earned as much as 20% less than White males (American Institute for Economic Research, 2014). Disparities in equity ownership are even worse: an analysis of over 6,000 companies found that women hold only 9% of startup equity value, blocking them from streams of compensation that are often worth more than tech workers' annual salaries. But not all data is this clear cut. As previously discussed, data can be, and is, malleable and nebulous, hardly the clear-cut, irrefutable picture it claims to portray.


A 2019 pay analysis published by Google claimed that more men than women in junior engineering roles were underpaid (Barbato, 2019). This counterintuitive finding was widely reported because of its unusual nature (Wakabayashi, 2019), and because men always get more attention when they perceive themselves to have been treated unfairly. But upon closer examination, the claim is extremely narrow, focusing on one level of one job category and not taking into account equity and bonuses, which at senior levels of the company often comprise the majority of employee compensation (Tiku, 2019). The broader climate and timing of this "revelation" must also be taken into account. Barbato's counterintuitive finding was released in 2019, at the poignant moment when Google was under investigation by the US Department of Labor, facing a lawsuit from female employees, and reeling from massive walkouts protesting discrimination, sexual harassment, and a hostile work environment (Tiku, 2017; Google Walkout For Real Change, 2018).


The Road to Inequity

How did we get here? It begins in elementary school, with girls and Black and Brown children disproportionately tracked into lower math classes and White children disproportionately placed in gifted programs (Smith, 2005). It continues into middle school, where these gaps only grow as the socialization of youth is reinforced by teachers' placement decisions (Chiu et al., 2008). High school supposedly presents the option to enroll in Advanced Placement (AP) and International Baccalaureate (IB) programs, yet enrollment is not really a matter of choice but of access and privilege (Kholi, 2016). When it comes time to choose a college, or whether to go to college at all, biases compound as primarily White college counselors review student transcripts and have their own implicit assumptions confirmed by mediocre records that reflect nothing more than eighteen years of tracking. A strong math and science background in K-12 schooling positions students to perhaps someday attend a college with a strong STEM focus, including computer science (Jackson-High, 2018). Even better is early K-12 exposure to coding and programming, as is now offered at most private, charter, and specialized STEAM public schools (Margolis et al., 2013). University classrooms are no more diverse in who is at the front of the room, assigning grades and defining success. A 2007 review of tenured and tenure-track faculty at the top 100 research institutions found that only 81 of 2,865 were people of color (2.8%) (Nelson et al., 2007). The number of Latina women could be counted on one hand (5). The number of Indigenous women and men was even more demoralizing: only one man, and no women (Nelson et al., 2007).


Actions Not Words

While “Women in STEM” and “POC in STEM” have been the performative refrains echoing over school loudspeakers for the past decade, efforts to fix the pipeline described above have shown “no substantial progress” in the AI industry (West et al., 2019, p. 3). The focus on the pipeline positioned the problem as linear, starting in youth and progressing through adulthood, naïvely fixable with the addition of some technology classes and implicit bias training. This framework also places the onus for solving discrimination on those who are discriminated against (women and girls of color), rather than on the perpetrators (masculine-dominated institutions). While energy and resources are focused on ameliorating this oversimplified, surface-level explanation, deeper issues with workplace culture, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization are left to fester, causing women and people of color to leave or avoid the AI sector altogether. The “pipeline fix” asks who is harmed, but neglects the deeper question of who benefits from the dominant structures governing the current technology ecosystem.

A 2018 survey of 32 leading tech companies found that many verbally express a desire to improve diversity, yet only 5% of 2017 philanthropic giving was focused on correcting the gender imbalance in the industry, and less than 0.1% was directed at removing the barriers that keep women of color from careers in tech. To put it in perspective, out of $500 million in total philanthropic giving by 32 tech companies in 2017, only $335,000 went to programs focused on outreach to women and girls of color (Wittemeyer et al., 2018).


Implications

Artificial intelligence (AI), augmented reality (AR), and virtual reality (VR) technology in education is changing the landscape of teaching, highlighting the transient nature of the field. Projecting towards 2030, the teaching profession will arc towards that of a digital learning specialist, removing human teachers almost entirely from the classroom as AIs begin to deliver curriculum in experiential and immersive ways. Instead of delivering content, humans will be charged with implementing and updating technology that will then, in essence, run itself. As this transition actively occurs, it is the responsibility of human educators to be mindful and alert. When using AI in the classroom, defined here as everything from Google “smart” searches, to autocorrect, to facial recognition software, teachers must understand the implications of who designed the technology and for what purpose. Additionally, when teachers are approached with an opportunity to use VR in their classroom to transform a lesson and make it more engaging, they must be mindful of who will benefit most and who might not have access. And finally, as 2030 grows nearer and teachers leave classrooms to assume digital learning specialist positions, they must be mindful of the fuller ramifications of access and ownership, along with the biases of developers, so as not to blindly implement flashy technology, drawn to all of its potential benefits but ignorant of its shadow side.
