Theories of “bias” alone will not enable us to engage in critiques of broader socio-technical systems.
Why is one far more likely to hear about “algorithmic bias” rather than “algorithmic racism” or “algorithmic sexism,” even when we mean the latter? “Bias” is the latest chapter in the still unfolding history of social psychology, which has struggled to address why racism and other oppressive systems persist. In a recent historical review of this discourse, Dovidio et al. (2010) describe how the term “bias” emerges from the academic literature. They begin with Walter Lippmann, who, in his book Public Opinion (1922), popularized the term “stereotype” in the modern sense to mean one’s perception of a group of people.
By revisiting how Lippmann’s “stereotype” is taken up as a theoretical term in social psychology, which leads to contemporary theories of “bias,” we can better understand the limits of using “bias” in today’s conversations about algorithmic harm. Because both “stereotype” and “bias” are theories of individual perception, our discussions do not adequately prioritize naming and locating the systemic harms of the technologies we build. When we stop overusing the word “bias,” we can begin to use language that has been designed to theorize at the level of structural oppression, both in terms of identifying the scope of the harm and who experiences it.
To understand the causes of social divisions within the democratic society of his day, Lippmann needed to explain why individuals hold fast to generalizations about entire groups of people, even when those generalizations harm social harmony. He was clearly critical of stereotypes, defining them as “a distorted picture or image in a person’s mind, not based on personal experience, but derived culturally.” His concept remains faithfully cited in the literatures of social psychology, journalism, and political science, especially as these fields respond to the social issues of their time. In The Nature of Prejudice (1954), social psychologist Gordon Allport famously argues that we hold stereotypes to rationalize our behavior toward an individual in a particular category. One definition in the contemporary psychology literature states that stereotypes are impressions which remain unchanged even after being presented with new information relevant to the conclusion.
Today, the social and cognitive psychology literature describes bias as something that is implicit and inevitable in our thought process as we categorize the world around us. The stereotype, therefore, is a foundational component of social psychology’s attempt to theorize why people engage in the kinds of behaviors that allow social hierarchies to persist.
Given this history, when we say “an algorithm is biased,” we are, in some ways, treating an algorithm as if it were a flawed individual rather than an institutional force. In the progression from “stereotype” to “bias,” we have conveniently lost the negative connotation of “stereotype” from Lippmann’s original formulation. We have retained the concept of an inescapable mentalizing process for individual sensemaking, particularly in the face of uncertainty or fear—yet algorithms operate at the level of institutions. Algorithms are deployed through the technologies we use in our schools, businesses, and governments, impacting our social, political, and economic systems. By using the language of bias, we may end up overly focusing on the individual intentions of the technologists involved, rather than the structural power of the institutions they belong to.
In fact, Lippmann acknowledges that stereotypes are “derived culturally,” but he makes no real commitment to theorizing the role of culture and institutions, firmly situating his analysis at the level of individual perception. One can see how “bias” suffers from a similar theoretical deficiency, for example, in trying to identify “racial bias in algorithms.” In 1967, Kwame Ture (Stokely Carmichael) and Charles V. Hamilton coined “institutional racism” to refer to accepted social and political institutions of the status quo, which produce racially disparate outcomes. If algorithms function at the level of institutions, then they enforce policies of institutional racism within a structurally racist society.
As Camara Phyllis Jones argues, race should be considered using a framework including micro (individual), meso (institutional), and macro (systemic) levels of analysis. Bias as a term obscures more than it explains because we are not equally concerned about all biases for the same reasons. We specifically care about dismantling algorithmic biases that enable continued harm to those belonging to one or more historically oppressed social identity groups.
When we use “bias” to talk about inequalities extended by algorithms, we are, as Lippmann did, bounding our analysis at the level of the individual — but theories of “bias” alone will not enable us to engage in critiques of broader socio-technical systems. Perhaps because we insist on using bias as the starting point for our critical technology conversations, we have been slow to take up Safiya Noble’s identification of “oppression” as the impact of technologies that stereotype. What would happen if we cited Kwame Ture and Charles V. Hamilton as faithfully as we do Walter Lippmann in the development of our theoretical frames?
Only a shift to institutionally focused language will make room for systemic critique, allowing us to see more clearly what’s at stake when we talk about the future risks of the technologies we build, and to identify who specifically experiences the harmful consequences of a technology, no matter how well-meaning the technologist may be. Only when we clearly name the problem can we be held accountable for addressing it.
Kinjal Dave is a research analyst with the Media Manipulation Initiative at Data & Society. She is an incoming PhD student at the University of Pennsylvania’s Annenberg School for Communication.
This piece first appeared on Data & Society: Points, the blog of the Data & Society Research Institute, on May 31, 2019. It is republished here under a Creative Commons BY-NC-ND 2.0 license.