Re-envisioning Algorithmic Decision-Making

The Notion of “Fairness” in AI: Encoded Inequity and Algorithmic Bias

Bobbie Dousa
April 8, 2020
April 16, 2020

Increasingly accompanying narratives of AI prowess and potency found across headlines, bylines, and, of course, colloquial conversations are portentous and admonitory accounts of AI bias. A cursory dip into the discourse surrounding AI may leave one drenched in examples of AI’s propensity to reproduce and amplify existing social inequities. Indeed, tech legal scholar Julia Powles confirms:

“The tales of bias are legion: online ads that show men higher-paying jobs; delivery services that skip poor neighborhoods; facial recognition systems that fail people of color; recruitment tools that invisibly filter out women.”

Our algorithmic instruments and AI tools, which function to identify patterns within vast quantities of data, are often conceptualized as offering a quantified view into the diverse mechanisms of our socially contingent worlds.

Considering that AI is built from, operates through, and produces outputs garnered from the existing data our societies make available to technologists, these and other instances of algorithmic bias born of the reigning data-driven paradigm reflect the exigencies of our socially stratified societies. In other words, critical algorithm scholars, journalists, and technologists alike advocate for understanding pervasive and pernicious algorithmic biases as reflective of historically embedded racial, gender, and class inequalities that shape and dictate the contours of our imaginations, our actions, our instruments, and our societies.

Given the wealth of evidence that AI tools can exacerbate conditions of injustice rather than alleviate them, practitioners have coalesced around aspirations of making AI more “accurate” in order to produce more “neutral”, “objective” outputs. For example, the emergent machine learning subfield FAT ML (fairness, accountability, and transparency in machine learning) constitutes a premier opening within computer science for researching, producing, and implementing better algorithmic tools. Still, many data scientists and scholars of technology remain critical of the limitations of what is sometimes referred to as a “quality control approach” toward algorithmic bias. Many view such an approach, often called “algorithmic fairness” within the realm of computer science, as akin to “procedural fairness” within the field of political philosophy. As political philosophers at Princeton University explain, procedural fairness entails:

“the application of the same impartial decision rules and the use of the same kind of data for each individual subject to algorithmic assessments, as opposed to a more ‘substantive’ approach to fairness, which would involve interventions into decision outcomes and their impact on society (rather than decision processes only) in order to render the former more just.”
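To make the “procedural” sense of fairness concrete: one typical quality-control check in the FAT ML vein is demographic parity, which compares a model’s rate of favourable predictions across demographic groups. The short Python sketch below is illustrative only; the data, group labels, and function name are assumptions rather than anything drawn from the article or from a particular library, and, as the critics cited here argue, passing such a check says little about substantive justice.

# A minimal sketch (illustrative assumptions, not from the article) of a
# "quality control" style fairness audit: checking demographic parity,
# i.e. whether a model's positive-prediction rate is similar across groups.

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Illustrative predictions (1 = favourable outcome) and group membership.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 here, so 0.50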

"There is a wealth of evidence the AI tools can exacerbate conditions of injustice rather than alleviate them"

Not only does “addressing algorithmic biases as a computational problem obscure its root causes,” but even if these tools are modified under the objectives of procedural fairness, we must ask whether narrow notions of neutrality truly correspond to “fairness” amid societal conditions of (well-documented) rampant inequality and structural injustice. For example, journalists and legal scholars have exposed how predictive policing algorithms and criminal risk assessment algorithms, reliant on historical crime data and socio-demographic data (excluding racial data) respectively, disproportionately target Black, Indigenous, and Latinx communities for stronger policing and longer, harsher sentences, given a history of (and thus data sets reflective of) racist over-policing and criminalization of these communities. Accordingly, many data scientists, scholars, and journalists are demanding that practitioners think more carefully about which problems and issues require automation and whether algorithmic solutions are not only essential to apply but will also lead to better material outcomes for those who will be made subject to often obfuscated forms of algorithmic decision-making. Even if these systems can be designed to operate without dramatic racial biases, their deployment, data scientist Ben Green argues, can “perpetuate injustice by hindering more systemic reforms of the criminal justice system.”



Understanding Data Scientists as Political Actors

Attempting to expand current notions of algorithmic fairness and practitioner neutrality, critical algorithm scholars and data scientists such as Green are calling for technologists and engineers to realize their influence and understand themselves as political actors. “Politics”, as invoked by Green, does not solely refer to electoral mechanisms concerning political candidates or parties. Instead, Green’s invocation of politics more closely aligns with the definitions delineated by political scientists Harold Lasswell and Adrian Leftwich, in which politics essentially refers to “collective social activity—public and private, formal and informal, in all human groups, institutions and societies” that affects “who gets what, when, and how.”

Green argues that:

“by developing tools that inform, influence, impact important social or political decisions—who receives a job offer, what news people see, where police patrol—data scientists play an increasingly important role in constructing society.”

As such, Green argues that it is imperative that data scientists move away from conceptions of technological instruments as neutral tools that can “be designed to have good or bad outcomes” and instead recognize how the technologies they are developing “play a vital role in producing the social and political conditions of the human experience.” By this logic, Green asserts, data scientists must also come to recognize themselves as political actors engaged in the “process of negotiating competing perspectives, goals, and values” rather than as neutral researchers merely coding away in their offices. Given that “technology embeds politics and shapes social outcomes,” Green contends that a position of true neutrality for data scientists remains an “unachievable goal.” He argues that:

“first, it is impossible to engage in science and politics without being influenced by one’s background, values, and interests and second, striving to be neutral is not itself a politically neutral position—it is a fundamentally conservative one,” as such a stance functions to maintain a radically inequitable status quo.

Correspondingly, Green also takes aim at a common mantra within the realm of data science: “we shouldn’t let the perfect be the enemy of the good.” Green highlights that data science lacks any theories or coherent discourse “regarding what ‘perfect’ and ‘good’ actually entail” and that, furthermore, the field “fails to articulate how data science should navigate the relationship” between the two notions. Instead, Green understands such a claim as taking “for granted that technology-centric, incremental reform is an appropriate strategy for social progress.”


Strengthening Democratic Insight and Oversight

While insisting that data science is capable of improving society, Green also maintains that algorithmic and data science solutions should be “evaluated against alternative reforms as just one of many options rather than evaluated merely against the status quo as the only possible reform. There should not be a starting presumption that machine learning (or any other type of reform) provides an appropriate solution for every problem.” Green contends that practitioners building tools designed for algorithmic decision-making must allow room for interrogation and deliberation as to whether their intervention is the most appropriate and most just.

Similarly, Julia Powles insists on the need for accountability mechanisms, independent of corporate influence and accessible to public oversight, to regulate the use and development of systems of algorithmic decision-making. Powles explains:

“any AI system that is integrated into people’s lives must be capable of contest, account, and redress to citizens and representatives of the public interest. And there must always be the possibility to stop the use of automated systems with appreciable societal costs, just as there is with every other kind of technology.”

In addition to increased democratic oversight, other scholars of technology call for more robust, bottom-up democratic agenda-setting measures. They note that while calls for strengthened consumer protection laws and the founding of new regulatory departments may be well-reasoned, reform can also take place at the local level. One example of bottom-up, democratic agenda-setting is the city of San Francisco’s ban on the use of facial recognition technology for policing “due to the increasing amount of empirical evidence of algorithmic bias in law enforcement technology.” Such local, bottom-up measures, they argue, can constitute significant steps toward regulating algorithmic bias.

These journalists, technologists, practitioners, and scholars advocate for an approach to algorithmic decision-making that is attuned to “common human interests, equitable distribution of social goods, resources, and opportunities, and a commitment to foster empowered political participation,” which science and technology scholars Mamo and Fishman define as “justice.” Together, their calls for rethinking how practitioners, citizens, and governing bodies alike address encoded inequities not only signal the urgency of the task, given AI’s potential to further entrench social ills, but also offer critical entry points for rethinking what may yet be possible for our instruments, our health, and our collective futures.



  • Written by Bobbie Dousa, Patient Experience Researcher at CCG.ai
  • Edited by Belle Taylor, Strategic Communications and Partnerships Manager at CCG.ai
  • Thanks to Geoffroy Dubourg Felonneau and Yasmeen Kussad for valuable discussions.

