Grasping the predicaments at the heart of tech’s institutionalization of ethics
We have all encountered the discourses shaped by media, marketing, and the tech industry itself that commend emerging AI technologies, and the companies that develop them, as pivotal forces unlocking improvements in sectors such as healthcare, social engagement, entertainment, transportation, and scientific research (to name only a few).
In parallel, by now many of us can no longer feign ignorance of tech companies’ and their products’ propensity to yield new dangers, instigate harm, and exacerbate social inequities. Each week seems to produce fresh headlines and startling reports exposing moral quandaries and ethical scandals as constitutive of business-as-usual in Silicon Valley. AI-driven tech and automated systems have enabled alarming developments across a diversity of domains.
Just this week, for example, news outlets such as CNN reported that the controversial facial recognition company Clearview AI had suffered a massive data breach; the stolen data reportedly includes the company’s entire client list. Used by hundreds of policing agencies, Clearview’s app works by comparing a photo to a database of over three billion pictures that Clearview has scraped from Facebook, Venmo, and YouTube, among other sites. The app then links the photo to its matches and to the sites from which they were gleaned. From there, names and other identifying personal information can be acquired. Privacy advocates and U.S. senators from both sides of the aisle have decried the “chilling privacy risk” the company poses; Clearview’s data breach seems only to add credence to these concerns.
Indeed, media reports regularly expose the tech industry as operating with lax (if any) regard for ethical commitments. Journalists, scholars, and other technology researchers have brought to the fore of public awareness the variety of pernicious social impacts AI systems have caused or bolstered, such as:
At least partly in response to the public outcry these revelations produced, many in the tech industry have funneled resources into establishing institutional practices, new staff roles, and corporate frameworks to support ethical behavior, engineering, and business protocols. Nevertheless, what constitutes ethical behavior and action remains highly contested, even within the bounds of industry frameworks. Some researchers, such as the anthropologists of technology Jacob Metcalf and Emmanuel Moss, argue that while some may once have believed it possible to grasp a crystalline view of what “ethics” within the tech industry amounts to, “doing ethics” now marks a “site of power” that tech companies are increasingly laying claim to. Ethics, in Moss and Metcalf’s estimation, names a crucial site of contestation, one that necessarily leaves unsettled who gets to determine what exactly comprises the meaning and practices of “ethics”; in this way, ethics itself remains fundamentally evasive, inexhaustible, and indeterminate.
For example, they ask, what does “doing ethics,” as some tech companies claim to do, actually accomplish? Does it begin and end with “robust processes”? What are the goals of tech ethicists, and how do they understand their work and “theorize change”? Is this work subsumed by measures companies already use to derive value? How does “doing ethics” affect or interact with a company’s bottom line, and how does this change among start-ups, established companies, and Big Tech firms?
To answer these questions, Metcalf and Moss join technology scholar danah boyd in probing the current tech ethics landscape by interviewing and studying the work of “ethics owners” or those doing the work of ethics inside companies. In their paper, “Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics,” the authors note that an “owner” in corporate vernacular refers to “someone who is responsible for coordinating a domain of work across the different units of an organization.” Metcalf, Moss, and boyd similarly insist that:
“‘ethics’ is a capacious term, holding a range of meanings indexed to context of use” and thus “the ambiguity of the term is central to the challenge of capturing what it means to ‘own ethics’ in the technology sector. While others in these companies might ‘own legal,’ ‘own security’, or ‘own corporate social responsibility,’ ethics owners do not benefit from an existing set of practices and evaluative measures to guide their actions—although how such practices became institutionalized may be instructive here—nor are there clear external regulations or requirements driving their approach.”
In their view, ethical issues are never resolvable but must instead be continuously navigated and negotiated, a labor which is transferred onto “ethics owners” within tech’s corporate structures. To better understand these tensions, the authors draw upon recent debates in ethnographic theory that treat ethics and morality not as a “form of argument or abstraction” but as social phenomena that structure social life. This approach attends to “how everyday practices reveal the moral commitments embedded in actions, in contrast to the tendency to treat ethics as a form of argument or an abstraction.”
Their research indicates that the central challenge ethics owners face involves navigating the terrain between “external pressures to respond to ethical crises at the same time that they must be responsive to the internal logics of their companies and the industry.” While external criticisms goad “ethics owners” to challenge corporate priorities and business practices, the internal corporatized logics of Silicon Valley (and the economic pressures businesses are shaped by more generally) “create pressures to establish or restore predictable processes and outcomes that still serve the bottom line.”
Their ethnographic fieldwork, interviews, and textual sources convey that as “doing ethics” (whether via product design or governance goals) becomes institutionalized by tech firms, the “practices associated with these goals are being crafted and executed according to the existing logics and structures of the technology industry, even as they are responding to outside critiques of these logics and structures.” As such, the practices and structural arrangements of “owning ethics” in tech firms “produces pitfalls that threatened to prematurely foreclose what can be thought or done under the heading of ‘ethics.’”
Through their research, Metcalf, Moss, and boyd identify and unpack how three salient and interlocking Silicon Valley logics shape the tension “ethics owners” encounter as they are suspended between “internal” and “external” forces and demands. They identify these three logics as:
Metcalf, Moss, and boyd explain that these, in conjunction with other Silicon Valley logics, reinforce a belief that “trenchant social problems can be addressed through innovative technical solutions developed by those with the most aptitude and creative energy and that an unencumbered market will recognize, reward, and disseminate the best solutions.” They conclude that these logics “underwrite business as usual at the same time that they are implicated in many industry approaches to ‘doing ethics.’”
According to the authors, the “founding myth” of the tech industry is meritocracy, as the industry “has long been held up as a paragon of meritocratic achievement, in which its outsized economic and cultural power has been closely coupled with the technical and entrepreneurial skills needed to build and market products.” While the tech industry “claims to be a meritocracy,” one of their interlocutors confirmed, “it is not.” Correspondingly, the authors assert that it is important to understand meritocracy as a concept:
“[Meritocracy] provides an ideological explanation for unequal distributions of wealth and power as arising from difference in individual abilities. Such differences in ability are often naturalized or otherwise reified while at the same time obscuring power- and privilege-laden social structures that produce and perpetuate such inequalities.”
Meritocratic belief led their interlocutors to stress how imperative it is to hire the best people from top schools and to reward the highly skilled. The “can-do attitude” that follows from meritocratic belief “insinuates that those who work in tech are sufficient to whatever task is presented to them, including the task of ‘doing ethics.’” Through their interviews, the authors found that interlocutors often expressed a strong belief in the sufficiency of engineers themselves to grapple with the “hard questions on the ground,” a trust that engineers can evaluate and discern to a sufficient degree “the ethical stakes of their products.” Metcalf and Moss write that:
“while there are some rigorous procedures that help designers scan for the consequences of their products, sitting in a room and ‘thinking hard’ about the potential harms of a product in the real world is not the same as thoroughly understanding how someone (whose life is very different than a software engineer) might be affected by things like predictive policing or facial recognition technology, as obvious examples.”
The authors also found that meritocratic logic was routinely invoked to dismiss critiques and calls for external oversight and structural regulation. To this end, they note, ethics is often framed as a form of “self regulation” capable of bypassing increased governmental regulation. Moreover, the authors argue that codes of ethics, checklists, and ethics trainings focus on enabling engineers to make ethical decisions within corporate contexts; doing so, they believe, privileges engineers as “the locus of ethical agency,” grants too much authority to their particular individual perspectives, and creates conditions whereby, when an issue emerges, “blame can be placed on individual failure rather than institutional problems.” Ethics owners are thus torn: they must negotiate the belief that technical staff are perfectly and assuredly fit for the “task of doing ethics” while attempting action that satisfies their understanding that “ethics is a specialized domain that requires deep contextual understanding.”
Metcalf, boyd, and Moss often found that within tech industry contexts the issue of ethics is typically framed as a “problem that can eventually be ‘solved’ once and for all”; the authors interpret this as evocative of the ingrained and oft-invoked Silicon Valley logic of technological solutionism. They write that the idea “that technology can solve problems has been reinforced by the rewards the industry has reaped for producing technology that they believe does solve problems.” Belief in the universality of tractable technical problems leads to a conception of ethical behavior as manifest in “organizational practices that facilitate technical successes,” such as “the search for checklists, procedures, and evaluative metrics that could break down messy questions of ethics into digestible engineering work.”
Secondly, they found that technological solutionism breeds a pessimism whereby, even when ethics is framed as a technical issue, doing ethics or laboring for ethical outcomes comes to seem “intractable” or “too big of an issue to tackle.” The authors explain:
“Describing ethics problems as best-practices problems centers ethics in the practices of technologists, not in social worlds they develop technical systems for and within.” For example, while there is ample work on fixing algorithmic bias via complex statistical methods, there is far less work on how to address disparities and inequities in data collection and in “real world” datasets themselves. To put the issue another way: although one could labor to ensure that an algorithm is “fair” when applied to diverse data (i.e., that it operates the same way across groups), “fairness” is only one aspect of ethical interrogation. Although an algorithm might be “fair,” Metcalf and Moss ask, “what good is fairness if it only leads to a less biased set of people harmed by a dangerous product?”
Finally, the logic of market fundamentalism refers to how ethics owners “often constrain their own capacity to effect change within the narrow remit of what ‘the market’ might allow.” In other words, the authors note how ethics owners continuously asserted that the organizational resources needed for ethical corporate behavior and interrogation must be justified in “market-friendly terms.” One senior ethics owner they interviewed for their study explained that this:
“‘means that the system that you create has to be something that people feel adds value and is not a massive roadblock that adds no value, because if it is a roadblock that has no value, people literally won’t do it, because they don’t have to.’”
As such, market pressures and the need to maximize shareholder value shape the boundaries and horizons within which ethics can be enacted in the tech sector. This is further compounded by tech executives’ insistence that the “consumer market motivates corporate decision making,” which provides the cynical reasoning for why smaller and newer tech companies are not expected to invest in ‘doing ethics.’ Market fundamentalism demands that ethics owners navigate between “avoiding measurable downside risks and promoting the upside benefits of more ethical AI.” Metcalf and Moss further explain:
“Arguing against releasing a product before it undergoes additional testing for racial or gender bias, or in order to avoid a potential lawsuit is one thing. Arguing that the more extensive testing will lead to greater sales numbers is something else.”
Both, they claim, are central for ethics owners even though they involve different domains (in this instance, for example, a legal/compliance team vs. a product team).
Metcalf, Moss, and boyd conclude that the institutionalization of ethics within the tech industry can be broadly described as an “open-ended,” “undefined,” and “unaccountable” endeavor, one heavily focused on “achieving a robust process” rather than on “substantive commitments to just outcomes.” While ethics owners and actors within the tech industry, from engineers to product managers, may indeed have the best intentions and a desire to curb their products’ capacities to enact harm, the authors’ study shows that confining such efforts within corporate structures necessarily subjects them to the entrenched logics that animate the tech industry itself. “Enmeshed in organizational structures that reward metric-oriented and fast-paced work with more resources,” tech ethics owners are pressured to “fit in” to these cultures and temper their critiques. The authors argue that these conditions create a scenario in which ethics owners negotiate tensions rather than resolve them. Success and failure, they insist, become difficult to differentiate in this context, as “moral victories can look like punishment while ethically questionable products earn big bonuses.”
Furthermore, Metcalf, Moss, and boyd contend that these conditions normalize ethical mishaps and heighten “blinkered isomorphism” within the tech industry: performing ethics becomes more important than actually changing products. The authors explain that as companies attempt to avoid risks, they “steer their decision-making to respond to, and ideally avoid, public calamities,” but in doing so they “are far less likely to learn from others’ successful actions.” Attending to these processes matters immensely, as tech ethics will continue to have a tremendous impact not only on “administrative regulations, algorithmic accountability documentation, investment priorities, and human resource decisions,” but also on the immediate conditions (including the conceivable violences and avenues of recourse) through which the public will encounter technological instruments within a socially stratified milieu.