On Health Equity And AI

Driving questions for the development of AI systems for healthcare

Bobbie Dousa
January 29, 2020 (updated February 10, 2020)

"A.I. Could Worsen Health Disparities" cautions the title of a recent opinion piece published by The New York Times. In the article, Dr. Dhruv Khullar of New York-Presbyterian Hospital maintains that with the increasing emergence of A.I. (or, artificial intelligence) capable of diagnostic and clinical assessment comparable to (and, at times, exceeding) that of physicians, it is urgent that we not only scrutinize the technical prowess of these technologies but also, interrogate their potential for further entrenching existing inequities in medicine. To illustrate this negative potential, Dr. Khullar points to three critical facets that contour the implementation of A.I. in healthcare:

Firstly, Dr. Khullar reasons that if A.I. systems are trained on datasets that are not sufficiently representative (if training datasets fail to incorporate enough data on women or patients of color, for example) their conclusions may be unreliable. Next, Dr. Khullar contends that 'because A.I. is trained on real-world data, it risks incorporating, entrenching, and perpetuating the economic and social biases that contribute to health disparities in the first place.' To illustrate, Dr. Khullar points outside the realm of medicine, citing how algorithms designed to aid judges in sentencing by predicting recidivism rates exhibit demonstrable racial biases. Finally, he contends that 'even ostensibly fair, neutral A.I. has the potential to worsen disparities if its implementation has disproportionate effects for certain groups'. Indeed, Dr. Khullar holds that these issues are already immanent in medicine today inasmuch as the U.S. healthcare system is rife with income, gender, and race-based inequities, to name but a few.
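
To make the first concern concrete, consider a simple audit, sketched below with entirely synthetic data (my illustration, not Dr. Khullar's): a model trained on a cohort that is 90 percent one group can look reliable in aggregate while failing badly on the underrepresented group.

```python
# Hypothetical audit: aggregate accuracy vs. per-group accuracy.
# All data is synthetic; group labels "A" and "B" are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])  # group B is underrepresented
X = rng.normal(size=(n, 5))
# The feature-outcome relationship is flipped for group B, so a model
# fit mostly to group A generalizes poorly to group B.
signal = X[:, 0] * np.where(group == "A", 1.0, -1.0)
y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)
model = LogisticRegression().fit(X_tr, y_tr)

print("overall accuracy:", accuracy_score(y_te, model.predict(X_te)))
for g in ("A", "B"):
    mask = g_te == g  # evaluate each demographic subgroup separately
    print(f"group {g} accuracy:", accuracy_score(y_te[mask], model.predict(X_te[mask])))
```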

To combat the potential for A.I. to exacerbate existing disparities, Dr. Khullar recommends consistently "monitoring both the output of algorithms and their downstream effects" and suggests that counter-bias algorithms may need to be developed to address and correct systemic discrimination. He argues that the understanding that 'humans, not machines, are still responsible for caring for patients' must be universally adopted if we are to reap A.I.'s potential for more equitable, accurate, and efficient healthcare. In concluding his article, Dr. Khullar insists that A.I. be understood as an auxiliary tool that aids rather than replaces clinicians in caring for patients.
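
What such monitoring could look like in code is sketched below. This is a hypothetical illustration rather than anything Dr. Khullar describes: it tracks a deployed model's positive-prediction rate per demographic group and raises a flag when the gap between groups widens (the ten-point threshold is an arbitrary assumption).

```python
# Hypothetical output monitor: tracks positive-prediction rates by group.
from collections import defaultdict

class DisparityMonitor:
    def __init__(self, alert_gap=0.10):  # assumed threshold: 10 percentage points
        self.alert_gap = alert_gap
        self.counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]

    def record(self, group, prediction):
        self.counts[group][0] += int(prediction)
        self.counts[group][1] += 1

    def check(self):
        rates = {g: pos / total for g, (pos, total) in self.counts.items() if total}
        gap = max(rates.values()) - min(rates.values()) if rates else 0.0
        status = "ALERT" if gap > self.alert_gap else "OK"
        return f"{status}: per-group positive rates {rates}"

monitor = DisparityMonitor()
for group, pred in [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 0)]:
    monitor.record(group, pred)
print(monitor.check())  # ALERT: group A at ~0.67, group B at 0.0
```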


While A.I. systems have been in medical use since at least the 1970s with the development of computer-assisted clinical decision support tools, anthropologist of technology M.C. Elish argues that 'A.I.' remains a 'nebulous' demarcation more closely aligned with marketing rhetoric than with strictly technical terminology. Still, A.I. is typically understood to encapsulate machine learning-driven systems. Elish explains that 'machine learning' invokes 'a specific set of computer science and statistical techniques that refer to a type of computer program or algorithm that enables a computer to "learn" from a provided dataset and make appropriate predictions based on that data…Learning, in this case, is narrowly defined and refers essentially to the capacity for an algorithm to recognize a defined characteristic in a dataset in relation to a defined goal and to improve the capacity to recognize this characteristic by repeated exposure to the dataset'. As acknowledged in Dr. Khullar's article, recent developments in machine learning have enabled computer programs to accurately diagnose seizures, diabetic retinopathy, and skin cancer. Machine learning is also utilized to identify modifications in drug treatment protocols and predict clinical outcomes.
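
A minimal sketch of 'learning' in this narrow sense, assuming synthetic data and a generic linear classifier, shows how repeated exposure to the same dataset improves the capacity to recognize a defined characteristic:

```python
# Synthetic illustration of Elish's narrow sense of "learning".
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))
y = (X @ rng.normal(size=10) > 0).astype(int)  # the "defined characteristic"

clf = SGDClassifier(random_state=0)
for epoch in range(5):  # repeated exposure to the same dataset
    clf.partial_fit(X, y, classes=[0, 1])
    acc = accuracy_score(y, clf.predict(X))
    print(f"pass {epoch + 1}: recognition accuracy {acc:.3f}")
```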

Nevertheless, the research of both Robert A. Winn, professor of medicine and clinical surgery at the University of Illinois, and Kadija Ferryman, an anthropologist at the Data and Society Research Institute, leads them to contextualize these successes, especially in the realm of cancer care, against the backdrop of enduring health disparities in the U.S. In a recent editorial, Ferryman and Winn weigh the sobering fact that people of color continue to have disproportionately higher incidence and mortality rates for multiple cancers (among them: kidney, breast, cervical, and prostate cancer) as they pose the following question: 'As big data comes to cancer care, how can we ensure that it is addressing issues of equity, and that these new technologies will not further entrench disparities in cancer?' Winn and Ferryman offer their own assessments of how the development and integration of A.I.-driven systems might account for structural inequities. They reason that due to the nature of clinical care, clinicians must be able to assess, understand, and explain machine learning-driven systems to patients. In other words, these systems cannot be fully 'black-boxed'.
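
One minimal, hypothetical illustration of what 'not fully black-boxed' can mean in practice: an interpretable model whose per-feature contributions to an individual prediction can be read off and explained to a patient. The feature names below are invented, and this is a generic sketch rather than any system Winn and Ferryman discuss.

```python
# Hypothetical interpretable model: per-feature contributions for one patient.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
features = ["age", "tumor_size_mm", "node_count", "biomarker_level"]  # invented names
X = rng.normal(size=(500, len(features)))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

patient = X[0]
# For a linear model, coefficient * feature value gives each feature's
# contribution to the log-odds of this patient's prediction.
contributions = model.coef_[0] * patient
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>16}: {c:+.2f}")
```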

With the capacity of these technologies to refigure clinicians' responsibilities, legal scholars warn that they might prompt both a transformation of the patient-doctor relationship and a reconceptualization of the regulations surrounding malpractice. Winn and Ferryman point out that these potential changes to clinicians' obligations to explain these systems may carry the greatest consequences for patients with 'limited access to high quality clinical care, limited health literacy, earned mistrust of medical providers, and those individuals who may be exposed to interpersonal and institutional racism and discrimination in their healthcare encounters'. They argue that it is critical that the potential ramifications of integrating this technology in the clinic be consistently acknowledged and intentionally managed on behalf of vulnerable patients. Finally, Winn and Ferryman offer three principles for thinking about the potential of A.I. in medicine and the effects of these systems on health equity: (1) Prioritize health equity in AI in medicine; (2) Address algorithmic bias for health equity; (3) Collect non-biological data, too. With the final principle, it is their hope that data-driven analytics might also be utilized to investigate how cancer might reflect interactions between genes and environmental factors.
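
In data terms, the third principle amounts to joining biological features with environmental and social ones in the same analysis. A minimal sketch, with invented column names and values, might look like this:

```python
# Hypothetical join of genomic features with neighborhood-level
# environmental data; every column name and value here is invented.
import pandas as pd

genomic = pd.DataFrame({
    "patient_id": [1, 2],
    "mutation_burden": [12, 34],
    "zip_code": ["60601", "60637"],
})
environment = pd.DataFrame({
    "zip_code": ["60601", "60637"],
    "air_quality_index": [42, 71],
    "food_access_score": [0.8, 0.3],
})
# One table per patient, carrying both biological and non-biological factors.
merged = genomic.merge(environment, on="zip_code", how="left")
print(merged)
```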


So far, my own research suggests that only a small number of cancer patients consider themselves to have a working knowledge of predictive analytics and AI-driven systems. Nevertheless, my current research also indicates that the majority of patients feel positively toward these systems and are interested in, even energized by, their possibilities for improving cancer care.

I spoke with the CEO of Cambridge Cancer Genomics (CCG.ai), Dr. John Cassidy, as well as their CTO, Dr. Harry Clifford, and the head of their Machine Learning team, Geoffroy Dubourg Felonneau, to learn more about how they respond to concerns that AI might exacerbate health disparities and how, in addition to regular public outreach and social research initiatives, they are actively implementing methods of prioritizing health equity in the development of their technology. All three agree explicitly that these are urgent possibilities that must be continually addressed. Foremost, Cassidy, Dubourg Felonneau, and Clifford are adamant that the tools they are creating are in no way meant to replace clinicians, their expertise, or their oversight. Rather, they hope that CCG.ai's software might support the ability of oncologists to easily and responsibly harness all relevant information they might need in coming to their expert decisions.

Cassidy is especially sensitive to the possibility that the increasingly democratized proliferation of these clinical decision-enhancing systems might in fact add to the existing labor of oncologists: the information offered by these platforms is another factor to weigh in their clinical decisions. To manage this, Cassidy stresses that their platform must present its findings in a readily discernible, accurate, and concise manner. Furthermore, Cassidy maintains that it is essential that they explain what their system does, and how it does it, in plain language to patients and clinicians alike. Clifford describes the process by which CCG.ai's tech team examines the safety of their systems as 'consistent' and 'iterative'. Clifford explains that he meets regularly with Dubourg Felonneau to analyze their research and the insights their systems are already producing. Together, they frequently pose the following questions of their software and its findings: 'Is it going to be useful? Is it going to be safe for people to use? Is it responsive and responsible?'

Like Dr. Winn, Ferryman, and Khullar, Dubourg Felonneau maintains that it is critical that they train their systems on the very best datasets at their disposal: the most representative and the most thorough. When I analyzed their current datasets, drawn from The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC), I found that their samples were representative of the gender and racial make-up of their source populations to within 1 to 2 percentage points. Moreover, Dubourg Felonneau stresses that it is integral that CCG.ai remain dedicated to building a diverse, thoughtful, and talented team of engineers so that the technological instruments they develop reflect more than the priorities, circumstances, and needs of select groups. Still, the three were unequivocal in stating that their datasets, engineering pool, and machine learning-driven systems remain subject to historical and structural disparities. Cassidy acknowledged that, at present, the individuals with the most wealth, the greatest access to healthcare, and thus the most complete health records are not the globe's most marginalized and vulnerable populations. Inevitably, historical biases continue to plague both current medicine and emerging health technologies.
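
A check of that kind can be sketched simply: compare the cohort's demographic shares against reference proportions and flag any subgroup whose gap exceeds a tolerance. The counts, proportions, and two-point tolerance below are hypothetical placeholders, not actual TCGA or ICGC figures.

```python
# Hypothetical representativeness check against reference demographics.
def representation_gaps(cohort_counts, reference_props, tolerance=0.02):
    """Return subgroups whose cohort share deviates from the reference
    proportion by more than `tolerance` (0.02 = 2 percentage points)."""
    total = sum(cohort_counts.values())
    gaps = {}
    for subgroup, ref in reference_props.items():
        share = cohort_counts.get(subgroup, 0) / total
        if abs(share - ref) > tolerance:
            gaps[subgroup] = {"cohort": round(share, 3), "reference": ref}
    return gaps

cohort = {"female": 520, "male": 480}        # invented sample counts
reference = {"female": 0.51, "male": 0.49}   # invented population shares
print(representation_gaps(cohort, reference) or "within tolerance")
```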

At the same time, Cassidy contends that these technologies can be improved and that we might begin to tackle these disparities by empowering these populations and the clinicians who serve them. For instance, it is Cassidy's hope that the decreasing cost and more widespread use of sequencing technologies, as well as the predictive analytics that study the genomic data they aggregate, will aid in this effort. Cassidy remains positive that the insights amassed from A.I.-driven systems can be improved with more representative and diverse datasets and that, despite potential legal hurdles, these systems can be frequently updated to mitigate inequities in their training sets and the outcomes they determine. Crucially, a thoroughly contextual and historical understanding of health disparities is as indispensable to eradicating them as the commitment to do so. Addressing how A.I.-driven systems might impact and transform the possibilities of health equity requires both.


  • Written by Roberta Dousa, Patient Experience Researcher at CCG.ai
  • Edited by Belle Taylor, Strategic Communications and Partnerships Manager at CCG.ai
