Patient Perceptions of AI-Driven Systems

What potential ramifications should patients be aware of, or concerned about, regarding its usage?

Bobbie Dousa
January 23, 2020

With the introduction of computer-assisted clinical decision support tools in the 1970s, the development of Artificial Intelligence (AI) systems for medical usage has shaped healthcare for nearly fifty years. Recent developments in AI have enabled computer programs to accurately diagnose skin cancer, diabetic retinopathy, and seizures. Today, AI is also utilised to predict clinical outcomes and to identify modifications to drug treatment protocols. The past two decades have witnessed an increasing urgency among medical researchers, engineers, investors, clinicians, and patients in exploring what else AI systems might begin to address.

Nevertheless, the data that I and other researchers are gathering indicates that patients are still asking: what is AI? With varied (although certainly increasing) levels of sophistication, patients are questioning: What potential ramifications should patients be aware of, or concerned about, regarding its usage? When and how will this technology affect or augment our standard of care? How should patients feel about AI, what questions should patients be asking, and how can patients stay informed?

For healthcare technologies to be genuinely responsive to patients’ needs, it is critical that the institutions, persons, or entities developing instruments that can affect patients’ quality of care not only assess patients’ knowledge and perceptions of emerging technology, but also heed and address their ensuing questions and concerns.

In this article, I will draw from my current research to offer a sense of the kinds of concerns and questions cancer patients are already raising with regard to AI systems in oncology and the medical field more broadly.

In conducting my research, I have interviewed patient advocates, clinicians, nurses, and patients (a minority of whom are currently undergoing treatment and a majority of whom have completed treatment) in the US and the UK. I have also attended and observed discussions in patient workshops, support groups, and conferences. The following insights are largely derived from approximately thirty semi-structured interviews with current and former cancer patients, of whom the majority reside in the UK and are middle class, retired, and above the age of fifty. Before examining these findings, it may be helpful to first offer a pithy overview of what ‘AI’ refers to…


What is AI?

AI, or artificial intelligence, broadly invokes the capacity of computer programs to perform operations analogous to human mental functions such as logical deduction and inference, or the ability to respond appropriately to spoken language. M.C. Elish, a cultural anthropologist whose work examines the societal impacts of AI systems, argues that what characterises ‘AI’ is its ‘slipperiness’, with the meaning of ‘intelligence’ as but one of the term’s central yet unresolved dimensions. Against a conception of ‘AI’ as adhering to strictly technical terminology, Elish argues that AI is a ‘nebulous’ demarcation more closely aligned with marketing rhetoric. Nevertheless, current conceptions of AI are typically understood to involve machine learning techniques. Elish explains that ‘machine learning’ invokes ‘a specific set of computer science and statistical techniques that refer to a type of computer program or algorithm that enables a computer to “learn” from a provided dataset and make appropriate predictions based on that data…Learning, in this case, is narrowly defined and refers essentially to the capacity for an algorithm to recognise a defined characteristic in a dataset in relation to a defined goal and to improve the capacity to recognise this characteristic by repeated exposure to the dataset’.
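
To make Elish’s narrow sense of ‘learning’ concrete, here is a minimal, hypothetical sketch in Python. The dataset, the threshold task, and the perceptron-style update rule are all invented for illustration; the point is only that the program improves its capacity to recognise a defined characteristic through repeated exposure to the same dataset.

```python
# A toy 'learner' in Elish's narrow sense: it adjusts two parameters so
# that, with each pass over the same small dataset, it gets better at
# recognising a defined characteristic (label 1). Entirely illustrative.

dataset = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1), (0.1, 0), (0.8, 1)]

weight, bias = 0.0, 0.0   # the adjustable parameters the program 'learns'
learning_rate = 0.1

def predict(x):
    """Classify x as 1 when the weighted score crosses zero."""
    return 1 if weight * x + bias > 0 else 0

for epoch in range(20):                      # repeated exposure to the data
    mistakes = 0
    for x, label in dataset:
        error = label - predict(x)           # 0 if correct, +1/-1 if wrong
        weight += learning_rate * error * x  # nudge parameters toward
        bias += learning_rate * error        # fewer mistakes next pass
        mistakes += abs(error)
    print(f"epoch {epoch}: {mistakes} misclassifications")
```

Nothing here resembles human understanding: the ‘learning’ is an automated adjustment of numbers against a defined goal, which is precisely Elish’s point.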

In her recent book, If…Then: Algorithmic Politics and Power, media and communications scholar Taina Bucher helps us to distinguish machine learning algorithms and techniques from the properties of deterministic algorithms. Bucher clarifies: ‘Given a particular input, a deterministic algorithm will always produce the same output by passing through the same sequence of steps. The learning type, however, will predict outputs based on previous examples of input data and outputs… In contrast to the strict logical rules of traditional programming, machine learning is about writing programs that learn to solve problems from examples.’ Bucher further reasons that although ‘algorithms are “trained” on a corpus of data from which they may “learn” to make certain kinds of decisions without human oversight…machines do not learn in the same sense that humans do’. Rather, Bucher argues, ‘the kind of learning machines do should be understood in a more functional sense’. Citing legal scholar Harry Surden, Bucher explains that machine learning-driven systems are “capable of changing their behaviour to enhance their performance on some task through experience”. Invoking the work of Adrian Mackenzie and Jenna Burrell, Bucher points out that while there are a variety of machine learning and algorithmic techniques that can be utilised to “impose a shape on the data” (e.g., neural networks, support vector machines, random forests, logistic regression models, k-nearest neighbours, etc.), what determines whether to use one technique over another “depends upon the domain (i.e., loan default prediction vs. image recognition), its demonstrated accuracy in classification, and available computational resources, among other concerns”.
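
Bucher’s distinction can be made concrete with a small, hypothetical sketch; the triage task, the thresholds, and the example cases below are all invented. The first function passes every input through the same fixed steps, while the second predicts its output from previous examples of inputs and outputs, so its behaviour depends on the examples it is given.

```python
# Deterministic: the same input always follows the same sequence of steps
# and always yields the same output.
def deterministic_triage(systolic_bp: int) -> str:
    return "refer" if systolic_bp >= 140 else "routine"

# Learning type (a one-nearest-neighbour toy): the output is predicted from
# previous examples of inputs and outputs, so different 'training' examples
# would produce different behaviour for the very same input.
def learned_triage(systolic_bp: int, examples: list[tuple[int, str]]) -> str:
    closest = min(examples, key=lambda pair: abs(pair[0] - systolic_bp))
    return closest[1]

past_cases = [(120, "routine"), (135, "routine"), (150, "refer"), (165, "refer")]

print(deterministic_triage(138))        # always "routine"
print(learned_triage(138, past_cases))  # "routine", but only because of
                                        # the particular examples supplied
```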

Scholars of ‘critical algorithm studies’, such as Taina Bucher and M.C. Elish, scrutinise what machine learning-driven systems “are actually doing as a set of situated practices”. Hailing from a diverse range of intellectual disciplines, from film studies to computer science, these scholars recognise the urgency of, and are consequently engaged in, efforts to analyse the existing ramifications of algorithmic implementation beyond the sensationalised accounts found in popular news sources and other media. Patients, I have found, are guided by a similarly trenchant pragmatism.


Patients and AI

“How should patients feel about AI, what questions should patients be asking, and how can patients stay informed?”

What do patients currently understand of AI? Of the patients I have interviewed thus far, the majority (approximately 70%) admitted to having an extremely limited to elementary understanding of what AI or machine learning techniques constitute (all of the patients I have spoken with had encountered the terms before). Those with rudimentary understandings conceded that they had gathered most of what they knew of AI from speculative fiction (e.g., books, film, and television) as well as news media. Some of these patients shared that their imperfect understanding of what ‘AI’ refers to caused them to feel incredulous and confused, and sometimes misled, about the ‘promise’ of AI, Big Data analytics, and precision medicine. Many in this group concluded that while they would like a sharper conception of what these terms mean and of the implications of these technologies, the demands of their illness or treatment precluded pursuing it. In our conversations, some explained that alongside their bodily distress, they were further concerned with attending to their mental health, with coming to an acceptance of death, or with other enduring commitments (e.g., providing for and taking care of their families and friends, work obligations, etc.).

The remaining patients were all below the age of sixty-five, and many had, either themselves or through a spouse, encountered AI systems through their employment (e.g., as engineers, government officials, marketing directors, and healthcare professionals). Those in this second group tended to have an intermediate to advanced knowledge of the principles of machine learning techniques and the complexities of their usage. They tended to voice the fiercest optimism and support for furthering this technology within healthcare, although nearly all of the patients I have spoken to thus far expressed at least a neutral acceptance, if not an eagerness and positivity, toward the potential of AI-driven systems for cancer care and medicine more broadly. Patients commonly expressed a belief that increased implementation of AI-driven technology in the medical field is both “inevitable” and “the way of the future”.

Based on their existing knowledge, these current and former patients and I also discussed their concerns relating to the usage of machine learning-driven systems in healthcare. Their concerns can perhaps be classified into three interrelated themes:

  • Concerns about the mechanics and implementation of AI systems
  • Concerns about the potential for these systems to further entrench inequities and existing social disparities in healthcare
  • Concerns about the institutional shifts in healthcare these technologies have the potential to precipitate

Patient concerns

The first set of concerns regarding the mechanics of AI systems can be exemplified in the following questions: How will developers ensure they are building systems from the very best training sets available? How will developers, medical professionals, legal arbiters and other intermediaries ensure that these systems will be sufficiently relevant and updatable — that is, adaptable and responsive to new data, insights, and enhanced techniques? Patients also questioned:

How can we ensure that the technological feats and benefits being promised by those developing, marketing, and selling these systems remain reproducible in clinical settings and thus maximally beneficial to the largest number of patients possible?

The patients who raised this last question had a fairly robust conception of how AI-driven systems operate; they stressed the need for honesty and accountability in delineating the shortcomings of emerging tech.

In questioning whether this technology will address health disparities or further entrench them, some patients raised concerns about how researchers and developers are grappling with the limitations of existing health data, which is often representative of only a small portion of the world’s population. Patients fear that if AI systems are trained on inadequate or unrepresentative data, these systems could reify medical insights (as well as produce medicines and outcomes) of limited efficacy. A smaller number of patients questioned whether emerging health tech might only be fully responsive to, and efficacious for, the demographic groups that resemble its developers. These patients worry that because the tech industry is dominated by affluent to middle-class, cis, white, male developers (see the Myers article cited below for more information), the questions, issues, and systems developers are currently pursuing might bear the (un)conscious markings of developers’ particular privileges, interests, politics, desires, and bodies. These patients reason that the needs, world-views, and commitments of those developing technological instruments will inevitably influence how these instruments take shape in the world.

Given the troubling homogeneity of the tech industry, these patients foresee that this tech has the potential to embody unconscious prejudices, blind spots, or inherent biases that could disproportionately affect the most vulnerable in society.

Taina Bucher advances a similar line of reasoning in If…Then. Machine learning involves using data to build models defined by particular features. ‘Feature engineering’, Bucher writes, ‘or the process of extracting and selecting the most important features from the data, is arguably one of the most important aspects of machine learning…the problem of determining what is “most important” depends on the data and the outcome you want to optimise…the understanding of data and what it represents then is not merely a matter of a machine that learns but also of humans who specify the states and outcomes in which they are interested in the first place’.
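
What this human specification looks like in practice can be suggested with a brief, invented sketch: the same raw record can be reduced to quite different feature sets, and the choice between them, like the choice of outcome to optimise, is made by people before any learning occurs. All field names here are hypothetical.

```python
# Two human choices of 'what matters' about the same raw record. Neither
# representation is selected by the machine; feature engineering happens
# before the learning algorithm ever sees the data.

raw_record = {
    "age": 62, "postcode": "CB2", "smoker": True,
    "tumour_grade": 2, "missed_appointments": 3,
}

def clinical_features(record):
    # One choice: represent the patient by clinical markers only.
    return [record["age"], record["tumour_grade"], int(record["smoker"])]

def behavioural_features(record):
    # Another choice: also include proxies such as missed appointments,
    # which may encode social circumstances rather than biology.
    return clinical_features(record) + [record["missed_appointments"]]

print(clinical_features(raw_record))     # [62, 2, 1]
print(behavioural_features(raw_record))  # [62, 2, 1, 3]
```

Models trained on these two representations could reach different conclusions about the same patient, which is precisely the human dimension Bucher highlights.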

The final set of patient concerns relates to the potential institutional shifts in healthcare that AI or machine learning-driven systems might precipitate. Firstly, a large proportion of the patients I spoke with asserted their desire for ‘human buffers’, that is, medical professionals who might oversee and manage AI systems within healthcare. A majority of patients disclosed that, no matter how efficacious the system, it would comfort them to retain the oversight of a medical professional who might assess and administer the care recommended or initiated by the AI system.

To ensure the quality of their care, patients described an individual who might fill a dual role: serving as the ‘human touch’ in their medical treatment and supervising these systems, providing the final say in their treatment so as to mitigate potential error.

Moreover, a few patients voiced concerns over the possibility that certain clinicians might face job insecurity following the widespread implementation of, for example, AI-powered diagnostic tools. Slightly more common was concern over the financial costs that large-scale implementation of AI systems within oncology and the greater health field will engender. Patients want to know who will bear the brunt of these costs and whether the arguably beleaguered health systems of the UK and the US might pass the costs on to patients. A few patients emphasised that in order for them to feel fully supportive of widespread implementation of AI systems in the medical field, governments and health institutions will first need to address issues of fragmentation and bureaucratic disarray so as to secure these systems and avoid potential data mismanagement or breaches. Finally, patients were eager to know how long it would take for emerging machine learning-driven technologies to be established in their local hospitals and to enhance their current care.


Educating patients

Beyond popular news sources and the burgeoning scholarly work of critical algorithm studies, patients continue to assert their desire to learn, from trustworthy sources, more about the AI-driven systems that could considerably impact their treatment. Patient advocates reason that, given the aforementioned demands on patients as well as the nature of clinical care, more advocates, researchers, and clinicians must be trained in how these systems operate and how they might affect patients, so that they can help patients navigate and assess the ramifications this technology may have on their treatment. Undeniably, more initiatives need to be put forth to educate patients in how machine learning-driven systems operate, what their efficacy might be, and what greater social effects they might precipitate. Such educational initiatives would serve as a crucial first step in assisting current, former, and future patients in understanding where and how we all can act to ensure that patients receive the quality of care they deserve.


  • Written by Roberta Dousa, Patient Experience Researcher at CCG.ai
  • Edited by Belle Taylor, Strategic Partnerships Manager at CCG.ai
  • Thanks to Geoffroy Dubourg-Felonneau and Harry Clifford for valuable discussions and interviews

References consulted:
