
What can humans do to guarantee an ethical AI in healthcare? Part II

Dr Raquel Iniesta

Senior Lecturer in Statistical Learning for Precision Medicine

30 October 2023

This is the second blog by Dr Raquel Iniesta on ethical AI in healthcare. In the first, she discussed why it is so important to maintain the human element within or alongside AI in healthcare for its use to be ethical. In this blog she discusses what needs to be in place for ethical AI in healthcare to happen, and proposes a number of frameworks and approaches.

One of the key elements of ethical AI in healthcare is the relationship between healthcare practitioners and AI systems, but what is often overlooked is the importance of extending this relationship to patients and AI developers. With this tripartite collaboration as its basis, we can start to establish key ethical principles for all the players to agree upon, understand and implement.

Nurturing the relationship between doctors and AI

To build a working relationship between doctors and AI, doctors must be capable of, and confident in, interacting with the AI system. Doctors have a moral obligation to identify gaps in their knowledge that affect their practice, in particular around how an AI support system works and the ethical issues it raises. Clinicians who understand how AI algorithms arrive at their medical suggestions will be better equipped to establish trust in the system, assess its outputs, incorporate that information into their decisions and recognise inaccurate or unfair predictions.

Studies of existing AI tools offer insight into how best to develop a harmonious relationship between clinicians and AI and maximise its potential. For example, last year researchers investigated how doctors in the UK were responding to Mia, an AI platform for breast screening that works alongside radiologists, flagging suspicious cases from scan data to improve oversight of potential cancers.

Doctors approved of the use of AI in this context and acknowledged that it could replace certain elements of the process, but they also highlighted the need for validation. When asked which form of evidence they would prefer for this validation, and to inform how they work with the Mia software, there was a clear preference for guidelines and studies at a national level.


Studies such as this indicate an appetite for using AI, but we must also be cautious: clinicians, particularly less experienced professionals faced with a difficult case, may become too dependent on automatically generated suggestions without considering individual and social factors. Disempowerment of clinicians should be avoided by promoting the development of their own clinical judgement. We should not forget that clinicians are responsible for their decisions, so any recommendation made on the basis of an AI output should be well understood by the clinician, and also by the patient, if a shared decision is to be taken. This is where developers come onto the scene.

It takes three to tango… the role of developers

The basic team that revolves around AI involves collaboration between clinicians, patients and AI developers. Communication is essential to achieving team goals, and for this to happen AI developers should design systems that are transparent and explainable to all members of the team. The role of transparency is to elucidate the so-called "black box" of AI algorithms, in which the patterns the algorithm follows to derive an output for a given person are opaque to that person and even to the expert developer.

Developers should prioritise explainable AI that allows humans to understand the reasoning behind decisions or predictions made by an AI system, even when the underlying algorithm is a black box (a simple illustration follows below). Furthermore, respect between teammates is key. Clinicians' decisions should be respected as those of a competent medical human agent. Patients' opinions and autonomy should be respected in clinical decisions, and patients should be empowered by being educated about AI systems and the related ethical issues.
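Returning to explainability for a moment: below is a minimal sketch, written by me rather than taken from any system described in this post, of one widely used post-hoc explanation technique, permutation feature importance. The model, dataset and feature names are synthetic and purely illustrative, and it assumes Python with scikit-learn installed.

```python
# A minimal sketch of post-hoc explainability for a black-box model,
# using permutation feature importance from scikit-learn.
# The data and feature names are synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "patient" data: five features standing in for clinical variables.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["age", "blood_pressure", "bmi", "biomarker_a", "biomarker_b"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name:>15}: {mean:.3f} +/- {std:.3f}")
```

An output like this gives the clinician a reasoning trail they can inspect and challenge, rather than a bare prediction they must accept on faith.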

Patients, in turn, would also hold some responsibility for clinical decisions, and under the General Data Protection Regulation (GDPR) they have the right to refuse to have significant decisions about their health made solely by an AI system. Developers should respect patients' health by educating themselves about the related ethical issues and by delivering AI systems that produce fair and non-discriminatory outputs. They should also promote artificial intelligence that is responsive and sustainable.
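One way a development team might honour the patient's GDPR right mentioned above is a human-in-the-loop gate: the AI system may recommend, but nothing becomes a decision until a named clinician signs it off. The sketch below is a hypothetical illustration of that pattern under my own assumptions, not an implementation described in this post; all names and fields are invented.

```python
# A hypothetical human-in-the-loop gate: the AI output remains a
# recommendation until a named clinician reviews and approves it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    patient_id: str
    suggestion: str          # e.g. "refer for further screening"
    model_confidence: float  # the AI system's own confidence score

@dataclass
class Decision:
    recommendation: Recommendation
    approved_by: Optional[str] = None  # clinician identifier, once reviewed

    def approve(self, clinician_id: str) -> None:
        """Record that a human clinician has reviewed and accepted the suggestion."""
        self.approved_by = clinician_id

    @property
    def is_final(self) -> bool:
        # The decision only takes effect after explicit human sign-off.
        return self.approved_by is not None

rec = Recommendation("patient-001", "refer for further screening", 0.87)
decision = Decision(rec)
assert not decision.is_final   # the AI alone cannot finalise the decision
decision.approve("dr-smith")
assert decision.is_final
```

The design choice here is that the type system, not policy alone, prevents a purely automated decision from ever taking effect.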

It is of paramount importance that developers and their institutions are aware of the harm that biased AI systems, ones that favour some groups and undermine the interests of others, can do to human health. It is fair to say that developers also hold ethical responsibility for the consequences of their AI tools' outputs.
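As an illustration of what checking for such bias can look like in practice, here is a minimal audit sketch, my own and not from the post, that compares a model's true positive rate across two hypothetical patient groups; a large gap in this metric (sometimes called an equal-opportunity gap) is one common warning sign of a system that favours one group over another.

```python
# A minimal fairness-audit sketch: compare true positive rates (TPR)
# across patient groups. All data and group labels are hypothetical.
import numpy as np

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives the model correctly flags."""
    positives = y_true == 1
    return (y_pred[positives] == 1).mean()

# Hypothetical outcomes, predictions, and a binary group attribute.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)  # e.g. two demographic groups

tpr_by_group = {g: true_positive_rate(y_true[group == g], y_pred[group == g])
                for g in (0, 1)}
gap = abs(tpr_by_group[0] - tpr_by_group[1])
print(f"TPR group 0: {tpr_by_group[0]:.2f}, "
      f"group 1: {tpr_by_group[1]:.2f}, gap: {gap:.2f}")
# A substantial gap would prompt investigation before deployment.
```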


Basis of ethical AI in healthcare

Identifying the key values that AI in medicine should align with is a challenging task. The four classical principles of Beauchamp and Childress (1979), respect for autonomy, beneficence, non-maleficence and justice, have long been central to medical ethics and are useful for reflecting on the ethical dilemmas raised by the emergence of AI in medicine.

Indeed, there is already a plethora of work in this area. The European Commission has recently published guidelines for ethical and trustworthy AI that echo the principles of medical ethics, and a systematic review published earlier this year examined 45 academic documents and ethical guidelines related to AI in healthcare. The review found 12 common ethical issues: justice and fairness, freedom and autonomy, privacy, transparency, patient safety and cyber security, trust, beneficence, responsibility, solidarity, sustainability, dignity, and conflicts. Guidance from the WHO outlines six principles to make sure AI works to the public benefit of every country: protect autonomy; promote human well-being, human safety and the public interest; ensure transparency, explainability and intelligibility; foster responsibility and accountability; ensure inclusiveness and equity; and promote artificial intelligence that is responsive and sustainable. Even a quick scan of the terms used in these guidelines and frameworks shows meaningful convergence between the different sources.

Actions to ensure use of ethical values

Central to developing AI in medicine that aligns with these ethical values are activities that promote collaboration among developers, clinicians and patients. We all know that, unfortunately, the time allowed for a visit to the doctor is scarce. For ethical AI in medicine it will be essential to create spaces where collaboration among developers, clinicians and patients can happen. Patient and Public Involvement and Engagement (PPIE) activities involve citizens in the development of research projects, engage the public in understanding the technology and its ethical issues, and bring patients' points of view, experiences and expectations into algorithm design.

Dr Raquel Iniesta leading a demo session on the ethical issues of integrating AI tools in medicine, and on the need to clarify the human role in enabling the ethical design, development and implementation of AI tools in healthcare (Bush House, London, 2023).

I recently led a demo session at the King's AI festival (Bush House, London, 2023), where I showed a group of 20 attendees how an AI tool for clinical decision-making works and introduced them to the related ethical dilemmas. I then invited them to express their concerns, fears and desires. Participants were very engaged and asked many questions about the AI agent, its function and its limitations. They wanted to know more about their rights as patients, and discussed what information patients would agree to include in a model, how information relevant to patients' health should be communicated, how clinicians' liability should be understood, and whether patients would be listened to by practitioners assisted by AI systems.

Where does the responsibility lie? On all of us

As a researcher and developer I listened carefully to their views and worries, and reflected on how my practice could incorporate them. I think the session met the aims of both empowering and educating citizens, but it also gave me plenty of food for thought for my own work. Clinicians and developers should run such activities systematically and regularly, and should promote the involvement of patients in keeping with the principle of respect for autonomy. Patients, for their part, have a responsibility to enrol in PPIE activities to better understand how decisions about their health are made.

As part of my NIHR Maudsley BRC funded work, I have published a paper in the journal AI and Ethics that describes five facts that can help guarantee ethical AI in healthcare. By providing this simple, evidence-based account of ethical AI and who needs to be accountable, I hope to offer guidance on the human action that ensures an ethical implementation of AI in healthcare.


Governments all over the world, particularly in the US and China, are making big investments to integrate AI systems into healthcare, trusting the potential of AI technology to enhance health outcomes and help make cost-efficient clinical decisions.

Ensuring education around medical ethics and AI basics for all stakeholders (clinicians, developers and patients) is fundamental to avoiding dehumanisation and promoting empowerment in AI-assisted, patient-centred medicine. Enabling collaboration, shared decision-making and shared responsibility across human stakeholders is also crucial. The role of each human agent in contributing to an ethical AI-assisted healthcare system must be recognised and respected. That way, everyone involved can succeed in the shared goal of implementing AI that works for the good health of all.

Raquel is a Senior Lecturer in Statistical Learning for Precision Medicine and runs the Fair Modelling lab at the Institute of Psychiatry, Psychology & Neuroscience. Her work is supported by the National Institute for Health and Care Research (NIHR) Maudsley Biomedical Research Centre, and she is part of the BRC Prediction Modelling Group.

What can humans do to guarantee an ethical AI in healthcare? Part I

In the first blog on ethical AI in healthcare, Raquel Iniesta explored why we need an ethical framework to enable AI to work for healthcare.
