AI still needs humans
AI systems rely on human beings who envisage, design, develop and implement them.
No AI system would exist without human knowledge. However, as algorithmic power grows, there are reasonable concerns that AI could surpass human intelligence. There are also serious risks that AI tools could end up replacing humans, in this context medical professionals, leading to dehumanisation in medicine. In an extreme situation, machines fed with huge amounts of patient data could make diagnoses or recommend treatments without the supervision of a competent human, or patients could end up dealing with chatbots and care robots instead of human medical professionals.
Dehumanisation in medicine would be a consequence of the lack of human presence and of deindividuating practices. It would risk a loss of empathy: only humans are conscious of the patient’s situation, can empathise with other human beings, and can fully understand the environment and context. Indeed, these are essential factors in meeting the global moral imperative of the medical profession, which states that “each patient must be treated as a person” to preserve human dignity.
Moreover, if machines replaced medical professionals, this would also risk disempowering both patients and clinicians. Patients would not be listened to about their suffering, a serious risk to their recovery. As mentioned earlier, patients are not just made of data, and their experience deserves to be heard by another human being who can understand and empathise with them.
Subjectivity is also essential
It is well known that including patients’ subjective experience in medical decisions is essential to achieving a good response to treatment. This idea underpins patient-centred and evidence-based medicine, as well as the biopsychosocial model of disease: all three approaches have been highlighted by national and international bodies (the US Institute of Medicine, the World Health Organization) as crucial for medical institutions and professionals in ensuring quality care and patients’ welfare.
Essential elements of these approaches suggest that:
- The patient should be placed at the centre of the medical stage.
- Patients should become active participants in their care, and decisions about their health should be shared with the clinician.
- Patients should have time during their doctor’s appointment to express their concerns, as this is crucial to improving their health.
As explained in Sackett’s training manual on evidence-based medicine, “the unique preferences, concerns and expectations each patient brings to a clinical encounter must be integrated into clinical decisions if they are to serve the patient”. Patients are not just made up of interconnected systems, a vision that would risk objectification and dehumanisation. The social and psychological dimensions, and their interaction, should also be considered, not only the biological dimension represented by data measurements.
But would an AI system ever be able to accomplish this? AI medical tools can hardly listen in the way humans can, or incorporate patients’ subjective experiences into their automated decisions, even though significant effort is being invested in the field. Even if chatbots such as ChatGPT can display apparently conscious behaviour in conversation, this is not spontaneous or intelligent behaviour but a task learnt from existing patterns and performed unconsciously. It will be hard, if not impossible, for AI algorithms to capture the holistic human essence.
Remembering the value of non-artificial intelligence
In order to achieve high-quality care, it is essential to recognise the value of human presence. As such, artificial intelligence should complement the “non-artificial” intelligence of medical professionals.
Doctors have the potential to know the facts, the science behind the facts, the patient’s context, and their own clinical skillset better than AI does. Doctors capture information through their five senses, information that will never be recorded in full in electronic health records. Clinicians can listen to their patients and establish a human relationship with them, so that real person-centred care can be delivered and decisions shared with empowered patients. Clinicians apply explicit knowledge, the knowledge of textbooks, but also tacit knowledge, which relates to medical intuition and deepens with experience.
AI systems cannot benefit from that tacit knowledge, as it cannot be codified. Nevertheless, AI has already demonstrated superiority in a number of medical tasks, including impressive performance in imaging. Although the predictive capabilities of machine learning have yet to be proven in some scenarios, AI has easily outperformed humans at producing predictions where many data points are available for each patient. Even so, these predictions still benefit from a human stepping in to check that they make sense and to apply their own understanding.
Clearly, AI already has a place within our healthcare systems, and naturally there are concerns about how it will be used and what that will mean for patients. What is becoming clear is that for AI in healthcare to provide value to patients, clinicians and healthcare systems, it must be deployed ethically, and that involves humans.
Raquel is a Senior Lecturer in Statistical Learning for Precision Medicine and runs the Fair Modelling lab at the Institute of Psychiatry, Psychology & Neuroscience. Her work is supported by the National Institute for Health and Care Research (NIHR) Maudsley Biomedical Research Centre, and she is part of the BRC Prediction Modelling Group.