As a radiologist and researcher, I read a lot about how AI will replace me. This research shows that the general public, like patients and physicians, is comfortable with AI as a tool, but not as an autonomous decision-maker. This perspective is vital in informing how we implement AI programmes, keeping patients and future healthcare users (i.e. the public) not only informed, but at the heart of what we do.
Dr Carolyn Horst, NIHR Clinical Lecturer & Radiology Registrar at King's and GSTT
08 April 2025
Doctors stay, AI assists - new study examines public perceptions of AI in healthcare
A new study by researchers at the School of Biomedical Engineering & Imaging Sciences, published in BJR | Artificial Intelligence, has found that most people support AI's role in medicine but draw a clear line: AI should assist, not replace, doctors.

The research, conducted as part of an exhibition at Science Gallery London, surveyed over 2,000 visitors on their views about artificial intelligence (AI) in healthcare. The survey found that 80% of people said AI should be used in medicine, while just over half (56%) felt it would be safe. But when it came to trusting AI with major decisions, participants were more wary: more than 70% rejected the idea that AI could take over doctors' roles. Even if AI makes fewer mistakes, respondents were not comfortable letting it act alone. "Most people would not be happy for AI to make decisions without considering their feelings", the researchers noted.
The study aimed to fill a gap in the existing research landscape, as most previous studies have focused on physician or patient perspectives rather than the general public. Notably, older respondents (50+) were more likely to consider AI safe (62%) than younger participants (55%). Gender differences were also evident: 88% of men supported AI's implementation in healthcare systems, compared to 77% of women.
The results align with existing research on the attitudes of healthcare professionals, particularly radiologists, who generally view AI as a complementary tool rather than a replacement. The findings also underscore two critical factors for AI adoption in medicine: public consent and transparent communication. Trust in AI, the study suggests, will depend on clear explanations of how these technologies operate and ensuring that human oversight remains central to clinical decision-making.
As AI continues to integrate into healthcare systems, balancing innovation with ethical and practical concerns will be essential. While the technology offers significant potential for improving efficiency and accuracy, maintaining public confidence will require a careful, patient-centred approach.