Clinical Early Warning Scores are vital tools for identifying patient deterioration early on. However, high false alarm rates pose challenges, leading to desensitisation or delayed responses among healthcare practitioners. While deep learning models trained on electronic health records offer promise in enhancing predictive capabilities, they are susceptible to biases inherent in such data. Based on the CogStack Foresight model, we will design and implement a much-needed fair, robust and reproducible computational early warning system.
Dr Zina Ibrahim, Senior Lecturer in Artificial Intelligence in the Department of Biostatistics and Health Informatics, King’s IoPPN
09 February 2024
King's College London announced as a winner of the Government's AI Fairness Innovation Challenge
A project led by Dr Zina Ibrahim, Senior Lecturer in Artificial Intelligence in Medicine at the Institute of Psychiatry, Psychology & Neuroscience, has won the Government’s artificial intelligence (AI) Fairness Innovation Challenge. The winning project will use AI to address bias in early warning systems used to predict cardiac arrest in hospital wards.
Launched by the UK Government’s Department for Science, Innovation and Technology (DSIT) and delivered by Innovate UK, the Fairness Innovation Challenge aims to drive new solutions to address multiple sources of bias in AI systems across higher education, healthcare, finance, and recruitment. King’s College London is one of four organisations awarded funding.
The new project is led by Dr Zina Ibrahim in collaboration with co-investigator Professor James Galloway, Professor of Rheumatology at King's Faculty of Life Sciences & Medicine, a number of advisors across informatics and medicine, and external partners King’s College Hospital and OpenClinical. It aims to identify, quantify and mitigate bias and robustness issues in deep learning (DL) clinical early warning systems.
The project builds on the existing Foresight model, part of the CogStack platform, developed at King's College London in collaboration with NIHR Maudsley Biomedical Research Centre (BRC), University College London Hospitals BRC, King’s College Hospital NHS Foundation Trust and Guy’s and St Thomas’ NHS Foundation Trust. Foresight is a novel GPT-based pipeline that is trained on electronic health records to forecast future medical events such as disorders, medications, symptoms and interventions.
Machine learning models, including DL models, are sophisticated algorithms that can perform useful predictive and prescriptive tasks by unearthing statistical patterns from extensive datasets. They are designed to improve their accuracy as more data is processed, thereby enhancing decision-making across various domains.
DL models trained on vast electronic health record datasets aim to improve the prediction of deterioration compared with traditional early warning scores. However, they inherit biases prevalent in electronic health data, such as limited information on underrepresented patients and rare conditions. In addition, their reliance on historical records leaves DL systems prone to propagating human biases relating to gender, age and race.
The researchers will design and implement a fair and robust early warning system. To do this, they will embed the ability to recognise and reason about clinical knowledge into a DL model, using it to identify sources of bias and to steer the predictions of a DL early warning system when bias is detected. This will also improve the accuracy of the system's prognoses where data is sparse because patients are monitored less frequently.
DSIT will now invest more than £465,000 across the four winning bids.
Other winning solutions are:
- Higher Education: The Open University will look at ways to improve the fairness of AI systems in higher education.
- Finance: The Alan Turing Institute will create a fairness toolkit for SMEs and developers to self-assess and monitor fairness in Large Language Models (LLMs) used in the financial sector.
- Recruitment: Coefficient Systems Ltd.’s solution will focus on reducing bias in automated CV screening algorithms that are often used in the recruitment sector.
The winners were selected by expert assessors chosen by DSIT and Innovate UK. The rigorous evaluation process considered each bid's potential impact, innovation, and alignment with the proposed AI regulatory principles, including fairness, set out in the UK White Paper, A pro-innovation approach to AI regulation.
Michelle Donelan MP, Secretary of State for Science, Innovation and Technology said:
“Our AI White Paper is fostering greater public trust in the development of AI, while encouraging a growing number of people and organisations to tap into its potential. The winners of the Fairness Innovation Challenge will now develop state-of-the-art solutions, putting the UK at the forefront of developing AI for public good.”
DSIT looks forward to the continued progress and impact of these new solutions in helping to shape a fair AI landscape for the future. The projects will start by 1 May, and the winners will be supported by DSIT and by the regulators, the Equality and Human Rights Commission (EHRC) and the Information Commissioner's Office (ICO), to ensure their solutions comply with data protection and equality legislation.