
22 April 2025

Scientists use ChatGPT to make nation's favourite podcasts more accessible

The work breaks new ground on audio media accessibility for people with complex communication needs.


In partnership with the BBC, researchers from King’s College London and collaborators have created the world’s first AI-powered accessibility tool for radio and podcasting aimed at stroke survivors.

After a stroke, many people develop aphasia, a language impairment that can lead to difficulties in speaking, reading and language comprehension. Although over 350,000 people in the UK currently live with the condition, it is not well provided for in comparison to other disabilities – particularly in relation to audio media.

According to a 2024 Ofcom study, 92% of the UK listens to some form of audio content – radio, podcasts or audiobooks – at least once a week. While large audio platforms such as Spotify and BBC Sounds have introduced subtitles to help hard-of-hearing listeners, this does not help those who may struggle to comprehend the content of a podcast or audiobook – like those living with aphasia.


To overcome this, the team worked with local patient groups and charity Aphasia Re-connect to design Simplico, an app that uses generative AI to simplify the language of an audio recording, making it more comprehensible for listeners who may struggle to understand its content.

Taking the audiobook of Louisa May Alcott’s ‘Little Women’, the programme used GPT-4 to pause the audio and simplify its content, while providing a summary of each section of the book through text or speech. Once a passage has been simplified, the programme also lets users simplify it again to a different reading level to support comprehension.

The programme can also provide alternate representations of passages listeners might find difficult through keywords, emojis or AI-generated images.
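Simplico’s implementation has not been published, so the sketch below is only an illustration of how a GPT-4-based simplification and summary step of this kind might be prompted, using the OpenAI Python SDK. The function name, prompt wording and reading-level parameter are assumptions, not the team’s actual code.

```python
# Illustrative sketch only: Simplico's real implementation is not published.
# Shows one plausible way to ask GPT-4 to simplify a passage at a chosen
# reading level and return a short summary (OpenAI Python SDK >= 1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def simplify_passage(passage: str, reading_level: str = "basic") -> dict:
    """Return a simplified rewrite and a one-sentence summary of a passage."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You rewrite text for listeners with aphasia. "
                    f"Use short sentences and common words at a {reading_level} reading level."
                ),
            },
            {
                "role": "user",
                "content": (
                    "Rewrite the passage below, then add a one-sentence summary "
                    "prefixed with 'Summary:'.\n\n" + passage
                ),
            },
        ],
    )
    text = response.choices[0].message.content
    simplified, _, summary = text.partition("Summary:")
    return {"simplified": simplified.strip(), "summary": summary.strip()}


# A listener who still finds the text hard could ask for an easier version,
# e.g. simplify_passage(chapter_text, reading_level="very simple").
```

In a design like this, re-running the same call with a different reading level is what would allow content to be simplified again on request, as the article describes.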


Dr Timothy Neate, Senior Lecturer in Computer Science at King’s College London and lead researcher behind the Ca11y project, which created this technology, said: “Podcasts, audio dramas and documentaries have become an enormous part of our listening culture in recent years, but until now very little work had been put in for accessibility concerns beyond hard of hearing.

“Harnessing the power of AI, our programme is the first to help open that world to those with aphasia – one of the least recognised hidden disabilities. In so doing, not only can we make our everyday a little more equitable; we can also show that generative AI can help marginalised groups, as opposed to just misrepresent them.”

The team are now working with BBC R&D, who have provided 15 episodes of a popular drama on BBC Sounds for their programme to work with. The ultimate goal is to integrate the software into the app, making the nation’s favourite audio content more accessible.
