28 April 2023

Reliable and Trustworthy AI for Defence

Read about King's work on reliable and trustworthy artificial intelligence for defence, as featured in the Bringing the Human to the Artificial exhibition.

Image: a data-processing centre, viewed down a hallway with bright overhead lights and glass floor-to-ceiling units on either side.

The UK has developed an AI strategy for defence and national security. The military is experimenting with new autonomous platforms and with the doctrine and concepts that might allow their effective employment. Autonomous systems are already at work in data processing and intelligence analysis.

However, the application of AI to defence raises concerns, both ethical and practical. The debates underway about the ethical implications of using AI in national security, including in decisions about the employment of lethal force, touch on the following themes.

Human decision-maker

Even in an era with pervasive and increasingly sophisticated machine cognition, the importance of the human decision-maker endures, reflecting an ethical and cultural desire to preserve meaningful human control.

Skillsets

There is a need to upskill the workforce involved in defence, and more broadly to promote AI literacy in wider society so that the concerns and implications can be better understood.

Mass and scale

While AI may allow maintenance of qualitative advantage at scale, there remain profound practical questions, for example of how to organise (and lead) a mixed platoon of humans and autonomous machines.

Vulnerabilities

AI systems are potentially vulnerable to electronic warfare countermeasures such as jamming or spoofing, and are susceptible to offensive cyber attack. There are also difficulties of assurance and trust, for example when AI systems inherit bias from their training data. AI used for defence must be reliable and trustworthy, and sufficiently transparent that humans can understand its decision-making.

Project Lead

Kenneth Payne
School of Security Studies
Faculty of Social Science & Public Policy
King's College London

In this story

Kenneth Payne

Professor of Strategy