I agree that AI will bring radical changes to defence, but much careful thinking will be needed to deliver the MoD’s vision. My recent book, ‘I, Warbot’, tackles some of the salient issues, and ahead of the review I made some recommendations – several of which were echoed in the Command Paper, like the need for a new Defence Centre for AI to drive through the changes that are needed.
AI will make an increasingly significant contribution to what the military calls ‘fighting power’, so it’s important to get this right. Doing so requires a keen appreciation of AI’s abilities and limitations. It demands a sensitive understanding of the ethics involved in automating decisions, including those that may result in violent action. And the Ministry of Defence must carefully weigh the risks of reshaping its organisations and concepts – including sacrificing legacy capabilities, like the Warrior IFV, to free up funding for capabilities that are currently experimental, or even merely anticipated.
So far so good. In addition to the Defence Centre for AI, the MoD has established an AI ethics committee that engages a range of actors from industry, politics and academia. It’s also doing lots of experimental work and wargaming with new platforms and systems, especially at sea and in the air.
Less publicly, AI is playing an increasing role in cyber-activities. It would be surprising if the United Kingdom, with its long history in electronic warfare and signals intelligence, were not already playing a leading role here too.
What are the challenges? I think these issues, among others, will occupy the MoD’s strategists in the years ahead:
1) AI will enable automated decision-making, the speed and scale of which make ‘meaningful human control’ of lethal weapons difficult, if not impossible – at least in the moment of combat itself. The UK’s position so far has been that humans are always involved in such decisions, but this will be tricky to sustain: there will soon be too many fast-moving robots in a swarm. As a result, the military will need to craft careful rules of engagement for AI systems that capture our human intentions, even as they evolve. This is not just about regulating AI ‘trigger pullers’ – autonomy will make a contribution throughout military systems, so that the judgment of human operators will be bounded by AI’s involvement elsewhere.
2) The MoD will need to develop ways of mitigating the considerable risks arising from the widespread adoption of AI systems, such as:
- Poor performance, for example as a result of bias in training data, or an inability to respond flexibly to novelty.
- AI’s vulnerability to hostile actors, whether through electronic countermeasures such as deception via ‘spoofing’, or through offensive cyber-activities.
- The possibility of small-scale failures that cascade catastrophically through complex, interconnected systems, too quickly for humans to intervene.
- The possibility that the AI simply misinterprets what it is being asked to do and cannot be re-tasked in time.
3) There’s much more work to be done on understanding the interaction between humans and machines in the military domain. Among the issues here is the extent to which humans will trust machines – too much, as with ‘automation bias’ of the ‘computer says no’ variety; or too little, which will slow decision-making and limit scale. There’s much speculation here, but as with many of the practicalities of fielding AI systems, the only way to get a firm idea is through testing: extensive and rigorous exercises, wargames and simulations.
4) The MoD needs to reconcile the demands of warfighting (including the pressures to automate that arise from an intense ‘security dilemma’) with the evolving norms and ethics of AI in warfare. Importantly, its approach to AI will both shape and be shaped by larger forces than the bare technical possibilities of AI or the strategic imperatives of war. Different cultures develop and instrumentalise technologies in their own, distinctive ways – so the United Kingdom will employ military AI differently from China, say, and even from close allies like the United States and France.
That is the reverse of the problem facing opponents of lethal autonomous weapons: too often their argument floats free from an understanding of the forces driving the acquisition of AI systems. The result is a utopian demand for a total ban that takes no account of those motivations, or of the immense, likely insurmountable, challenges of international regulation.
To tackle these and other challenges ahead, the MoD needs a blend of technical and non-technical approaches, or epistemologies. Some of the literature on AI runs the risk of ‘technological determinism’ – of seeing technology as somehow arriving out of the blue, ready-formed, and prompting irresistible changes in what societies do. The Ministry will certainly need to learn from technical experts – especially those able to explain the limitations of modern AI. But it also has much to learn from the humanities and social sciences, particularly from historians of military technology who can ground today’s developments in a deeper process of innovation in warfare.
The Integrated Review was unambiguous – recognising the tremendous military potential of AI, the UK intends to transform its defence capabilities, accepting considerable risk in so doing. Let’s hope its leaders now have the good sense to reflect on these and other challenges ahead.