To many decision-makers and members of the public, automated systems seem to operate like a ‘black box’. This is a real headache for public and private organisations that fear the legal repercussions of deploying flawed systems of this sort: the potential costs of high-profile court cases, both financial and reputational, are considerable. Calls are therefore mounting to deploy only systems that are explainable, in the sense that the reasoning behind machine decisions can be traced and evaluated retrospectively. In the US, legislators are currently debating the ‘Algorithmic Accountability Act’, which would require organisations to conduct robust AI fairness checks. For the European Commission, explainability has emerged ‘as a key element for a future regulation’ of AI beyond the existing measures enshrined in the GDPR, for example in Article 13 of the recently proposed AI Act. And in the UK, even the intelligence service GCHQ has joined the debate, promising to build on Explainable AI wherever possible.
With explainability gaining so much traction and attention, the big question is how it can be achieved in practice. How can abstract demands for transparency and accountability be translated into actual AI design principles? Explainability itself is an umbrella term for a whole range of finer-grained concepts, and several engineering approaches compete in this space. A team at King's Department of Informatics offers a unique perspective on the complex challenge of Explainable AI: if we wish to explain the decisions a system makes, we should take a closer look at how it arrives at them, how it was configured in the first place and what kinds of data it uses.
King's provenance-based approach
To make headway on this sizeable problem, the team of Professor Luc Moreau and Dr Trung Dong Huynh and their project partners from the University of Southampton collaborated with the UK’s Information Commissioner’s Office (ICO). They set themselves the following challenge: can we build a proof of concept for an automated decision-making system that provides meaningful, interpretable details of its decisions at each step of the way? Any such system would have to comply with Article 15 of the GDPR, which explicitly states that ‘data controllers’, i.e. anyone who collects and uses personal data, must be able to provide ‘meaningful information about the logic involved’ in automated decisions. ‘Leveraging technology to see how socio-technical systems are constituted is a way to address their inherent opacity, as long as the explanations that are generated support rather than substitute human intervention’, Professor Sophie Stalla-Bourdillon from the University of Southampton explains. ‘Provenance offers the possibility to de-construct automated decision-making pipelines and empower both organisations to reach a high level of accountability and individuals to take action and exercise their rights’.
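The article does not spell out the team’s implementation, but the basic idea of recording provenance for a decision pipeline can be sketched with the W3C PROV data model, which Professor Moreau helped to standardise. The snippet below is a minimal, hypothetical illustration using the open-source `prov` Python package: every identifier (the application, model, decision, scoring service and organisation) is an assumed placeholder for the sake of the example, not part of the King’s proof of concept.

```python
# A minimal, illustrative PROV record of one automated decision.
# All identifiers (ex:application-123, ex:credit-model-v2, ...) are
# hypothetical placeholders standing in for the pieces of a real pipeline.
from datetime import datetime
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/decision-pipeline/')

# The data and artefacts involved in the decision (PROV entities)
application = doc.entity('ex:application-123')   # the applicant's data
model = doc.entity('ex:credit-model-v2')         # the trained model that was used
decision = doc.entity('ex:decision-123')         # the outcome that was produced

# The decision step itself (a PROV activity with start and end times)
decide = doc.activity('ex:decide-application-123',
                      datetime(2021, 6, 1, 10, 0, 0),
                      datetime(2021, 6, 1, 10, 0, 2))

# Who and what is accountable (PROV agents)
service = doc.agent('ex:scoring-service', {'prov:type': 'prov:SoftwareAgent'})
controller = doc.agent('ex:acme-bank', {'prov:type': 'prov:Organization'})

# The relationships that make the pipeline traceable afterwards
doc.used(decide, application)              # the step read the applicant's data
doc.used(decide, model)                    # ...and a specific model version
doc.wasGeneratedBy(decision, decide)       # the outcome came from this step
doc.wasDerivedFrom(decision, application)  # and is derived from the input data
doc.wasAssociatedWith(decide, service)     # executed by this piece of software
doc.actedOnBehalfOf(service, controller)   # which acts for the data controller

print(doc.get_provn())                     # human-readable PROV-N account
doc.serialize('decision-123-prov.json')    # PROV-JSON record for later querying
```

A record like this can later be queried to reconstruct which data and which model version a particular decision was based on, and who was responsible for running it, which is precisely the kind of ‘meaningful information about the logic involved’ that Article 15 asks data controllers to provide.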
Over the course of only a couple of months, the team has delivered some impressive results. Professor Moreau explains: