23 November 2023
King's researchers comment on the outcomes of the UK AI Safety Summit
Members of the King’s Institute for AI community share thoughts on the AI Safety Summit outcomes
Earlier in November, representatives from across the world, including government leaders and ministers as well as leaders from industry, academia and civil society, came together at the AI Safety Summit at Bletchley Park to identify next steps for the safe development of frontier AI. King's researchers share their thoughts on the outcomes.
Dr Peta Masters, Faculty of Natural, Mathematical & Engineering Sciences:
‘I'm concerned that it is in the interests of the 'leading frontier AI developers' to mystify non-specialists and overclaim the capabilities of their products while simultaneously hyping the dangers, in order to keep themselves in the driver's seat for future decision-making. The biggest current threat from AI is its facilitation of misinformation and disinformation (whether intentional or accidental). But in my view, a greater danger is that companies with such obvious vested interests are lining up to pretend that only they can protect us from the AI beast.’
Dr Mercedes Bunz, Faculty of Arts & Humanities:
‘While the Government's AI Safety Summit primarily discussed long-term concerns and the potential risks of AI, other events presented perspectives from the business sector, public interest groups and academic circles on building responsible AI systems. The idea of creating ‘AI for everyone’ was successfully demonstrated by the 'AI Fringe' events, where prominent entities such as the British Academy and Google DeepMind participated alongside smaller but highly relevant research groups such as the Public Law Project, which monitors AI usage in the UK government; many of King’s College London’s academics contributed, too. The event series brought together different communities, insights and concerns, overwhelmingly voicing a need for AI regulation, a sentiment that many in the industry share as they seek clarity on the UK's direction with AI.
AI is an exciting technology but still very new, which is why it needs to be monitored. The risk of AI technology going wrong is very real: despite their impressive performance, all AI systems show sudden failures that are often difficult to explain. In the UK, we already test and operate a wide range of AI technologies. While this openness is good, citizens should be able to find out when and where they are confronted with these AI systems.
Algorithms that assist the government in making decisions can always go wrong, as happened with the algorithm used by the exam regulator Ofqual, which further downgraded disadvantaged students. This is why AI systems in public use should be listed openly on a website accessible to all citizens, to allow quality control and avoid harm.
The Bletchley Declaration reminds companies that develop AI of their strong responsibility for the safety of those AI systems, but it falls short of providing businesses with a framework to follow. While countries such as the US and China, and the European Union, work on creating national frameworks to mitigate risk, including national testing of AI systems, the UK falls behind. Here, the work the government is supposed to do is left to businesses, which is astonishing. The lack of a UK approach will simply mean that the frameworks of other countries will be applied here.
The prime minister’s statement that it is “hard to regulate something [like AI] if you don't fully understand it” is interesting, all the more so as we are currently testing these ‘unfamiliar’ technologies on citizens, for example in policing. The policing minister endorsed facial recognition technologies as part of the Government’s AI summit. Moreover, in the UK government AI is already assisting in areas such as welfare benefits, social housing management and migration, often without much transparency or risk assessment. This is worrying.’
Dr Juan Grigera, Faculty of Social Science & Public Policy:
‘A missed opportunity to discuss how AI will affect us all, as the communities and workers most affected by AI were not included in this closed-door Summit.’
Dr Raquel Iniesta, Institute of Psychiatry, Psychology & Neuroscience:
‘I think there were three major outcomes from the AI Safety Summit: the agreement on the need to thoroughly test advanced AI models before release, the announcement of a new UK-based AI Safety Institute, and the proposal for an International Panel on Artificial Intelligence Safety, independent of political interference, that can inform policymakers and the public.
The advances are very welcome, and the intentions seem to go in the right direction: protecting people whilst leveraging AI's potential globally. I think that, as a first summit, it beat expectations. It achieved the challenging goal of not being only diplomacy but of putting forward most of the real issues, such as the need to place human values at the centre of the discussion, and even suggesting potential solutions. Key questions, such as how to set thresholds on what makes an AI system dangerous and how to build fruitful collaborations worldwide, were discussed and remain open to be defined.
Future summits, discussion, research in the field and collaboration between stakeholders are urgently needed. The summit's approach of uniting governments, scientists, industry and civil society to debate and define these very challenging questions is the way to go. I am very much looking forward to discussing concrete proposals from the new AI Safety Institute, the panel and future summits that can make a real contribution to translating intentions into action for safe and ethical AI.’
Steven Jiawei Hai, Faculty of Social Science and Public Policy:
‘The UK AI Safety Summit, through the Bletchley Declaration, indicated a global consensus among the major forces of AI and technological innovation. This consensus emphasizes an international management framework for cutting-edge artificial intelligence that balances safety and development, achievable through prudent negotiation and inclusive, mutually beneficial approaches.
This Summit has made the global community realize that frontier AI is a concept in continuous evolution, with capabilities and risks tied to social policy, economic development, political systems, and international cooperation. A more inclusive cognitive system is needed for precise assessment and the integrated implementation of related policies.
In an increasingly multipolar world of technology and politics, the UK government, in collaboration with other international forces, can proactively lead by centering inclusive development in the vision for global AI cooperation, especially in areas of safety and risk.
One lesson for the UK from the Bletchley Declaration is the need for proactive leadership by the UK government and various domestic and international forces. They should assist stakeholders and participants in providing a "universal AI risk management-feasibility transformation model" to balance national and international frameworks. This effort should harness the productivity and creativity arising from the interplay between national and international frameworks, promoting more sustainable innovation at a higher level.
The UK government should gradually channel the experiences and reflections from the Bletchley Declaration and the AI Safety Summit into cooperation within international organizations, moving from the G7 to the G20 and OECD, and then to the UN, to achieve a more internationalized, democratic model of risk-sharing and participatory development.
However, the summit lacked participation and consultation from more developing countries and the Global South. Many of the commercial entities present were large multinational tech giants, and startups and SMEs were excluded from detailed framework discussions. Higher education institutions and research organizations should be institutionally encouraged to play a more enlightening role.’