expert reaction to the Bletchley Declaration by countries attending the AI Safety Summit

Scientists react to the Bletchley Declaration from countries at the AI Safety Summit.

 

Dr Stuart Armstrong, Co-Founder and Chief Researcher, Aligned AI, said:

“It’s our view that alignment is a capability: if an AI is likely to act against its users’ interests, then it’s unusable – and hence unprofitable. Only the most reliable AIs can be effectively used. And the short-term harms are a preview of the long-term risks: AIs that don’t do what we want them to do, and deviate in unforeseen and unexpected ways. We strongly believe that by focusing on this core issue, bringing AIs in alignment with human values, we can address the big and the small dangers of AI.”

 

Will Cavendish, global digital leader at sustainable development consultancy Arup, and former DeepMind and UK Government advisor, said:

“It’s great to see the communique acknowledge the enormous global opportunities for AI as well as safety and regulation, which is rightly an important part of the summit. We can’t afford to be scared of AI, as we simply can’t solve humanity’s biggest challenges, like climate change and the biodiversity crisis, without embracing safe and effective AI. When examining regulation, attendees at the summit must remember to consider an ‘ethic of use’ – where we have an obligation to use technology that will help humanity – rather than only an ethic of ‘do no harm’.”

 

Professor Anthony Cohn, Professor of Automated Reasoning at the University of Leeds and Foundational Models Theme lead at the Alan Turing Institute, said:

“The Bletchley Declaration is to be welcomed as an initial step towards regulating and managing the risks of AI whilst still allowing the world to enjoy the many potential (and existing) benefits that AI can bring. The involvement of many of the key countries and organisations worldwide adds to the credibility of the declaration. This is inevitably the first of many steps that will be needed, and indeed two further meetings are already envisaged within the next year.

“The present declaration is heavy on vision but, unsurprisingly, light on detail, and the “devil will indeed be in the detail” when it comes to actually creating an effective international regulatory regime which ensures safe deployment of AI whilst also facilitating beneficial applications. The degree of regulation should be commensurate with the potential risks, though these may not always be apparent or easy to estimate. Whilst further meetings are already planned for 2024, there are also immediate risks of AI, particularly in the context of forthcoming elections, where AI-generated text and “deep fakes” have the potential to disrupt the democratic process.

“There is an urgent need to educate the public on the risks and limitations of AI and to be wary of believing everything they see or read in the unregulated media. Moreover, anyone using an AI system, like a chatbot, should clearly be made aware that they are engaging with an AI, not a human. These short-term risks, which also include problems deriving from biased training data and the resulting biased outputs, are more urgent to address than the possible, but still distant, risks relating to the conceivable creation of AGI, on which the declaration is more focused. It is worth noting that it is likely that further agreements will be needed to make clear where the responsibility for a deployed AI system rests. Finally, the declaration makes clear the need for further research on AI safety, which will be important not only to evaluate the safety of AI systems but also to ensure their safety and trustworthiness.”

 

Dr Andrew Rogoyski, Director of Innovation and Partnerships at the Surrey Institute for People-Centred AI, University of Surrey, said:

“The Bletchley communique is welcome and illustrates the importance of AI but it falls short of binding arrangements that will control and shape the type of AI being developed at such great pace. The UK and US AI Safety Institutes are a sensible step forward but my fear is that by the time they are established and understand how to operate, the world of AI will have already moved on.

“Ian Hogarth illustrated the anticipated growth of AI power in 2024, indicating that the control of the development of AI needs to go beyond good intentions and fine principles, and to do so at a pace that will be unfamiliar to many policy makers.

“Vera Jourova, on behalf of the EU, made the important point that the AI debate shouldn’t be about risk versus benefit: it should be both at the same time, not one or the other. In order to reap the rewards of such powerful technologies, we will need to create controls, technical and legislative, ideally in international accord.

“The UAE made the point that governance of AI needs to take new forms, unfamiliar to traditional governmental approaches, in a way that can more effectively match the extreme pace of technical development.

“It was interesting to see China stress the importance of open source, alongside the general consensus emerging that AI must be a benefit for all, and not serve to further concentrate power and wealth in the hands of the few.

“One of the concerns in Europe is that the EU AI Act will disadvantage European AI companies. Not everyone agrees but I noted a throwaway line by Vera Jourova that the EU were going to make available some major high performance computing facilities, free of charge, to companies wishing to develop AI platforms, which could prove to be a major draw for talent in the EU.”

 

Professor Michael Huth, Head of the Department of Computing, Imperial College London, said:

“This joint declaration of 29 nations on the safety of AI is important as an expression of intent for future collaborations that transcend, yet are consistent with, national approaches to AI regulation. The development of global standards for the safety of AI, and the establishment of similar standards for privacy and other aspects of trustworthiness of AI should be a key objective of those collaborations. The declaration strengthens the value of transparent approaches to AI R&D – which researchers in our Department of Computing and at my start-up xayn.com already practice.”

 

Dr Marc de Kamps, Associate Professor in the University of Leeds’ School of Computing, whose areas of expertise include Machine Learning, said: 

“The creation of an AI safety institute could be a very positive step as it could play a major role in monitoring international developments and in helping to foster a broad discussion about the societal impacts of AI.

“However, a moratorium on risky AI will be impossible to enforce. No international consensus will emerge about how this should work. From a technological perspective it seems impossible to draw a boundary between ‘useful’ and ‘risky’ technology, and experts will disagree on how to balance these. The government, rightly, has chosen to engage with the risks of AI, without being prescriptive about whether certain research is off limits.

“However, the communique is unspecific about the ways in which its goals will be achieved and is not explicit enough about the need for engagement with the public.”

 

Rashik Parmar, CEO of BCS, The Chartered Institute for IT, said:

“The declaration takes a more positive view of the potential of AI to transform our lives than many thought, and that’s also important to build public trust.  

“I’m also pleased to see a focus on AI issues that are a problem today – particularly disinformation, which could result in ‘personalised fake news’ during the next election – we believe this is more pressing than speculation about existential risk. The emphasis on global co-operation is vital, to minimise differences in how countries regulate AI.

“After the summit, we would like to see government and employers insisting that everyone working in a high-stakes AI role is a licensed professional, and that they and their organisations are held to the highest ethical standards. It’s also important that CEOs who make decisions about how AI is used in their organisation are held to account as much as the AI experts; that should mean they are more likely to heed the advice of technologists.

“We also need to see a greater emphasis on the role of AI in education because young people have the right to be taught about its capabilities, risks and potential to ensure they can thrive in life and work.”

 

Professor Robert F. Trager, Director, Oxford Martin AI Governance Initiative at the University of Oxford, said:

“The declaration says “We resolve to work together” to ensure safe AI, but is short on details of how countries will cooperate on these issues. The Summit appears to have achieved a declaration of principles to guide international cooperation without having agreed on a roadmap for international cooperation.

“The declaration says that “actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems.” This suggests that governments are continuing down the road of voluntary regulation, which is very likely to be insufficient. It also places the declaration somewhat behind the recent US executive order, which leverages the Defense Production Act and other legal instruments to create binding requirements. This is confirmed when the declaration “encourage[s]” industry leaders to be transparent.”

 

Prof Nello Cristianini, Professor of Artificial Intelligence, University of Bath, said:

“Along with many opportunities, AI also presents some potential risks to our values, and it is important that many different countries agree that it should be developed in a responsible manner. This is what happened on November 1st with the “Bletchley Statement”, where 29 countries for the first time stated that AI must be developed in a way that protects human rights, privacy, transparency, fairness, and a number of other values. The “Bletchley Statement” also mentions disinformation, cybersecurity and biotechnologies as three areas of concern. Who needs to act, and how, is left rather unspecified, as is what needs to be done: “deepening our understanding of these potential risks and of actions to address them is especially urgent”, identifying AI safety risks of shared concern, and building respective risk-based policies. There will be a new summit next year in France, but it is possible that the most concrete steps will come from the EU AI Act.”

 

Dr Heba Sailem, the Head of Biomedical AI and Data Science at King’s College London, said:

“In light of the growing gap in resources between industry and academia, it is crucial to address the disparity in capabilities for training large-scale AI models. At present, the academic sector lags significantly behind the industrial sector in terms of the scale of models that can be developed.

“It is imperative to invest in computational infrastructure at national and international levels that allows AI models to be regulated effectively. The recent announcement of the AI Safety Institute in the UK marks a significant and positive initial step towards achieving this goal.”

 

Dr Sean O’Heigeartaigh, Director of the AI: Futures and Responsibility Programme (part of the Centre for the Future of Intelligence) at the University of Cambridge, said:

“The Bletchley Declaration represents an important step forward on managing the risks and realising the benefits of frontier AI systems. It shows international consensus that frontier AI systems represent significant risks, with the potential for catastrophic harms from future developments if safety is not made a priority. It correctly highlighted the “particularly strong responsibility” that lies with actors developing frontier AI capabilities – these actors are predominantly well-resourced technology companies, and the governments of countries that house them.

“I was pleased to see a call for transparency and accountability from these actors on their plans to monitor potentially harmful capabilities. The AI safety policies released by six leading companies last week represented a good step in this direction. However, our analysis found that they were still lacking in terms of key detail (http://lcfi.ac.uk/news-and-events/news/2023/oct/31/ai-safety-policies/). It is crucial that the Declaration lead to further focus on developing these policies, with appropriate external oversight by academia, civil society, and government.

“The Declaration highlighted the importance of evaluation of AI, especially for powerful, general-purpose systems. While I agree, there is much work to be done to develop this nascent area of work to the level of robustness needed to properly understand the capabilities of these systems and anticipate harms. For example, it is exceptionally challenging to establish safety guarantees for AI systems capable of taking a broad range of actions in open-ended environments; and key aspects of AI alignment remain under-defined. I call on the relevant governments and companies both to collaborate with, and make resources available to, academic and civil society groups with the necessary expertise to mature these techniques.

“It is important that the Statement highlighted the importance of protection of human rights, explainability, fairness, accountability, bias mitigation, and privacy protection. While these issues will apply to present and future frontier systems, they apply across a huge range of AI systems in use today, many affecting vulnerable communities. A focus on the emerging risks of frontier AI must not detract from the crucial work of addressing these concrete harms.

“Lastly, it is tremendously heartening to see the level of consensus across nations on these priorities, with the signatories including countries ranging from Kenya to the USA, to China. It indicates that there is far more room for agreement than is often perceived on these globally significant issues. However, the test will be seeing cooperation on the concrete governance steps that need to follow. I hope to see the next Summit in South Korea forge a consensus around international standards and monitoring, including establishing a shared understanding of which standards must be international, and which can be left to national discretion.”

 

Stephanie Baxter, Head of Policy at The Institution of Engineering and Technology (IET), said:

“The AI Safety Summit is an important milestone in the UK’s role in achieving safer AI, and it’s great to see the comprehensive communique that has been signed today, which reinforces how AI can be used for good, in helping people, sectors and industries, and spurring innovation and productivity. It is welcome to see that the need for international cooperative policies and appropriate legal and regulatory structures has been addressed. These must be in place to allow AI’s safe development and use.

“But with emerging technologies like AI being fundamental to sector growth, it’s important to recognise that education and training are key to the safe use of AI, and we should look at upskilling and reskilling the current workforce. Employers are telling us that there is a lack of skills in industry to take advantage of AI, so we need to be agile and offer options for rapid training, such as micro-credentials, to adapt and make best use of new technologies. Government plays a key role in supporting initiatives that enable employers to stay competitive and innovative in this space.”

 

Dr Jonathan Aitken, Senior University Teacher in Robotics at the University of Sheffield, said: 

“The Bletchley Declaration sets the scene for the next series of developments in AI. It’s clear that the focus around AI will be on the responsible use of tools, and that the intention is to create a wide-ranging consortium of interested parties, through engagement with 28 countries and a range of different academic and industrial partners. This in itself presents a risk: whilst those inside the declaration agree with the framework, two key questions remain. Does being outside the framework leave something akin to a lawless wild west? And what framework will be in place for regulation, both inside and outside of the parties involved?

“Without doubt developers of these systems have a responsibility, but with the internet an open and opportunistic environment the exact nature of how these elements will be controlled requires some more detail. Given that the tools required are easily available, and the report recognises the potential for misuse of these tools, it would be good to hear more detail of how the risks posed by AI used in this manner could be handled.”

 

Dr Caitlin Bentley, Lecturer in Artificial Intelligence at King’s College London, said:

“The Bletchley Declaration on AI safety represents an important milestone, solidifying the commitment to responsible AI development and acknowledging substantial risks, potential harm, and the imperative for international cooperation.

“However, we must remember that AI inclusivity remains a real concern, and that the needs and aspirations of individuals and communities at risk of marginalisation often become overshadowed by national imperatives. To strike a balance, we must invest in AI education for all, promoting awareness and informed decision-making to ensure AI is not only responsible, but equitable in its effects.”   

 

Professor Elena Simperl, Professor of Computer Science at King’s College London, said:

“International collaboration is needed on these grand challenges, so the fact that the declaration has a global reach, including China, and builds on top of ongoing cross-government work by OECD, GPAI and other fora is encouraging.

“More worrying is the continued emphasis on frontier models rather than the whole range of very powerful AI systems that have been put to use in the last 10 years, which are already doing real harms, documented in the media or in AI incidents databases. This is not just about a small set of technologies released very recently, it is about any form of data-driven technology that is not used responsibly.

“There is limited recognition in the declaration of the role of data in ensuring AI can be used safely and responsibly. This is unfortunate, as many concerns and controversies around the quality, limitations and impact of the latest AI solutions stem clearly from poor and sometimes unethical data practices. This lack of attention to data also limits AI adoption in the public sector and within organisations.”

 

Professor Hamed Haddadi, Professor of Human-Centred Systems, Imperial College London, said:

“Given the new Executive Order from the White House and its particular emphasis on protecting privacy, I hope to see a commitment to developing safe, trusted AI that respects individuals’ privacy and security during both the development and implementation of AI-driven technologies.”

 

Prof Aldo Faisal, Professor of AI & Neuroscience, Imperial College London, said:

“This is a very sensible declaration reflecting the current thinking.  I particularly welcome that the focus has shifted from recent emphasis on science fiction inspired existential risks towards pragmatic and multilateral regulation of AI. The real work starts now on fleshing out how to practically and sensibly structure regulation and its compliance.”

 

* https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023

 

Declared interests

Prof Nello Cristianini: Author of “The Shortcut: Why Intelligent Machines Do Not Think Like Us”, CRC Press, 2023.

Prof Aldo Faisal: None.

Professor Elena Simperl: None.

Professor Hamed Haddadi: No specific competing interests.

Prof Andrew Rogoyski: No conflicts of interest.

Dr Heba Sailem: No competing interest.

Dr Sean O’Heigeartaigh: None.

Dr Caitlin Bentley: No competing interests.

Dr Jonathan Aitken: No competing interest.

Prof Michael Huth: Co-founder of xayn.com.

Prof Tony Cohn: I am seconded to the Alan Turing Institute on a project to evaluate the commonsense reasoning abilities of Foundation Models such as LLMs.

Dr Stuart Armstrong: Co-Founder and Chief Researcher, Aligned AI.

For all other experts, no reply to our request for DOIs was received.
