
expert reaction to the UK AI Safety Summit

Scientists react to the UK AI Safety Summit. 

 

Professor Elena Simperl, a Professor of Computer Science at King’s College London, said:

“While there are many positive developments in the latest policy announcements, principles around testing, monitoring, and reporting of AI risks, vulnerabilities, and limitations that are aligned across jurisdictions and technology stacks are particularly welcome. There are at least three critical next steps for these principles to make a difference over the coming years: attention to data, the theory-practice gap, and AI literacy.

“Without data, there is no AI, whether that’s machine learning, reasoning, planning, knowledge graphs, or any other AI technique. Common AI principles around identifying, evaluating, auditing, and mitigating risks apply to datasets as well, and addressing them requires approaches, tools, standards, and best practices to ensure data is accurate, representative, and free of bias. Equally, the people and organisations involved in creating or enriching that data, including the gig workers who label datasets, need to be part of the data value chain in AI.

“We have seen many frameworks and approaches for auditing AI systems at design, development, and deployment time. Yet studies with AI engineers, compliance teams, and decision makers using AI show that there is a lot of ambiguity in how to operationalise them, and limited support in the development environments, tools, and processes that these professionals use.

“AI systems make mistakes in different ways than people do. We need to get used to a world where prompting, AI copilots, and people co-auditing AI outputs will become the norm. As AI models and technology providers improve their safety practices, and as more and more benchmark datasets become available, it’s critical to teach everyone how to use AI with confidence and to rely on AI only when appropriate. This is the only way to ensure that the benefits of AI truly reach everyone.”

 

Michael Birtwistle, Associate Director at the Ada Lovelace Institute, said:

“The AI Safety Summit is predicated on the assumption that the impact of AI on our society and economy will be transformational. In some respects, it already is:  AI is being deployed in important areas of scientific discovery such as genomics, and across important societal challenges such as climate change adaptation and mitigation.

“But it will be important to engage critically with claims from industry and governments about how central AI will be to all our lives, and to assess those claims against the best available evidence – not just to accept the hype, but to reflect carefully on what we can actually expect here, and on what costs are acceptable in light of a balanced view of benefits.

“If we are going to take the Summit’s premise of AI’s transformational potential at face value, we should expect its governance to reflect that of similarly consequential systems.

“So as the government repeats this week that it won’t ‘rush to regulate’ AI, we should remember what’s at stake. Because if AI is indeed going to have yet more impact on all our lives, and if regulation will put us on the path to a more just, equitable society, then surely it makes sense to get started now.

“We do not need better evidence to justify the regulation of AI. While the exact capabilities of some newer models may not yet be fully understood, there is over a decade of global research documenting AI impacts and how to manage them.

“As international representatives gather at Bletchley, it will be critical for governments to secure meaningful, detailed commitments from industry on how they will carry the responsibility of mitigating all AI harms – not just those at the frontier – in the interim, and hold industry publicly to account for meeting those expectations ahead of fuller regulation. 

“Governments should set out roadmaps to regulation, clearly signalling that the sticking plasters we implement in the short term will be bolstered as soon as possible. Regulation doesn’t need to come at the expense of the benefits of the technology; quite the contrary.”

 

Dr Alina Patelli, Senior Lecturer in Computer Science, Aston University, said:

What are your thoughts on the summit/scope of the summit?

“A summit on AI safety is long overdue. As is the case with all groundbreaking technologies, AI’s transformative potential for public good is matched only by its risks, which are unlikely to be successfully avoided if AI design and deployment are left unregulated and therefore open to misuse, whether intentional or accidental. The scope of the summit is appropriate and reflective of the Government’s cautious approach to managing interactions with AI safety experts from multiple nations and disciplines: the focus is kept narrow, to five objectives only, and the number of participants is wisely limited to 100 to keep the conversation productive.”

 

What is likely to come out of this summit?

“The summit’s main output will most likely be a bare-bones regulatory document comprising (1) a shared understanding of AI (i.e., a generally accepted definition of the term, reflective of all summit participants’ views, not just those of tech experts), (2) a list of major risks associated with AI misuse, both in terms of potential damage and likelihood of becoming a reality, and (3) a policy draft outlining the core elements that a yet-to-be-developed governance framework should include.”

 

What AI safety could/should look like?  

“Although it would be premature to venture a definition of AI safety ahead of the summit, one thing that is certain is that a comprehensive, and therefore effective, AI regulatory framework would encompass more than just laws. Non-legally binding codes of conduct; design and development processes bound by moral and ethical values, both across the commercial ecosystem and for individual entrepreneurs; and revised open-access licences under which AI should be used in the public domain are equally important pieces. The best way to integrate all of these into a cohesive, overarching governance plan is perhaps a topic to explore in one of the post-summit events.”

 

What are the potential practicalities for a route forward towards safe AI?

“The practical way to systematically regulate AI is incremental. Initially, the development and application of those AI tools deemed high-risk will most likely be restricted to controlled environments, where the potential benefits justify the risks and where sound procedures can be quickly and effectively enforced to mitigate those risks. As regulations become better prescribed, AI’s (safe and legal) application space will gradually expand, making its benefits available to larger groups of people without any of the downsides.”

 

Dr Natasha McCarthy, Associate Director, Policy, at the Royal Academy of Engineering, said:

“The rapid development of AI is one of the world’s greatest opportunities for transformative new technologies, but it also brings very real risks, including bias, lack of inclusive design, potential misuse and beyond. As well as addressing the systemic risk that AI may bring in the long-term, we hope that the AI Safety Summit will result in a clear plan of action for managing and preventing the near-term risks that AI already poses.

“AI has no borders and developing frameworks to enable AI to be used safely and securely is a global imperative. We welcome plans for an international network to build cooperation between experts among the UK’s international partners and hope that the lessons learned by the engineering community through creating safety-critical systems for previous emerging technologies will be shared and heeded.

“Engineers throughout history have been vital for embedding appropriate safeguards in technologies that we use every day in areas as broad as transport, medicine and banking. We know that the engineering profession’s expertise in creating ever-safer technologies and infrastructure, including tools and techniques designed for safety, as well as the institutions that accredit and certify people, skills and education to build responsible practice, will be incredibly valuable as part of a cross-sector and cross-discipline approach to mitigating the risks associated with AI today. We hope that engineering will be well represented in the forums and advisory bodies that take the discussions from the Summit forwards.”

 

Professor Mary Ryan, Vice-Provost (Research and Enterprise), Imperial College London, said:

“We need to start thinking about regulation as ‘enabling’ rather than ‘policing’ AI. The UK can draw on the deep scientific and technical expertise found in universities like Imperial to bridge between cutting-edge AI research, industry adoption and pragmatic, adaptable policy solutions. Only then can we combine safety, trust and security in a framework that adds to and enables the UK’s AI-driven innovation ecosystem.”

 

Professor Aldo Faisal, UKRI Turing AI Fellow and Director of the UKRI Centre in AI for Healthcare, said:

“Development towards AGI has rapidly accelerated, and we are on the verge of huge disruption. It is critical that we respond proactively and choose sensible rules for AI. Regulation must support the whole ecosystem and not leave innovation to the few biggest players. Learning from our pragmatic approach to medical regulation, where the UK is a world leader, is more sensible than adopting more restrictive regulatory frameworks such as those in the nuclear industry.”

 

Dr Yves-Alexandre de Montjoye, Associate Professor of Applied Mathematics and Computer Science, Imperial College London, said:

“A key aspect of enabling responsible AI innovation will be access to datasets, in particular medical datasets. It is crucial that these datasets are used anonymously when training AI systems. This has so far been an impossible task but, recently, AI tools have been shown to be capable of detecting when data is being misused and individuals re-identified.

“On the other side of this, transparency of the data used to train frontier AI models such as Large Language Models is crucial. Here too, AI tools can help unpack what data was used to train these large models, helping us understand what they learn and where they learn from.”

 

Dr Nicole Wheeler, Birmingham Fellow, Institute of Microbiology and Infection, School of Computer Science, at the University of Birmingham, and Turing Fellow, said:

“The UK AI Safety Summit represents a key opportunity for the UK to step up in the leadership of policy, norms and technical advances for AI safety. It will bring together international governments, civil society groups and research experts to explore the risks posed by AI and how they can be mitigated.

“AI has enormous potential to improve nations’ economies, education, and welfare. However, the increasing capabilities and generality of these models create significant risks, many of which are hard to anticipate. There is an urgent need to bring experts from different nations and disciplines together to map and evaluate these risks, identify appropriate guardrails, and begin immediate work to develop or implement them.

“So far, the UK has taken a light-touch approach to AI regulation, stating in its AI Strategy a desire to build ‘the most pro-innovation regulatory environment in the world’ while acknowledging the need to ‘safely advance AI and mitigate catastrophic risks’. Developments in AI capabilities in recent years have taken policymakers and technologists by surprise, and the global conversation has shifted swiftly to growing concerns that changes in the risk landscape are outpacing our capacity to govern technology.”

 

Your thoughts on the summit/scope of the summit?

“The Summit will focus on two types of AI: ‘frontier’ AI (general-purpose AI at the leading edge of current capabilities) and specific ‘narrow’ AI models (AI models with capabilities limited to a specific domain) that are considered to have dangerous capabilities.

“There has been criticism that the scope of the Summit is too narrow, and that it bows to industry interests by focusing regulation on frontier models and away from current commercial products. The Summit can’t cover all possible safety issues in just two days, but I felt the focus and direction of the programme was appropriate, directing attention to regulatory issues that have emerged suddenly after being relatively neglected and that need coordinated international action. My only misgiving was the shift in focus from ‘foundation’ models (AI models trained on vast amounts of data and adaptable for a wide range of tasks, such as answering questions and summarising text) to frontier models, which does indeed limit our ability to meaningfully regulate existing deployed AI models with the potential to cause harm. The distinction between frontier and foundation models is fuzzy, and leaves room for uncertainty about which capabilities any new regulations would apply to. The frontier of AI is also a moving target, potentially leading to the relaxation of restrictions on capable AI models as soon as an appreciably more advanced one is developed.

“While the Summit aims to examine a broad range of AI risks, I was thrilled to see that biosecurity risks are a priority concern. This is a topic I have been working on intensively with biosecurity experts at the Nuclear Threat Initiative, a nonprofit global security organisation focused on reducing nuclear and biological threats imperilling humanity. We have consulted an extensive set of experts on the emerging risks at the intersection of AI and the life sciences, and we have prepared a report that makes recommendations for governance options. This report will be launched on the margins of the Summit on October 30, 2023. I sincerely agree that the limited understanding of risk associated with AI applied to biology, and the lack of regulation, are pressing and serious risks that need discussion and concrete action, and I look forward to hearing the next steps agreed upon at the Summit.”

 

What AI safety could/should look like?

“AI safety can’t be a reactive process. We have real examples of widespread harm caused by AI today, and measures to redress this harm have fallen short. Harms on the scale being considered at the Summit can’t be addressed after they have occurred.

“AI governance needs to move faster than efforts to put guardrails around previous enabling technologies. The political will to make this happen is evident in the rapid spin-up of policies, strategies and task forces around the world. The right governance strategies to balance innovation and risk are not obvious, trade-offs are inevitable, and mistakes will be made. These are unprecedented times for technology governance, and we need to approach the problem in an agile, curious, and cross-disciplinary way.

“Much of the discussion is likely to centre on safety measures for frontier AI models, but it will also be important to establish which narrow AI applications pose serious risks and determine whether the guardrails being discussed can actually be applied to these tools or not. Narrow AI models may need special consideration because they often require less computing power to train and run, they may process a particular type of data, and they may be substantially better at solving niche problems. An example is in the biological design tool space, where a model could be built on a personal computer and speak the ‘language’ of DNA or proteins, rather than the kinds of data typically processed by frontier models, such as text, images or sound, which we find easier to interpret. An expert can look at the outputs of ‘biodesign’ tools and not know whether a design could be used to cause harm or not. Mapping out the types of designs we wouldn’t want these tools creating may in itself produce dangerous information. If researchers around the world can make these models, how do we tell them to make sure their models avoid these outputs?

“AI safety is an ongoing and evolving field that requires collaboration among AI researchers, policymakers, industry leaders, and the wider public to ensure that AI technologies benefit society while minimising potential harms. This is easy to say but may require resources and infrastructure to achieve in practice.

“We need a means of facilitating this collaboration between different stakeholders while acknowledging that competition between companies and countries will endure. This may require international treaties and agreements to limit the development of AI that we don’t have the ability to adequately assess or control. Verifying compliance with such an agreement will be important and challenging.

“Broad AI literacy will be important in helping experts from different disciplines that interact with AI identify risks that may not be obvious to AI developers, in helping policymakers develop clear and practical regulations, and in helping the public engage meaningfully in the conversation.

“A structured approach to uncovering risks arising from different models and sharing the information responsibly should be a priority.”

 

What are the potential practicalities for a route forward towards safe AI?

“It is unrealistic to expect AI developers to anticipate all possible risks created by their products. We need a mechanism for scoping risks, evaluating models, and reporting harms that is comprehensive but does not share information too broadly when there are no viable safeguards in place to mitigate the threat.

“An international agenda of research efforts to develop robust safeguards for frontier and foundation models should be developed. Being at the frontier of safe, responsible AI should be seen as a more appealing target than being at the frontier of AI capabilities.

“Frameworks for evaluating the safety of models should be formalised, but remain flexible to updates as we learn more about the risks and vulnerabilities associated with AI capabilities. Third parties should conduct these evaluations. AI developers should have access to resources that help them align their models’ behaviour with an evolving idea of what safety looks like in practice.”

 

 

Declared interests

Michael Birtwistle: No declarations of interest

Professor Elena Simperl: None.

For all other experts, no reply to our request for DOIs was received.
