
expert reaction to PM speech on AI and accompanying GO Science discussion paper on capabilities and risks of Frontier AI

Scientists react to the PM’s AI speech and GO Science’s paper on AI risks and capabilities. 

 

Dr Stuart Armstrong, Co-Founder and Chief Researcher, Aligned AI, said:

“The government report makes the bold claim that frontier AI may pursue its own goals and destroy humanity. According to the report, this could result from unintended goal-directed behaviour. [1] Aligned AI (our frontier alignment startup) has been focusing on this problem since its inception, and has made substantial progress in the capability to shape frontier AI to robustly follow human intent (see e.g. https://fortune.com/2023/09/28/u-k-startup-aligned-ai-claims-coinrun-ai-safety-breakthrough/), even “in situations sufficiently unlike their training data”, by extending goals safely to new situations. [2]”

 

[1] Capabilities and risks from frontier AI, by the Department for Science, Innovation and Technology, p27

[2] ibid, p16

 

Dr Kieran Zucker, Clinical Lecturer in Health Data Science and Machine Learning Engineering in the University of Leeds’ School of Medicine, said:  

“The report focussed on what the government has termed frontier AI. This is a term that has emerged over the last few months as a way of describing a single AI that can do lots of different tasks. It specifically leaves out other AI that is focussed instead on more specific tasks. This more task-specific AI is much more common now and provided the sorts of examples that were used within the speech. Unfortunately, most of the issues around the risks of frontier AI apply equally to these more specific types, which are being sold and deployed already.

“The proposals today appear to focus on the identification of issues over time without any actual mechanism to limit their potential harm. The proposal appears to be advocating wide-scale AI adoption without understanding how well it works in a particular setting, all in the name of efficiency and being pro-innovation. The Prime Minister said that AI firms should not be trusted to mark their own homework, yet seems content to continue to allow this to happen anyway. It is difficult to advocate for focussing on safety whilst at the same time advocating the adoption of tools whose safety is, by your own admission, unknown.

“The summit itself aims to focus on gaining international consensus but does not seem to have much by way of focus on agreed limits, and assumes that bad actors potentially using AI come from the governments being represented, or from governments at all. Whilst the speech implies that LLMs (large language models) are out of the reach of most due to cost, this is simply not the case, with many open-source tools available and being built relatively cheaply.”

 

Dr Harish Tayyar Madabushi, from the Department of Computer Science and the Bath Institute for the Augmented Human, University of Bath, said: 

“The prime minister’s assurance that the UK will not rush to regulate AI is indeed comforting. While it is important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats. This is especially so given that there is significant disagreement among researchers worldwide. Distinguished figures in the field, like Yann LeCun and AI scientists at Meta, maintain the perspective that AI does not present an existential threat. 

“The incredible potential of AI systems is currently overshadowed by the prevailing notion of an existential threat. This narrative not only diverts attention from the genuine issues that require our focus but also prevents the widespread adoption of these technologies. A significant factor contributing to the fear of an existential threat is the idea of “emergent abilities” in Language Models, which is their apparent potential to master new skills without being explicitly programmed to do so.   

“Our research, which was also cited in the Frontier AI capabilities and risks discussion paper, offers a different perspective, addressing these concerns by revealing that the emergent abilities of LLMs [Large Language Models like ChatGPT], other than those which are linguistic abilities, are not inherently uncontrollable or unpredictable, as previously believed. Rather, our novel theory attributes them to the manifestation of LLMs’ ability to complete a task based on a few examples, an ability referred to as “in-context learning” (ICL). We demonstrate that a combination of ICL, memory, and the emergence of linguistic abilities (linguistic proficiency) can account for both the capabilities and limitations exhibited by LLMs, thus showing the absence of emergent reasoning abilities in LLMs. 

“The implications of these findings are as follows: 

  1. By demonstrating the lack of emergent reasoning abilities in LLMs, our work underscores their user controllability. As a result, we show that these models can be deployed without concerns regarding latent hazardous abilities and the prospect of an existential threat. 
  2. Our work provides insights into the fundamental mechanisms that dictate the functioning of LLMs, thus shedding light on both their capabilities and shortcomings. This includes previously unexplained phenomena including their tendency to generate text not in line with reality (hallucinations), and their need for carefully-engineered prompts to exhibit good performance. 
  3. Our work suggests that merely continuing to increase model size is unlikely to lead models to gaining emergent reasoning abilities. 
  4. Our work calls into doubt that further scaling will eliminate existing shortcomings of LLMs, including hallucination and the necessity for prompt engineering.”
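
The sketch below is a minimal, illustrative example of the “in-context learning” behaviour Dr Tayyar Madabushi refers to; it is not code from the cited research, and the model and library choices are assumptions for demonstration only. The model is shown a few worked examples in its prompt and simply continues the pattern, with no retraining involved.

```python
# Illustrative sketch of in-context learning (ICL): no weights are updated;
# the model continues a pattern established by examples in the prompt.
# Model and library choices are illustrative assumptions, not from the report.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # any causal LM demonstrates the mechanism

# Few-shot prompt: three labelled examples followed by an unlabelled query.
prompt = (
    "Review: The film was a joy from start to finish. Sentiment: positive\n"
    "Review: I want those two hours of my life back. Sentiment: negative\n"
    "Review: A masterpiece of quiet storytelling. Sentiment: positive\n"
    "Review: The plot made no sense and the acting was wooden. Sentiment:"
)

# The model is only asked to continue the text; larger models follow the
# pattern far more reliably than small ones, which is the crux of the debate.
result = generator(prompt, max_new_tokens=3, do_sample=False)
print(result[0]["generated_text"][len(prompt):].strip())
```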

 

Prof Anthony G Cohn, FREng, Professor of Automated Reasoning, University of Leeds and Foundational Models Theme lead at the Alan Turing Institute, said:

“While the often-mentioned existential risks from AI are certainly not imminent, it makes sense to start considering a regulatory framework already, one that must be international as well as national. Such a framework will be complex and require difficult-to-achieve agreements at a global scale, so the time to start such negotiations is now, especially when there are immediate risks from AI in the area of deep fakes and misinformation, particularly ahead of important elections next year both nationally and internationally. I therefore welcome the government’s focus on such nearer-term risks, particularly from frontier AI systems; I note that the EU is already taking AI regulation much more seriously. In advance of regulation, guidelines for responsible deployment will also be important, even if not enforceable, as a way of signalling future regulatory intentions. I disagree with the Prime Minister about having to wait until “we fully understand” the technology before considering regulation, though it is certainly important to get regulation right, so all the more reason to start now. I also welcome the announcement of the formation of an AI safety institute to investigate the capabilities of AI systems, although research into methods for the certification of AI is also important. The decision that the institute will publish its evaluations is welcome in the interests of transparency. An important area for the government to consider is how to ensure that the concentration of AI expertise does not rest just with large international AI corporations – this is important not just economically but also to help ensure AI safety.

“Whilst it is important to be concerned about the risks of AI, it is also important to remember that AI is already a daily part of our lives and has huge potential to further improve our lives, not only in the health sector, but also in other areas such as transport, education, energy and decarbonisation, so regulation must still enable, and may even stimulate, innovation and application in such areas. “AI for good” is a mantra that is becoming increasingly widespread among the international AI research community and there are increasingly many examples of such initiatives. AI will undoubtedly affect economies and jobs on a global scale, like many previous technological innovations, and governments and policy leaders must propose mitigations for the negative effects of AI. Finally, I note that it is important that the general public is better educated about the risks and limitations of AI, and the need for validating advice given by AI systems – treating them as assistants rather than authorities, unless certifiably so.”

 

Prof Maria Liakata, Professor in Natural Language Processing, Queen Mary University of London (QMUL), said:

“The reports do a good job of synthesising the literature in a number of different areas related to the latest developments in LLMs, their associated challenges and risks, and future scenarios. Some scenarios, such as super-intelligent systems driving humanity to extinction, are far less likely than job disruption by AI, but it is very important to carefully consider how we make AI models safe to use in different walks of life, while also avoiding over-regulation that could stifle innovation. My concern is around involving a diverse set of experts to address these multi-faceted challenges, especially more academic experts in the field of language technology and academics with experience in social science research. The potential societal implications are too important to leave decision-making to big tech.”

 

Dr Nicole Wheeler, Birmingham Fellow, Institute of Microbiology and Infection, School of Computer Science, at the University of Birmingham, and Turing Fellow, said:

“In his speech today, Sunak pushed a narrative that the UK can have it all – economic growth, attractiveness to industry, safety and responsibility. The reality is there will be trade-offs, and the UK’s ambition to be the most pro-industry regulatory environment in the world will conflict with its ambition to be a world leader in safety.

“One thing that we can offer as an attractive benefit to industry is clarity in the regulations that we provide and alignment with international governance regimes. At the moment, it’s difficult for AI developers to know whether their products fit safety requirements because the requirements are vague.

“Sunak raised the concern that the only people in the world testing AI for safety are AI developers, which creates a conflict of interest. However, this isn’t really true. AI companies hire external consultants to test the safety of their systems because they recognise that AI developers are poorly placed to anticipate specific risks that could arise from general-purpose AI models. We need experts across different disciplines to weigh in on the risks they anticipate, and this requires access to the models. An external evaluation process required as part of an approvals process for releasing AI models would be better than the current industry approach to self-regulation.

“Sunak states that only governments can properly assess risks to national security, but governments also face a conflict of interest, which is evident in today’s speech emphasising that the UK will remain attractive to industry and create many more jobs in AI.

“The speech did surface an important question – how can we regulate something we don’t fully understand? This is a core challenge underpinning efforts to develop safeguards for broadly capable, continually evolving technologies. We will need mechanisms in place that move faster than government policy-making or a way to update what compliance with AI safety policy looks like in practice as technology evolves.

“I don’t think it’s appropriate to say the UK is doing far more than any other country to keep people safe and base this purely on spending. The EU are being much more forward-leaning in their approach to AI safety, and their regulations may well become the de facto standard that AI developers need to meet if they want to reach a global market. The UK, in contrast, are applying a very light-touch approach to AI governance.

“There is much speculation about whether the AI Safety Summit will spark progress in AI safety, or whether it will have little impact. The goals of producing an international statement on the nature of frontier AI risks and establishing a global expert panel are both promising, but neither represents a concrete move in terms of regulation.

“Sunak emphasises the need to promote AI for social good, but his choice to highlight tackling benefits fraud as a key example seems inappropriate. Previous attempts to automate benefit fraud detection had disastrous impacts on people legitimately seeking benefits. An algorithm deployed in Michigan to flag benefit fraud from 2013-2015 falsely accused more than 30,000 people over two years, and one used in Australia from 2015-2019 accused around 400,000. A decision made by a black box or proprietary algorithm is difficult to contest, and people with limited financial resources aren’t well placed to challenge these systems. There are plenty of ways AI can be used for social good, but targeting punitive measures at people on low incomes is not the right place to start.

“The risks posed by AI highlighted in the report released today are not over-hyped. We must have safeguards in place well before we approach certainty that superintelligence is around the corner, and many of the other risks have already been demonstrated today. The challenge in tackling emerging risks posed by AI is that we can’t afford to wait until we’ve seen evidence of harm. This is especially true for systems that will be released open source because we have no way of placing guardrails on them or tracking their use.

“The risk landscape is going to change substantially in the coming years, and this is the right time to begin international coordination on tackling extreme risks. We are behind the curve in tackling concrete harms caused by AI today, and we can’t allow the same thing to happen with existential risks.

“Our report on risks of biological attacks enabled by AI was featured in the report, and I was pleased by the UK Government’s framing of the issues. In my opinion, frontier AI models currently pose less of a risk than models built for specific tasks in biology, but this is likely to change as models begin to integrate different types of data, such as literature, images, and biological data, in a single framework. It’s also important to remember that these systems don’t exist in isolation, and are increasingly being combined with other capabilities like web search, software, and robotics.”

 

Dr Mike Cook, Senior Lecturer in Computer Science at King’s College London, said:

“It’s clear to see the influence of private companies on the government’s messaging, which is mixed at best. Sunak’s speech claims that the only people testing AI are the companies developing it; however, this isn’t true. Many public organisations, public interest groups and research labs around the world have dedicated themselves to testing AI systems for safety and scrutinising the claims made by their developers.

“The phrase “AI Safety” keeps reappearing in these reports and speeches, but unless a common consensus can be reached on what it means, all claims about founding AI taskforces and creating AI Safety Leaders lack meaning. Whilst the Prime Minister’s speech talks about risks from terrorism, chemical weapons and even the risk of human extinction, these emotive claims only serve to distract us from the AI risks that are happening right now.

“Avoiding regulation to try and attract big companies to the UK is short-term thinking that will harm the public and result in us losing to bigger economies like the US. If the UK wants to stand out, it has to stand up for the general public, who are relying on it for guidance and leadership, and stop private corporations from dictating the rules on AI.”

 

Dr Sanjay Modgil, Reader in Artificial Intelligence at King’s College London, said:

“The prime minister is right to draw attention to AI risks, while also acknowledging the benefits that AI will bring. The measures and ambitions he outlines in his speech are commendable. However, the transformative impact of AI on society will dwarf that of the industrial revolution. Every aspect of how we experience and interact with the world around us will be changed, and in ways that require an interdisciplinary analysis that transcends the science-humanities boundary. It remains to be seen, then, whether the current emphasis on recruiting technologists to advise on AI safety is broadened to a more holistic approach that includes, at its very heart, experts from other disciplines including philosophers, social and political scientists, and the like.”

 

Dr Kerry McInerney, a researcher in AI ethics at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence, said:

“While it is encouraging to see the building of greater awareness around the risks of AI, it is crucial that bias and misinformation are not pushed to the side as mere ‘social harms’. Any conversation about AI safety must grapple meaningfully with how AI could exacerbate forms of political and economic inequality.”

 

Professor Gina Neff, Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, said:

“The UK Government have dealt all of us in the responsible AI space a summit that does not further the conversations most people worry about. The concerns that most people care about are not on the table, from building digital skills to how we work with powerful AI tools. This brings its own risks for people, communities, and the planet.”

 

Dr Jonathan Aitken, an expert in AI at the University of Sheffield, said: 

“One thing that really stands out from the speech and the report is the lack of focus on education. Education is key to us understanding the potential advantages and dangers of AI, but also to how we successfully implement it into society and harness the opportunities it presents.

“We need to ensure that our education system gives people the skills they need to understand AI and also utilise the technology, so the UK develops a strong workforce who are at the cutting edge of the industry. 

“The report also seems to have missed a key issue around people working in the AI industry. The report does give examples that consider bias and fairness, and this is true: we need to ensure that inequalities of any type are not baked into AI. But in doing this we need to look at the inequalities that already exist in the AI industry. These areas are still dominated by white men in the UK. If we are to truly seize the opportunities available from AI then we need to broaden the industry across gender, race and society, otherwise there is a risk that the old status quo will remain.”

 

 Will Cavendish, Global Digital Leader, Arup, said:

“While the Prime Minister is right to carefully consider the real risks AI poses, it’s heartening to hear him acknowledge that it can help solve some of the greatest social challenges of our time. But I would go even further – we simply can’t solve our biggest challenges, like climate change and the biodiversity crisis, without embracing safe and effective AI. We must remember to consider an ‘ethic of use’ – where we have an obligation to use technology that will help humanity – rather than only an ethic of ‘do no harm’.”

 

Michael Osborne, Professor of Machine Learning in the Department of Engineering Science, University of Oxford, said:

“I welcome the governance of AI—both its rewards and its risks—by democracies, as the current state of play is that this transformational technology is really governed only by a small number of opaque tech firms.”

 

Prof Angeliki Kerasidou, Associate Professor in Bioethics in the Oxford Department of Population Health, University of Oxford, said:

“AI holds a lot of potential for good, but this potential will not be realised unless the risks are mitigated, and the harms minimized and properly addressed. What we need is an open and honest global discussion about what kind of world we want to live in, what values we want to promote, and how AI can be harnessed to get us there. I hope that this AI Summit is the beginning of that discussion.”

 

Professor Brent Mittelstadt, Director of Research, Oxford Internet Institute, University of Oxford, said:

“In his speech Rishi Sunak suggested that the UK will not “rush to regulate” AI because it is impossible to write laws that make sense for a technology we do not yet understand. The idea that we do not understand AI and its impacts is overly bleak and ignores an incredible range of research undertaken in recent years to understand and explain how AI works and to map and mitigate its greatest social and ethical risks. This reluctance to regulate before the effects of AI are clearly understood means AI and the private sector are effectively the tail wagging the dog—rather than government proactively saying how these systems must be designed, used, and governed to align with societal values and rights, they will instead only regulate reactively and try to mitigate its harms without challenging the ethos of AI and the business models of AI systems. The business models behind frontier AI systems should not be given a free pass; they may be built on theft of intellectual property and violations of copyright, privacy, and data protection law at an unprecedented scale.

“There are many examples of value-driven regulation existing well before the emergence of fundamentally transformative technologies like AI—look at the 1995 Data Protection Directive and its rules for automated data processing, or non-discrimination laws which clearly set out societal values and expectations to be adhered to with frontier technologies regardless of the technological capabilities and underlying business models. My worry is that with frontier AI we are effectively letting the private sector and technology development determine what is possible and appropriate to regulate, whereas effective regulation starts from the other way around. 

“With that said, I am relieved to see in the reports released by the government today a greater focus on the known, near-term societal risks of frontier AI systems. Initial indications suggested the AI Safety Summit would focus predominantly on far-fetched long-term risks rather than the real near-term risks of these systems that are fundamentally reshaping certain industries and types of work. However, the lack of attention given in the reports to the environmental impact of frontier AI is a huge oversight and one that should be quickly remedied.”

 

Prof Gopal Ramchurn, Professor of Artificial Intelligence, University of Southampton, said:

“The PM’s speech and the report demonstrate that the thinking is shifting from the existential risks of AI to more real risks such as the misuse and abuse of AI technologies. With elections in the UK and the US on the horizon, as well as the instabilities in the Middle East and Eastern Europe, we should all be concerned about the risks of AI-powered misinformation that could completely disrupt democratic regimes. The report also points to real worries around unemployment but focuses, in my view, too much on LLMs. There are other unintended consequences and abuses of AI that need to be addressed, as exemplified by the lawsuits against Facebook for its impact on young adults’ mental health. This also includes the growing energy demands from specific AI technologies that add little value to the poorest in society.”

 

Dr Natasha McCarthy, Associate Director, Policy, at the Royal Academy of Engineering, said:

“It is imperative that we foster a safe and secure AI ecosystem in order to realise the enormous potential benefits of these transformative new technologies, and we welcome the Prime Minister’s initiative in convening next week’s summit and establishing a new AI safety institute. We must manage and prevent the real short-term risks that AI already poses, as well as addressing longer-term, systemic risks and preventing misuse wherever possible. Ensuring AI safety means creating the markers of trust that can enable safe and beneficial adoption, providing assurance to developers and users. 

“There is a lot to learn from the wider engineering sector in creating ever-safer technologies and infrastructure, from developing tools and technologies to design for safety, to creating the institutions that accredit and certify people, skills and education to build professional practice. We must also learn from previous mistakes and accidents, particularly in safety-critical systems, to avoid societal harms as we shape the future of this fast-evolving set of technologies. 

“We welcome the call for international dialogue on AI and look forward to seeing more detail about the Global Expert Panel. We believe engineering has a major role in mitigating risks while delivering real world benefits and would urge that engineering voices play a role in this initiative. We will be exploring these themes further at our event, ‘Building an AI Safety Culture’, at the AI Fringe on 2 November.”

 

Prof Carissa Veliz, Associate Professor in Philosophy, Institute for Ethics in AI, University of Oxford, said:

“The UK, unlike Europe, has been thus far notoriously averse to regulating AI, so it is interesting for Sunak to say that the UK is particularly well suited to lead the efforts of ensuring the safety of AI. In his speech, Sunak places regulation and innovation in opposition. But oftentimes it is precisely regulation (e.g., of safety in cars) that leads to the most impressive and important innovations.”

 

Prof Michael Rovatsos, Professor of Artificial Intelligence, The University of Edinburgh, said:

“The Prime Minister’s speech rightly highlights the important risks of ‘bad actors’ using AI – but it failed to say anything about the perhaps less obvious, but enormous, impact AI systems already in use are having on society. Establishing an AI Safety Institute is a sensible thing to do, but at a time when most of the world’s AI is controlled by a handful of billionaires, I do not expect it will address the fundamental problem of how unprepared governments and citizens are in terms of exercising control over the rather reckless behaviours we have observed for decades from big tech companies. I anticipate that this institute will focus quite narrowly – and likely in close collaboration with dominant corporate players – on very specific national security threats rather than serving the broader interests of our citizens and businesses.”

 

Dr Andrew Rogoyski, Director of Innovation and Partnerships at the Surrey Institute for People-Centred AI, University of Surrey, said:

“The new report is a good summary of the state of AI but we need to be careful about mixing messages about existential threat with economic opportunity – we need to be clear on both.

“AI, as a technology, has been with us for years and serves us daily, from search engines and social media to drug discovery and combating climate change. The latest large language models, like ChatGPT, Claude, Llama and others, have opened up new possibilities which are very exciting. However, these developments have brought us a step closer to creating truly intelligent machines, and that’s something where caution is needed. We need to ensure that powerful AIs are aligned to human interests – nobody knows how to do that yet.

“The formation of the AI Safety Institute is a welcome development, especially as it will publish its evaluations publicly. A clear test of its viability will be whether it gets access to the AIs being developed in the US, China, and other countries, whether it can evaluate AIs quickly, and whether it can actually influence global deployments.

“The importance of education was stressed by the Prime Minister. An understanding of the tremendous potential and pitfalls associated with this powerful technology is fast becoming a life skill in an age of life-long learning, where careers last years rather than decades. It’s through education that we can get beyond the age-old meme of AI being about killer robots and job losses.

“As with all powerful technologies, criminals, terrorists and hostile nation states will find ways to misuse it. The problem isn’t AI, it’s bad people.

“The idea that AI might give us too much information, and thereby allow terrorists and other bad actors to create new weapons, is a legitimate concern. However, there are technical challenges in creating AIs that can filter sensitive information, and ethical problems of who gets to decide what information can and can’t be shared.

“There are literally hundreds of organisations around the world that have published AI principles describing how AI should be developed and deployed; however, there has been very little progress in operationalising these principles, i.e. making them stick. I would hope that the UK AI Safety Institute looks at practical ways it can bring a principled approach to AI development.

“As pointed out in the report, AI technologies will disrupt jobs, careers and industries, so we need to see some responsibility being taken by companies displacing human workers, providing support for retraining and upskilling so that people can still do meaningful and enjoyable work in the future.

“There are legitimate concerns that AI technology can be used to create fakes, impacting social norms, democratic processes and other parts of our lives. This is an escalation of what has been happening for many years through technologies like social media. I’m afraid that it’s becoming more important than ever to make sure we get our information from reputable sources and that we all get into the habit of checking and challenging all sources, learning to tell fact from fiction.

“There are companies investing billions in AI, primarily in the US and China, outpacing and outscaling anything individual governments can do. We need to recognise that the UK has very limited sovereign control over the development and deployment of Frontier AI, so international collaboration is paramount.”

 

Rashik Parmar MBE, CEO of BCS, The Chartered Institute for IT, said:

“Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity.

“AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.

“The Prime Minister is correct, the public needs confidence in the people creating and making political decisions about AI.

“One way to build that trust is for AI to be developed and managed by licensed professionals who meet international ethical standards. This is what we’d like to see agreed after the AI Safety Summit.”

 

In response to the PM’s AI speech, Professor Peter Bannister, biomedical engineer and chair of the healthcare panel at the IET, said:

“AI has had a longstanding presence across sectors such as manufacturing and construction, providing valuable data and certainty on specific processes and outcomes. But given its recent proliferation in technical discussions, media coverage and mass public use, such as ChatGPT, it’s understandable that there are concerns about its role in the future and its impact.

“We need to harness the current interest in AI, and provide more information, training and firm rules on its use based on professional best practice, transparency, robustness, avoidance of unwanted bias (fairness), privacy and security. Mitigation is also key – so we need engineers and policymakers working together to better understand AI, and its full potential. This is necessary to ensure AI is used safely and to help prevent incidents from occurring – and it is fundamental to maintaining public trust, which underpins the economic and social benefits AI can bring.”

 

* https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper

 

Declared interests

Prof. Peter Bannister: Managing Director of Romilly Life Sciences Ltd and Honorary Chair at the University of Birmingham Centre for Regulatory Science and Innovation

Prof Carissa Veliz: No conflicts to declare.

Prof Michael Rovatsos: No conflicts.

Prof Gopal Ramchurn: I am the CEO of Responsible AI UK (rai.ac.uk) and Co-CEO of Empati Ltd (empati.ai). RAI UK is a UKRI-funded research programme. Empati focuses on decarbonisation and AI.

Prof Tony Cohn: No CoI

Prof Maria Liakata: consulted for the GO Science report on Frontier AI.

Dr Stuart Armstrong: Co-Founder and Chief Researcher, Aligned AI

For all other experts, no reply to our request for DOIs was received.
