In a statement published on the Centre for AI Safety website, AI experts and public figures express their concern about AI risk.
Prof Noel Sharkey, Emeritus Professor of Artificial Intelligence and Robotics, University of Sheffield, said:
“AI poses many dangers to humanity but there is no existential threat or any evidence for one. The risks are mostly caused by the natural stupidity of believing the hype. Many AI systems employed in policing, justice, job interviews, airport surveillance and even automatic passport gates have been shown to be both inaccurate and prejudiced against people of colour, women and people with disabilities. There is a lot of research on this and we desperately need to regulate AI. Looking for risks that don’t yet exist or might never exist distracts from the fundamental problems.”
Prof Martyn Thomas FREng, Emeritus Professor of IT, Gresham College, said:
What is the existential risk posed by AI?
“We do not know of any compelling reason why a computer system could not do anything that the human brain can or could do. We could therefore rephrase the question as: ‘If a large number of superintelligent aliens had access to the Internet, might they be able to devise a way to eliminate humans?’ (For example, by designing a succession of deadly viruses and finding a way to have them synthesised and released into the environment.) If we cannot prove that this is impossible, then it seems prudent to regard it as a threat to our existence.
“If some leader of an end-of-the-world cult were able to control and assist such superintelligent aliens, would that increase the threat?”
Is there an existential risk posed by current technology?
“It is possible that current AI systems could create powerful enough disinformation and fake evidence to tip the balance of a conflict into a nuclear war, or to persuade enough people to ignore the threat from climate heating so that global catastrophe becomes unstoppable, or to create such fear of modern medicine and faith in alternative remedies that world healthcare systems are overwhelmed by infections. We have seen how conspiracy theories and panics can spread based on a few rumours and false ‘facts’ (for example, the claim that MMR vaccination causes autism). Current AI already has the power to generate huge volumes of fake supporting evidence if people choose to use it that way. These risks are real and, in the example of climate heating, undoubtedly existential, but they can be countered and possibly controlled as they develop. Threats from future AI technology might materialise much more suddenly.”
What could we do to prevent the existential risk posed by AI tech now/in the future?
“Nothing. The possibility has existed and been recognised since at least Alan Turing’s paper in 1950. It seems inevitable that AI capabilities will be developed further and that AI systems will be used to invent ways to develop AI capabilities further. We do not know that there is a limit.
“Nor could we tell that some critical point had been passed. No one has fully solved the philosophical “hard problem” of consciousness, so there is no definitive way to detect whether an AI system is ‘conscious’ or ‘sentient’ and no universal agreement on what these terms even mean.”
Should we be worried about the existential risk or more imminent risks posed by current AI tech?
“If humanity’s life is to be unexpectedly ended, we might consider helping each other to enjoy what is left.
“It would be good to minimise shorter-term risks for as long as we can, though I don’t know how to do that because countries do not trust each other with extreme power, countries will compete for the economic and political benefits they expect from AI and it’s hard to tell where or whether new AI technology is being created in secret.”
Any other general comments on the situation?
“It is unusual to see technology leaders in industry calling for greater government regulation of their business activities. We should be alert for some anti-competitive special pleading.”
Prof Nello Cristianini, Professor of Artificial Intelligence, University of Bath, said:
“There are indeed social risks in the deployment of AI, and I have been working on them for over ten years. However, this is a 22-word statement, and the part about AI is just this: “Mitigating the risk of extinction from AI should be a global priority”. While certainly well-intentioned, a vague statement like this is not ideal: not a single indication is given of which specific scenario could lead to the extinction of 8 billion individuals. Certainly not unemployment, nor misinformation, nor digital addiction, nor discrimination. If the authors of the statement see a specific path to such an event, they should really explain it. The previous petition of March 22nd listed various forms of discrimination and injustice, none of which would lead to extinction, and at the US Senate hearing Sam Altman was asked twice to be specific about his worst-case scenario, and he answered: “My worst fears are that we, the field, the technology, the industry, cause significant harm to the world”. While I personally could speculate and come up with some abstract scenarios, I really think that the burden should be on those who have written this statement to be a bit more specific.”
Prof Maria Liakata, Professor in Natural Language Processing, Queen Mary University of London (QMUL), said:
“Existential risk of extinction from AI should indeed be discussed and mitigated. However, the most severe and immediate risks to humanity from AI are not posed by the possibility that AI might one day autonomously turn against us in the way HAL 9000 did in the film 2001: A Space Odyssey. The most severe risks come from human shortsightedness in terms of: using AI to easily generate and spread highly plausible fake content, undermining the concept of truth and making it hard to know who and what to trust; over-reliance on AI technology resulting in job losses without careful plans for retraining, with an irrevocable increase in social inequalities and the concentration of knowledge and power in the hands of a small number of big tech companies; and the loss of critical thinking skills as we increasingly rely on AI for reasoning tasks, leading to the stupefaction of humans as in the 2008 animation WALL-E. I think these risks, posed by the misuse of AI technology by humans, should be given the highest priority so that we can reap the benefits of AI, such as new scientific discoveries and breakthroughs in health and education, while mitigating serious harms.”
Dr Mhairi Aitken, Ethics Research Fellow, Alan Turing Institute, said:
“The narrative that AI might one day develop its own form of intelligence that could surpass that of humans and ultimately pose a threat to the future of humanity is very familiar and comes around all too often – but it is unrealistic and ultimately a distraction from the real risks posed by AI.
“Recently, these claims have been coming increasingly from big tech players, predominantly in Silicon Valley. While some suggest that is because of their awareness of advances in the technology, I think it’s actually serving as a distraction technique. It’s diverting attention away from the decisions of big tech (people and organisations) who are developing AI and driving innovation in this field, and instead focusing attention on hypothetical future scenarios and imagined future capacities of AI. In suggesting that AI itself – rather than the people and organisations developing AI – presents a risk, the focus is on holding AI rather than people accountable. I think this is a very dangerous distraction, especially at a time when emerging regulatory frameworks around AI are being developed. It is vital that regulation focuses on the real and present risks presented by AI today, rather than on speculative, far-fetched hypothetical futures.
“The narrative of superintelligent AI is a familiar plotline from countless Hollywood blockbuster movies, and that familiarity makes it compelling, but it is nonetheless false. AI is all around us today, but these are computer programmes that do what they are programmed to do. Therefore, when we speak about risks from AI the focus must always be on the accountability of the people and organisations designing, developing and deploying those systems.”
Prof Tony Cohn FREng, Professor of Automated Reasoning, University of Leeds, said:
“Whilst it is true that AI (in particular, systems based on Large Language Models (LLMs) such as (Chat)GPT and Bard) has indeed taken what seem to be remarkable strides forward recently in its ability to produce fluent text, and can frequently respond in an apparently intelligent and reasonable way to a huge variety of requests, AI is still a long way from the dream of Artificial General Intelligence (AGI). Apart from LLMs’ propensity to “hallucinate” (i.e. make up “facts”), a more serious deficiency from the point of view of AGI is their inability to reliably construct complex plans that take account of real-world dynamic constraints and achieve complex goals. It is likely that AGI will eventually be achieved, but that is not imminent, and it will probably not be based purely on LLMs but on a mixture of different AI systems, including “embodied” systems (i.e. those with a physical presence in the real world) and also more traditional AI reasoning methods which do not rely, as LLMs do, purely on statistical associations mined from text. At present, the goals an AI system aims to achieve are all ultimately set by humans, and this remains a strong safeguard, though the possibility of accidentally or maliciously giving dangerous goals to an AI system is ever present, limited only by what actions the AI system can actually perform. What is of real concern today is the possibility of AI systems being used to generate “fake news”, especially on social media, which may cause humans to believe things which are not true, and the possibility of such misinformation “going viral” is a real concern. Equally, with the increasing use of LLMs in chatbots and by the general public in their everyday lives and work, care must be taken to regard these systems as assistants rather than infallible experts.
“Thus, as a society, we must all, individually and collectively, be on our guard against such misinformation and be sure to verify claims using original and/or reputable sources (noting that LLMs often fabricate citations!). Put differently, the fallibility of even the very impressive AI systems recently constructed needs to be recognised, and such systems should not be relied upon blindly, whether by individual users, companies, governments or the electorate. The biased datasets on which these systems are trained are also a cause for concern since, as has been pointed out many times already, they may lead to biased advice or statements from AI systems. The creators of these systems continue to try to construct “guard rails” to help limit the possibility of such outputs being generated, but these can often still be bypassed. AI, like many technologies, has the potential to be of huge benefit to humanity but also the potential to be misused and abused, so regulation is certainly required, and the time to start planning for much more powerful AI systems than those which currently exist is now. Just as planning for other global risks such as disease and nuclear war requires time and complex international negotiations, so will the regulation of AI, but we should not forget its huge potential benefits.”
Dr Andrew Rogoyski, Director of Innovation and Partnerships at the Surrey Institute for People-Centred AI, University of Surrey, said:
“I think the threat could be real, but the method might not be quite what we expect. My abiding concern is that the pace of adoption – displacing jobs, disrupting business sectors and altering global supply chains – could cause an economic shock akin to the recent Ukraine and COVID shocks. Society just won’t have time to adapt. In the long term, people will adapt and find new jobs, skills and careers. Humans are imaginative and adaptable.
“There’s another form of threat – the possibility of being overwhelmed by false information. We’ve already seen what humans can do to each other through misuse of social media; imagine a malicious AI capable of manipulating what we perceive to be true.
“That’s without a super-intelligent AI.
“Imagine how easy it would be for a malicious AI to wreck our fragile economies and cause great harm that way. This is much simpler than the Terminator/Skynet Armageddon by AI.
“The problem is that we don’t want to stop development of AI.
“Let’s not forget that AI is already doing great good. We don’t complain that AI is drastically accelerating drug discovery, improving fraud detection in finance, advancing disease diagnostics, or improving battery technologies. Would you want to do without those?
“It may take an ‘AI incident’ for the world’s nations to get around the table to discuss it.
“Regulation of AI is really hard. It’s a fast-moving technology, with a ‘winner takes all’ prize for those that dominate. Designing interventions that are adopted worldwide, that won’t disrupt the obvious benefits of AI, is really difficult.
“We know that AI is a powerful technology, the generative AI revolution over the last 6 months has made that perfectly clear. What we don’t yet know is what to do about it.
“I’m relieved that the UK Government has woken up to the fact that it is out of step with international opinion: Europe, the US and China are progressing with AI regulation, well ahead of the UK’s laissez-faire approach.”
Dr Oscar Mendez Maldonado, Lecturer in Robotics and Artificial Intelligence, University of Surrey, said:
“The document signed by AI experts is significantly more nuanced than current headlines would have you believe. “AI could cause extinction” immediately brings to mind a Terminator-esque AI takeover. The document is significantly more realistic than that: it warns about the misuse of current AI capabilities (such as drug-discovery AI being used to develop chemical weapons, or robotic navigation being used for weapons targeting) by bad actors. The document is much more an indictment of malicious or neglectful use of AI than a warning about an AI takeover. This is an incredibly important distinction, as other AI experts have already commented on how unlikely an AI takeover is with current technology. In short, it’s much more about what we do to each other using AI than what AI can do to us.”
Dr Carissa Veliz, Associate Professor in Philosophy, Institute for Ethics in AI, University of Oxford, said:
“I worry that the emphasis on existential threat is distracting from more pressing issues, like the erosion or demise of democracy, that CEOs of certain companies do not want to face.
“AI can create huge destruction short of existential risk. I find it much more likely that we can seriously harm people and democracy by relying on AI that is more stupid than smart (or at least as stupid in some ways as it is smart in other ways), than that we face extinction at the hands of AGI.”
Dr Christian Schroeder de Witt, Postdoctoral Research Assistant in Artificial Intelligence, University of Oxford, said:
“Today’s AI safety debate pays too little attention to cybersecurity. The real threat from AI systems is not a lack of alignment with human preferences, but the inadvertent or malicious abuse of AI systems by incompetent or malicious users. AI safety and cybersecurity researchers urgently need to come together to develop a holistic blueprint for AI safety, particularly as our world becomes more and more infused with multi-agent systems comprising both AIs and humans.”
Statement published on the Centre for AI Safety website: https://www.safe.ai/statement-on-ai-risk
Prof Noel Sharkey: “No vested interests.”
Prof Nello Cristianini is the author of “The Shortcut – why intelligent machines do not think like us”, (CRC Press, 2023)
Dr Mhairi Aitken: “I don’t have any conflicts of interest to declare.”
Prof Maria Liakata: “No conflicts.”
Dr Oscar Mendez Maldonado: “I am a lecturer in Robotics and AI at the University of Surrey, as well as an AI Fellow at the Institute for People Centred AI, member of the British Machine Vision Association (BMVA) Executive committee and recipient of InnovateUK funding in robotics and AI for agritech.”
Dr Andrew Rogoyski: “I have no relevant disclosure to make.”
Prof Tony Cohn: “I am seconded to the Alan Turing Institute on a project to evaluate the commonsense reasoning abilities of Foundation Models such as LLMs.”
Dr Christian Schroeder de Witt: “I am currently building an alliance of cyber security and AI safety researchers to advance this aim.”
For all other experts, no reply to our request for DOIs was received.