It has been reported that Geoffrey Hinton has left Google and spoken about the dangers of AI.
Prof Duc Pham, Chance Professor of Engineering and Head of School, University of Birmingham, said:
“Some of the current comments on the dangers of AI to humanity could be regarded as hype. Generative AI, as represented by ChatGPT and similar programmes, is indeed powerful. However, on its own, generative AI is not dangerous. Just like nuclear power, it is how people use it and what they do with it that could pose risks. I am very happy to use generative AI to help increase productivity and relieve me of tedious chores.”
Dr Mark Stevenson, an expert in AI from the University of Sheffield, said:
“Geoffrey Hinton is one of the pioneers of deep learning and the recent explosion of Artificial Intelligence applications, such as ChatGPT, can be traced directly back to his foundational work. His resignation from Google, where he has worked for the last decade, is an interesting development in the ongoing debate around these technologies.
“AI technologies already provide significant benefits to society through existing applications to, for example, face/voice recognition as alternatives to passwords and fraud detection algorithms. The risks from the current generative AI technologies largely relate to their ability to generate convincing text and images. It doesn’t require much imagination to see how this could be exploited to create disinformation and, to some extent, this is starting to happen already.
“However, Geoffrey Hinton also warned about the longer-term risks associated with the possibility that AI becomes more “intelligent” than its human creators. He correctly points out that digital intelligence is likely to have the ability to share information far faster than humans are able to. This risk is longer-term but is one that should be considered given the rapid development of these technologies. Doing so won’t be straightforward since this intelligence might take a different form from our own, similar to animal intelligence.”
Dr Michael Cook, Senior Lecturer in Informatics, King’s College London, said:
“AI is an exciting field that has the potential to do a lot of good in the world, but as these technologies are largely concentrated in the hands of a few, their use isn’t always aligned to public needs. Researchers like Hinton have supported Google for years, fully aware of its approach to developing systems, and so this sudden change of heart is a little surprising to me.
“Because of the recent excitement about ChatGPT and its impressive features, it is easy to think that someone like Hinton is departing Google because he has seen something scary, or that a sci-fi dystopia is on the horizon. In my opinion, rather than worrying about the technology itself, we should instead be asking about the structures in place which mean AI can be developed in this way, without the necessary guardrails.
“Ultimately, there is clearly a need for a degree of oversight over what firms release into the public sphere. We have strict safety and testing standards for food, for medicine and for cars. AI-focused tech companies should be expected to behave responsibly, too.
“GPT-4, like all the other AI systems that came before it, does not pose a risk to us or our world on its own. It has no autonomy, no independence, and no way of affecting the world outside. Like most human inventions, the real risk comes from who ends up in control of it, and what they decide to do with it.
“In that sense, GPT-4 is already causing disruption to many parts of everyday life, from education to business, from dating to job interviews. What we really need to be worried about is what tools we are making available to people, and what systems are in place to protect us from their worst effects.
“Our understanding of information is changing all the time. When Photoshop launched in the 1990s we were tricked by clever digital edits. Over time we learned to watch out for certain kinds of tricks, and we also learned what the new technology could and could not do. That’s why education and outreach are so important for helping people protect themselves from misinformation.
“Right now, we’re in that phase where new technology is available, like Photoshop, but we don’t have good tools for detecting it, or good public understanding about what it can do. That makes it quite a dangerous time, and even researchers, including myself, have been tricked by images on the web. I’m confident we can make it past this phase and get over the worst effects of AI fakes, but it’ll take a concerted effort.
“We need much tighter legislation on a range of AI-related topics. These include: laws relating to how data is collected and used to train AI systems, the legal responsibility companies bear when their AI systems cause harm, and clear rules requiring AI to be understandable. Not only does this legislation need to happen, but it also needs to happen in a way that’s unified, and it needs to happen quickly. By diluting regulatory enforcement across a number of small existing governmental bodies, and adding to their already increasing workloads, the governmental response risks losing the public’s confidence in interacting with AI on a daily basis.
“We could institute far stronger requirements on AI systems. For example, the best detection systems for AI-generated text can only identify ChatGPT-authored text about 25% of the time, and they falsely identify human-authored text as fake about 9% of the time. We could legislate that AI systems cannot be released to the public unless their output can be detected confidently with high accuracy. This would force companies to consider safety and detection as a core part of their engineering work, rather than an afterthought.”
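A rough back-of-envelope calculation makes Dr Cook’s point concrete. The sketch below is purely illustrative: the 25% true-positive and 9% false-positive rates come from the quote above, while the prevalence figures are hypothetical, chosen only to show how Bayes’ rule turns those rates into (un)trustworthy verdicts.

```python
# Illustrative only: how reliable is a detector that catches AI text 25% of
# the time (true-positive rate) while wrongly flagging human text 9% of the
# time (false-positive rate)? Prevalence values below are hypothetical.

def flagged_text_is_ai(tpr: float, fpr: float, prevalence: float) -> float:
    """Probability that a flagged text really is AI-generated (Bayes' rule)."""
    ai_flagged = tpr * prevalence
    human_flagged = fpr * (1.0 - prevalence)
    return ai_flagged / (ai_flagged + human_flagged)

# If half of all texts were AI-generated, a flag would be right ~74% of the
# time - yet 75% of AI-generated texts would still slip through undetected.
print(round(flagged_text_is_ai(0.25, 0.09, 0.5), 2))   # 0.74

# If only 10% of texts were AI-generated, a flagged text would most likely
# be a human-written one falsely accused.
print(round(flagged_text_is_ai(0.25, 0.09, 0.1), 2))   # 0.24
```

Either way, the numbers support the argument that current detection accuracy is far from the “confidently with high accuracy” bar proposed in the quote.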
Prof Nello Cristianini, Professor of Artificial Intelligence, University of Bath, said:
“AI is a technology that has the power to revolutionise many economic sectors, for the better, and needs to be pursued, but we should do so with caution. The specific form that AI has taken today is the result of a series of choices, or shortcuts, that allowed it to overcome significant technical problems. For over twenty years AI has been created through the use of statistical machine learning, which means that we cannot easily audit the knowledge that intelligent agents create and use; furthermore, those learning algorithms need to be trained on enormous quantities of data, which are therefore sourced “from the wild”, that is, by observing human behaviour in various ways. The combination of the two means that privacy, biased decisions and opaque decision processes are all issues. In the case of GPT and its numerous relatives, which generate content rather than making simpler decisions such as recommendations, there is also an issue of copyright, as well as of the generation of spam, fake news and inaccurate but convincing content.
“Once coupled with a society, modern AI can learn from it as well as shape it, by selecting or even generating the contents that form part of its culture. That means we need to understand the complex relation between humans and machines, and this should be an urgent undertaking for the natural, social and human sciences. The future challenge for AI is to understand the effect of the shortcuts that we have taken, and how to mitigate their consequences, by creating the right legislation. This is forthcoming in Europe. More work across disciplinary boundaries will be needed.”
Prof David Hogg, Professor of Artificial Intelligence in the School of Computing, University of Leeds, said:
“Large language models such as GPT-4 exploit the simple idea of building a probabilistic model from the sequences of words in huge collections of textual data available on the internet. The standard use is to extend a piece of text in a natural way.
“The surprise was in how these models could be used to do many other things (albeit with frequent mistakes) such as answer general queries, summarise text, translate from one language to another and hold a conversation (as in ChatGPT).
“New understanding of the composition of images is leading to rapid developments on this front also, with well documented applications in the generation of new media.
“There is enormous potential for good in society from these developments, particularly within the health domain, for example.
“By contrast, there is also potential for harm to society.
“The speed with which all of this is happening means that the normal development of public debate and regulatory frameworks around the world has not been able to keep up. We need concerted action to prioritise these developments at pace.
“The challenge is to channel future applications of AI towards positive benefit for the world and away from the negative consequences that can all too easily be envisaged.”
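The “probabilistic model from the sequences of words” that Prof Hogg describes can be sketched in miniature as a bigram model. This is a toy illustration only: real large language models such as GPT-4 use deep neural networks trained on vastly more data, but the core idea of extending text by sampling a statistically likely next word is the same.

```python
import random
from collections import Counter, defaultdict

# Toy bigram model: a heavily simplified sketch of building a probabilistic
# model from word sequences and using it to extend a piece of text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def extend(text: str, n_words: int, seed: int = 0) -> str:
    """Extend a piece of text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = text.split()
    for _ in range(n_words):
        counts = following.get(words[-1])
        if not counts:
            break  # no observed continuation for this word
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(extend("the cat", 4))  # e.g. "the cat sat on the mat"
```

Every continuation the model produces is one it has seen in its training data, which also hints at Prof Hogg’s “surprise”: it is not obvious from this mechanism alone why scaled-up versions should be able to answer queries, summarise or translate.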
Prof Tony Cohn FREng, Professor of Automated Reasoning, University of Leeds, said:
“Hinton’s resignation helps highlight some important issues for AI, society, governments and regulators. AI is not about to take over the world – whilst the current generation of “Foundation Models” are impressive in many ways, and display levels of AI performance not expected so soon, they are certainly not yet at the level of AGI (Artificial General Intelligence). In particular, Large Language Models (LLMs) like ChatGPT are not able to reason reliably; indeed this is the biggest flaw in the current generation of such systems – sometimes they perform amazingly well, but in the next sentence may make a “schoolboy error” displaying great naivety. Moreover, it is not clear that the way these systems are built – by being trained on “the whole of the internet” and making next-word predictions based on the resulting model – will indeed lead to systems which can reason about novel situations. The performance of current systems is best when the task set is closely related to their training data (which is indeed true of any learning system). The problem of “hallucinations” is also still a major issue with LLMs: because of the way they are trained, they don’t actually memorise information – just statistical associations – so they never actually recall facts and information, but rather generate text which is statistically likely to be true, based on the training data. A problem for the future is that, with more and more LLM-generated text now out on the internet, future training of LLMs will be contaminated by such (possibly flawed) LLM text rather than human-written text.
“There certainly are both long- and short-term risks of AI (as well as many potential benefits). In the short term, Hinton highlights risks posed by “bad actors” misusing AI technology, and this is certainly a real problem which needs to be addressed now. The level of fluency of LLMs is impressive, and it’s often hard to tell their output apart from human-generated text. This means that generating text to influence people – and indeed electors – at scale has become easier, which is certainly a cause for concern: “fake news” has become much easier to generate. There are also long-term risks, but no immediate or even medium-term “existential threat” in my view. It would still be sensible, though, to start building regulatory frameworks to be ready for when or if that time comes.
“The issue of ethics has also been raised with regard to LLMs – because they have been trained on biased data, they often generate text which has ethical issues. Certain safeguards have been incorporated into the current generation of systems, but these are not foolproof and indeed can often be circumvented by clever “prompting”.
“The “black box” nature of LLMs is also of concern – we don’t have good ways of determining how or why they produce the text they do. Even if asked to explain their reasoning, the explanation may not correspond to the actual answer given, precisely because of the statistical nature of the text generation process!
“Hinton also comments on the nature of the intelligence being developed in these systems; I agree it is different in the way he suggests, but there is also at least one other way in which it differs: these systems learn in a very different way to that in which humans learn – from massive-scale data, rather than a relatively small number of examples. Humans also learn by being embodied in the world – we experience the world through multiple modalities (sound, vision, touch, taste…) which LLMs don’t, and even for those AI systems which do have multi-modal training data, their experience is still impoverished compared to a human child learning about the world. This makes it likely that the responses given by a system will not always agree with a human-generated response. Moreover, whilst humans can easily adapt and alter their behaviour based on just one piece of feedback, this is not easy to effect in LLMs, which are based on the statistical associations made between billions of lines of text.
“In conclusion, I completely agree that regulation is needed, especially in the short term for social media. Hinton’s statement helps highlight the potential issues of AI systems – the public (as well as legislators) need to be made more aware of the limitations and possible problems with the use of AI systems such as ChatGPT. Historically, people have shown a tendency to believe information generated by a computer and to ascribe human qualities to computers which are not really warranted. The best use case for the current generation of Foundation Models is as an assistant, where the human remains firmly in control of their use, and responsible for the final text disseminated – such as was proposed recently by doctors using an LLM to *draft* responses to patient queries. I would welcome a coordinated, international effort to build a framework for regulation, endorsed by the tech sector, governments and other stakeholders.”
Professor Daniele Faccio FRSE, a Royal Academy of Engineering Chair in Emerging Technologies and Professor of Quantum Technologies, University of Glasgow, said:
“It would seem that we are on the verge of a paradigm change in AI technology and AI capability. There are many opinions circulating about the true nature and extent of this change, but I think it is very hard to predict exactly what will happen and how paradigm-shifting the next advances in AI will actually be. Scientists are still trying to understand the extent and implications of LLMs such as GPT-4, implying that the technology is moving forward faster than expected and is proving hard to keep up with. I think that it is this last aspect that is creating concern – it is not so much a result of what we do know but rather about what we don’t know.
“I think that the main concern is about entering uncharted territory without knowledge of what could happen. Opinions range from “AI will never be truly intelligent” to doomsday scenarios where AI gains the ability to reinvent and reprogram itself in an exponential explosion of intelligence. We simply do not know at the moment which scenario will prevail.
“I think that misinformation and the hype around some aspects of AI do not pose significant risks – or at least, no more so than the risks posed by any other misinformation spread through social media, for example. However, we might want to be concerned about the misinformation that AI itself could start to spread on social media. LLMs are trained using data from the internet, i.e. they are trained on the human interactions that occur over the web, and they are therefore, in principle, learning how to manipulate social media and social interactions at a global level. Should any AI gain any form of autonomy – regardless of whether this is “intelligent”, “conscious” or directed by humans – this is perhaps an aspect that could pose a serious threat.
“We have examples from other areas of research and technology that have been subjected to regulations without curbing research and progress. For example, cloning technology is still moving forward but is not allowed on humans. This is of course a vast oversimplification, but it shows that “responsible innovation” can still take place, with both “innovation” and “responsible” being key parts of the equation. But it seems that we are indeed running the risk of the “innovation” significantly outpacing any attempts to make it “responsible”.”
Dr Jonathan Aitken, an expert in robotics at the University of Sheffield, said:
“The new developments in AI have been enabled by Large Language Models. These are neural networks that can be trained using text, and can use the structure and organisation of that text to build representations that show features of wider-scale intelligence. Large language models capture facets of how humans communicate, and this means that we can ask natural questions and get answers. This has powered the popularity of ChatGPT, as we can ask it any question and get an answer.
“The problem with this is that the answer is solely determined by the data that we train the system on. Any information in the training set is taken as accurate and true. The issue with training a system on live data available on the internet is whether that data is correct. This can have simple explanations due to similarities in data – for example, two people have the same name and a data facet is then associated with that name rather than with the individual.
“However, there is also a more nefarious side. The internet contains opinions, mistruths and sometimes deliberately controversial claims. All of this is contained in the training set, and the model can then be shaped by controversial, inaccurate or ill-founded data. This poses one of the most serious risks. As technology has developed, we have come to rely on its data. We use GPS for navigation, to the extent that we rely on it to lead us to a destination. The risk is that GPS doesn’t have knowledge of the world, so there are numerous cases of it leading drivers down roads unsuitable for cars. ChatGPT could do exactly the same thing for information: ask a question and it could lead you down the garden path, using either mistaken training data or deliberate misdirection.
“One of the biggest things we can do as users of these systems is to always ask questions, and not take what we read at face value. There have been constant calls for regulation, and the biggest difficulty for these systems is around the topics that cause debate. Any AI needs to be trained on a fair body of data which is reflective of the information available. The separation of opinion from fact is an important step, but a difficult one when needing to maintain a balanced point of view. As ever, people will use these systems for a range of different reasons and issues, and being mindful of what these are will always help.
“We also need to see the opportunities ChatGPT presents, and using these tools will become a skill. The key question is: how do you ask a good question? This will become a new skill for people, and efficient and clever use of questions to get the best out of ChatGPT will drive efficiency in the workplace.”
Prof Michael Wooldridge, Professor of Computer Science, University of Oxford, and Director of Foundational AI Research at The Alan Turing Institute, said:
“The rapid take-up of ChatGPT has meant that everyone with a web browser can access the most sophisticated AI on the planet. As a result, issues with the technology that were previously only visible in the research lab are now being experienced by tens of millions of people every day. While in many cases this may be nothing more than an irritation, in some cases it may have serious consequences – for example, someone looking for medical guidance from a chatbot and being given incorrect information. I would be surprised if this sort of situation was not happening right now. Overall, the technology has tremendous value – but we need to be alert to issues like this, and not be afraid to regulate when this is called for. The providers of the technology need to do everything they can to anticipate and prevent misuse.
“I don’t think we need to be concerned about it in the sense that it poses an existential threat – GPT-4 is not going to crawl out of the computer and take over the world. But there are immediate risks. The problem of “hallucinations” (the technology getting things wrong) is high on my list of concerns; it can be used to provide misinformation; and it can be used by criminals to open up a whole new array of scams to try to separate us from our money. I think it is essential to raise awareness of these immediate risks.
“The risks are serious and imminent. Tools like ChatGPT can be used to produce very plausible fake news stories to order – they can even be automatically customized to different demographic groups. We have elections coming in the UK and USA. I am very concerned that we’ll see the weaponization of AI in these – that social media will be swamped with high quality misinformation that will disrupt the democratic processes. The technology to do it exists, and is available; and there are surely many individuals and groups who have an interest in doing so. At the moment I don’t see any obstacle to this.
“The UK’s proposed AI regulation is a pragmatic way forward. But the technology is a moving target, and we need to be pro-active in identifying the challenges and mitigating risks. The most important of those right now is around the use of AI on social media.”
Prof Noel Sharkey, Emeritus Professor of Artificial Intelligence and Robotics, University of Sheffield, said:
“Geoffrey Hinton and others have long warned of the danger of AI becoming smarter than us and rising up against us. There is still no evidence for this.
“But Hinton is absolutely right about the dangers of how humans use the technology. We know that AI algorithms exhibit racial and gender bias in applications including job advertising and interviewing, policing, sentencing and even passport applications. These are the obvious biases that are being noticed, but there may be many more that we have yet to find.
“Another big danger in recent times is fakery. We already have Deepfake videos that look like they are the real person talking.
“ChatGPT is revolutionary. It is a brilliant piece of work that uses a phenomenal amount of data to predict the next word in a sentence on a given topic. The biggest problem is that it has no knowledge or awareness as to whether what it is writing is true or false. It gives answers irrespective of truth. OpenAI does not claim otherwise. In my own tests, I have found many errors as well as manufactured sources and facts. If you are a scholar, you will fact-check, but many will take it at its word and that is the danger. It could be used in so many perilously disruptive ways if humans chose to use it badly.
“The algorithm relies on a huge and impenetrable black box of numbers that gives no explanation of its performance. Any biases in the system will only be detected through analysis of its behaviour. We have no way of finding out how and why it does what it does.
“There is a lot of discussion about students using ChatGPT or other future versions like Google’s BARD to help them with assignments and write their essays. That may be the least of our worries when it is used by bad actors to spread fake political ideas and perpetuate fraudulent behaviour and conspiracy theories in ways that we cannot yet imagine.”
Prof Maria Liakata, Professor in Natural Language Processing, Queen Mary University of London (QMUL), said:
“GPT-4 and other large language models are able to generate very fluent text with an authoritative style, as well as images, bringing together diverse sources of information. It will be much easier than in the past to generate fake content, at a pace that will be hard for humans to verify, especially since the content can be highly plausible. Therefore, the spread of misinformation could increase significantly. Perhaps this will mean that we return to only trusting credible sources such as established news organisations. However, as these models become better at reasoning and their hallucinations become less obvious – which could happen very soon – it is important that we don’t over-rely on them; over-reliance could pose a real danger in all sectors and walks of life. It is really important to think hard about the end game in developing this technology and its implications.”
Prof James Davenport, Hebron and Medlock Professor of Information Technology, University of Bath, said:
“Almost all technology, not just AI, has risks of using it, risks of abusing it, and risks from not using it. A good example is anaesthesia: medical use is highly regulated, it can go wrong, there are issues at the moment with the abuse of “laughing gas” etc., but no-one is seriously proposing to ban all anaesthesia, or put a stop to anaesthesia research. Nor is anyone proposing a complete ban on production of the relevant gases and their precursors. But anaesthesia has been around for nearly 200 years, and we have had time to adjust to it. Similarly, various forms of AI have substantial advantages – “predictive text” in mobile phones is a relatively simple form of AI that many people use, and either correct its (relatively rare) mistakes, or send a “damned predictive text” follow-up. Preliminary results on the “self-driving vehicles only” lanes on some US interstates seem positive, in that there are fewer accidents, though a lot more experimentation and “like for like” statistics are still needed. AI doesn’t get distracted or fall asleep!
“We do need to be worried about the misuse of AI, particularly “Large Language Models” such as ChatGPT. Note that the same challenge is posed by all such models, and lies with the concept, not with the particular implementation, and cannot be cured by “more training” or “a larger model”. There is no sense of ‘meaning’ or ‘truth’ in these systems: they are following the probability distribution(s), or stochastics, of the texts they are trained on: hence the description of them as “stochastic parrots”. This sort of AI can never be relied on. A good example is the Australian libel issue (https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05). The complainant was involved in a bribery case, but as a whistleblower. However, whistleblowers are relatively rare, so ChatGPT followed the probability that people involved in a bribery case that goes to court are normally found guilty and serve a prison sentence, and invented the “fact” that this person had been found guilty and served a prison sentence.
“Again, the problem is the “Large Language Model” style of AI, not all AI. The risk of misinformation here is higher, and essentially uncontrollable. There are numerous attempts to build “guardrails” around these models, but the literature is full of flaws in them, and many people suspect that a perfect, or even hard-to-subvert, guardrail is impossible.
“There is a great deal of (largely uncoordinated) regulation being drafted in many countries. Much of it is about making sure that those who manufacture or deploy AI take suitable precautions (and there are substantial regulatory and technical discussions about what “suitable” might mean). Geoffrey Hinton has essentially resigned from Google because he believes that they – and others in the AI race, with Google largely playing “catch up” in terms of releasing technology without adequate risk assessment or “guardrails” – are releasing technology that, in its current form, is too dangerous to be released into the wild.
“Note that these “Large Language Models” have legitimate uses. Translation is one, where the generation of language is guided by the original text, and hence is unlikely to “invent” jail sentences etc. We have seen early research (see last week) on how they can produce text for patients that is more empathetic than that which doctors produced (though the experiment wasn’t really “like for like”). Many computing professionals are using these systems to produce first drafts of programs.”
Prof Alan Winfield, Professor of Electronic Engineering, University of the West of England (UWE Bristol), said:
“I think Hinton is right to call out the dangers of GPT-4 and related AI technology. As someone who has been worrying about robot and AI ethics for nearly 15 years, and advocating for both responsible innovation and regulation, I feel somewhat vindicated – although no less worried. However, as one of the signatories of the FLI open letter, I am pleased that we – and Hinton – have shone a spotlight that has caught the world’s attention. It is not too late. While regulation is essential and on the way, the tools and standards already exist for ethical risk assessment, AI audit and transparency. I call upon OpenAI, Microsoft, Google and other developers to urgently and verifiably use these tools and standards and – if they can – prove to the world that the technology is safe and in good hands.”
Dr Carissa Veliz, Associate Professor in Philosophy, Institute for Ethics in AI, University of Oxford, said:
“That so many experts are speaking up about their concerns regarding the safety of AI, with some computer scientists going as far as regretting some of their work, should alarm policymakers. The time to regulate AI is now. Up until now we have had a fundamentally optimistic view of digital technology, thinking about best case scenarios and downplaying the risks, assuming that the benefits will always outweigh the drawbacks; that approach is not responsible enough, and it’s unsustainable. What is at stake is our democracies.”
Prof Ibrahim Habli, Professor of Safety-Critical Systems, University of York, said:
“The nature of AI systems such as ChatGPT renders most of our tried and tested risk management techniques deficient. These techniques, which underpin many safety standards and regulations, hinge on the ability to define the purpose of the system (what it is for) and its boundary (where and how it’s used) and this is very hard to do for large and highly interconnected AI systems. This is particularly concerning when AI is invisible, say in clinical diagnosis or judicial reviews. It’s the ghost in the machine and as such it’s challenging to spot and evaluate its potential harm.”
Prof Nello Cristianini: “author of “The Shortcut – why intelligent machines do not think like us” (CRC Press, 2023)”
Professor Daniele Faccio: “No conflicts.”
Prof Noel Sharkey: “no conflict of interests.”
Prof Tony Cohn: “No CoI.”
Prof Alan Winfield: “No conflicts of interest.”
Prof Ibrahim Habli: “I have no competing interests to declare. I currently have funding from UKRI, the Lloyd’s Register Foundation, the MPS Foundation and NHS England.”
Prof James Davenport: “I have no connection with Hinton, and am a user of Google like most of the Western World. I have experimented with Large Language Models in the context of writing computer programs.”
Dr Carissa Veliz: “no conflict of interest.”
For all other experts, no reply to our request for DOIs was received.