
expert reaction to reports of new Meta large language model, Llama 2

There have been reports of a new large language model from Meta, Llama 2 (Large Language Model Meta AI).

 

Dr Stuart Armstrong, Co-Founder and Chief Researcher at Aligned AI, said:

“Meta’s Llama 2 is probably the most powerful publicly available LLM – but it is ultimately limited in capabilities because it isn’t safe and reliable enough. Self-driving cars fail when they go out of their training areas to new cities; ChatGPT’s functionality was locked down around everything controversial, just to stop it being racist or insulting. Llama 2 will have the same problems: it will misbehave in many new situations, and this makes it unsuited to general use in delicate and dangerous situations (such as running bank accounts, social media posts for corporations, customer service…).”

 

Prof Anthony G Cohn FREng, Professor of Automated Reasoning, University of Leeds, and Turing Fellow at The Alan Turing Institute, said:

“The decision by Meta to make their Llama 2 model open source is interesting but not game-changing – other LLMs have already been released as open source, though this is probably amongst the most powerful so far. In general, releasing software as open source is a good thing, as it enables scrutiny by the wider scientific community and enables others to build on it and learn from it, especially since it will be accessible via an API. The fact that the licence also allows commercial use may help expand the uses of LLMs, though it will be important for such providers to ensure that their applications act as assistants rather than decision makers – the unreliability of reasoning in LLMs and their propensity to “hallucinate” mean that their output should always be subject to human review.

“However, there are many potential benefits of AI technology, as has been highlighted by the recent open letter from the British Computer Society calling for AI to be a force for good, transforming every area of our lives from healthcare to the workplace. Sir Nick Clegg mentioned on the Today programme that Meta decided not to open-source their voice model (Voicebox), which is probably a good idea given the sensitivity and increased potential for damaging misuse compared to an LLM. It should also be noted that Meta have not made clear what Llama’s training dataset is; this is frustrating to the academic community, as it prevents scrutiny of possible biases in the training data and limits the usefulness of this open-sourcing.

“Finally, there has been a lot of discussion recently about the need for codes of practice and regulation in this area, and that is certainly needed; clearly such regulation must cover not only the Big Tech providers but also any user or developer of an AI system. Enforcing such regulation will be challenging, though, as noted by Dame Wendy Hall on the Today programme.”

 

Prof Maria Liakata, Professor in Natural Language Processing, Queen Mary University of London (QMUL), said:

“It is great to have open-source models that can be officially used for both research and commercial purposes, especially ones that seem to perform on par with closed-source models on several benchmark tasks and outperform most existing open-source chat models. Llama 2 comes in several model sizes, which helps with academic budgets, and means that small companies finally have a chance to develop products based on high-performing LLMs. However, Llama 2 will not be appropriate for all applications, as it is unclear from the accompanying paper what data the model has been trained on and what inherent biases it may contain.”
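
For readers who want to see what the point about model sizes looks like in practice, here is a minimal sketch of loading and querying the smallest chat-tuned Llama 2 variant through the Hugging Face transformers library. This is an illustration only: it assumes transformers is installed and that access to the gated meta-llama checkpoints has been approved; the model identifier is the publicly listed one.

    # Minimal sketch: querying a smaller chat-tuned Llama 2 variant.
    # Assumes the Hugging Face transformers library is installed and that
    # access to the gated meta-llama checkpoints has been approved.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"  # smallest chat-tuned size

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "In one sentence, what does an open-source licence permit?"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The larger checkpoints follow the same naming pattern, so moving between sizes is a one-line change – the budget flexibility noted above.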

 

Dr Daniel Chávez Heras, Lecturer in Digital Culture and Creative Computing at King’s College London, said:

“In the scientific and wider academic communities, the enthusiasm for Llama is largely rooted in access to the inner workings of these foundational models – the recipes, so to speak – as opposed to only the outputs. Meta’s logic is in this sense closer to Stability AI’s, in that they see an advantage in steering the open-source and creative ecosystem towards their models, in contrast to, for example, OpenAI, which keeps its models fenced off in order to put their outputs behind a paywall and integrate them with Microsoft’s existing services.

“None of these companies have figured it out yet; they are testing approaches to strike the right balance between compute and human resources, against the backdrop of imminent regulation in different parts of the world. The gambit of opening up foundational models is to entice a broader research community to engage and develop with them. This push for openness is not to be confused with any kind of democratic idealism; it is a strategic brain grab designed to compete. And it is important to understand that there is no reason other organisations cannot compete for brain and creative power too – think, for example, of Vicuna, yet another large language model named after a South American camelid, which is open source and was developed by a consortium of universities.”

 

Dr Mark Stevenson, Senior Lecturer in Computer Science at the University of Sheffield, said:

“Meta’s release of the Llama 2 large language model follows quickly on the heels of Google making Bard, their own large language model, available in Europe last week. Llama 2 is built using very similar technology to its predecessor, Llama, which has been available since February, but is trained on many more examples of text. Increasing the amount of text used for training tends to improve the performance of large language models, although it is difficult to say by how much. The most significant change is that Llama 2 is more open than its predecessor. Meta has released many of the details about how the model was created, including the code, allowing researchers to explore its properties. Llama 2 has also been released under a more flexible licence which supports both research and commercial use.”
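
As a small illustration of the scrutiny this openness allows, the released checkpoints can also be inspected programmatically. The sketch below – again assuming the Hugging Face transformers integration and approved access to the gated meta-llama repositories – reads each model’s published configuration file (no weights are downloaded) and prints a couple of architectural properties:

    # Hedged sketch: comparing the architectures of the Llama 2 checkpoints
    # by reading their published configuration files.
    from transformers import AutoConfig

    for size in ("7b", "13b", "70b"):
        config = AutoConfig.from_pretrained(f"meta-llama/Llama-2-{size}-hf")
        print(size, config.hidden_size, config.num_hidden_layers)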

 

Prof Duc Pham, Chance Professor of Engineering, University of Birmingham, said:

“We should welcome Llama.  The more competition in this space, the better.  Llama being open source is great news as this should promote verifiability and eventually trust.”

 

Dr Mhairi Aitken, Ethics Fellow, The Alan Turing Institute, said:

“Meta are clearly signalling that they are taking a different approach to their competitors – particularly OpenAI – in releasing their model as open source. But we should be cautious about some of the rhetoric being used here. Yes, the model has been made available for use and for developers to build on, which also enables scrutiny of some of its limitations and risks, but this openness doesn’t extend to transparency about what is in the data the model was trained on. Meta remain tight-lipped on important details here.

“The idea that Meta’s approach reflects a commitment to democratising AI is highly questionable. While the model is available for use, it has been developed behind closed doors. Truly democratising AI requires wider public discussion of the important decisions made in the design, development and deployment phases, including which datasets – or whose data – are used to train the models. We also need greater public discussion of the ways these models are used and of the safeguards needed to protect people from their harms – and these conversations need to happen before the models are released into the public domain. The worry is that, as the models become increasingly accessible and are used in an ever wider range of ways, rather than democratising AI we will instead see marginalised or vulnerable communities increasingly experience the worst of its impacts, while developers find new ways to profit from its use.”

 

Dr Kim Watts, Lecturer in the School of Management, University of Bath, said:

“The most powerful aspect of this news is the tie-up with Microsoft Azure. Making the open-source code available to developers releasing modules for aspects of knowledge management, such as data lakes and data warehouses, can really help people avoid the data swamp that often occurs with digital transformation projects, and achieve better business intelligence.

“The open-source aspect will also help universities and the next generation of students prepare for careers in this exciting and fast-moving space. Meta’s benchmarks for bias and hallucination are also a positive move, and it would be good if the rest of the industry adopted them. As ever with these things, it is a race for mindshare and market share, as well as to be the ones who shape the standardisation efforts – while raising the drawbridge to limit new entrants.”

 

Dr Oli Buckley, Associate Professor in Cyber Security, School of Computing Sciences, University of East Anglia, said:

“I think the truth is that, right now, we don’t know whether this is going to be a problem. While I don’t think we’re going to face a killer-robot scenario in the near future, it is important to understand these technologies a little more before releasing the source code. Every significant technological innovation of the last 100 years has had some capacity for misuse, with no shortage of people ready and willing to misuse it. The difference between a nuclear weapon and an LLM is that we can at least identify people procuring the pieces needed to make a nuclear weapon; it is much harder to identify who is exploiting AI for something untoward.”

 

Prof Neil Lawrence, DeepMind Professor of Machine Learning, Department of Computer Science and Technology, University of Cambridge, said:

“This is surfacing an important debate in generative AI. So far we’ve mainly heard arguments that only large tech companies can control this technology, but if that is true then it is also very disturbing. It would mean we are devolving the policing and regulation of our information society to a small number of entities with no democratic accountability. Many in the global tech community have been very concerned to see the speed with which UK policy seems to have been captured by these voices, and if nothing else Meta’s announcement should serve to raise awareness of the spectrum of possible futures this technology offers.”

 

Prof James Davenport, Hebron and Medlock Professor of Information Technology, University of Bath, said:

“Immediate reaction: “Yawn”. If you want an analogy, it’s like a truck manufacturer announcing, not a new model, but new conditions of service and garages. Interesting if you own that sort of truck, possibly slightly interesting if you’re looking at buying a truck, but doesn’t change most people’s lives.

“More seriously, this is another development in large language models, trained on ‘we know not what’, with uncertain (and probably ever-changing, as the builders play “whack-a-mole” with the hackers) “guardrails”. The contractual terms are different, and possibly more favourable – but do they include any liability warranties?”

 

Dr Andrew Rogoyski, Director of Innovation and Partnerships at the Surrey Institute for People-Centred AI, University of Surrey, said:

“A leaked paper from Google back in May, “We have no moat, and neither does OpenAI”, highlighted internal concerns that the open-source community could erode any advantage gained by proprietary AI leaders like OpenAI. Meta’s decision to open source their models may play directly to this concern.

“Meta are evidently following a different strategy on AI, having fallen behind leaders like OpenAI and Anthropic. Open-sourcing their LLMs is great for the research and developer community – it democratises access to AI and improves scrutiny of these powerful technologies. That said, it also puts these technologies in the hands of people who will misuse them. We’re already seeing a rise in AI-enabled spam, phishing and fake imagery.

“What we shouldn’t overlook is that Meta have one of the most valuable sources of data with which to train LLMs, their 2 billion plus users of Facebook, Instagram and now Threads. So perhaps they don’t need to keep their LLMs proprietary, like their competition, as long as they lead on training data.

“It is striking that Microsoft is backing both Meta and OpenAI in really significant ways; they are playing the odds on AI, ensuring that they don’t miss out as they have on mobile and social media.”

 

 

Examples of the reports:

Independent: https://www.independent.co.uk/tech/meta-llama-chatgpt-ai-rival-b2377802.html

AP News: https://apnews.com/article/meta-ai-zuckerberg-llama-chatgpt-9431120efcc8e598c3d34af9b5201d1c

 

 

Declared interests

Dr Stuart Armstrong: “I cofounded Aligned AI, which is working on making AIs safe and aligned.”

Prof Anthony G Cohn: “No conflicts of interest to declare.”

Prof Duc Pham: “I have no conflict of interest to declare.”

Dr Mhairi Aitken: “I do not have any conflicts of interests to declare.”

Dr Oli Buckley: “no COIs.”

Prof James Davenport: “No COI.”

For all other experts, no reply to our request for declarations of interest was received.
