LLaMA by Meta leaked on an anonymous forum: Questions arise for Meta
Last Updated on July 17, 2023 by Editorial Team
Author(s): Aditya Anil
Originally published on Towards AI.
How this leak on 4chan poses a critical question about powerful AI models.
Only a few weeks ago, Meta announced its new AI tool for researchers. Then things took a different turn.
Recently, Meta's "state-of-the-art" AI model was leaked online, and the leak quickly drew wide attention across the internet and the major AI communities.
After OpenAI released ChatGPT, Microsoft presented its new Bing AI, Google unveiled its Bard, and several others followed; Meta was no different. And just as details of the ChatGPT-powered Bing leaked, LLaMA by Meta also leaked, this time via the anonymous forum 4chan.
LLaMA: Meta's new AI tool
According to the official release, LLaMA is a foundational language model developed to help "researchers and academics" (as opposed to the average web user) understand and study NLP models. Leveraging AI in this way could save researchers a great deal of time.
You may not know this, but LLaMA is Meta's third LLM, after BlenderBot 3 and Galactica. Both earlier models were shut down soon after launch, and Meta stopped their further development because they produced erroneous results.
Before moving further, it is important to emphasize that LLaMA is NOT a chatbot like ChatGPT. As I mentioned before, it is a "research tool" for researchers. We can expect the initial versions of LLaMA to be more technical and indirect to use, as opposed to ChatGPT, which is direct, interactive, and much easier to use.
"Smaller, more performant models such as LLaMA enable … research community who don't have access to large amounts of infrastructure to study these models … further democratizing access in this important, fast-changing field," said Meta in its official blog.
Meta's effort to "democratize" access could shed light on one of the critical issues of generative AI: toxicity and bias. ChatGPT and other LLMs (obviously, I am referring to Bing) have a track record of responding in ways that are toxic and, well… evil. The Verge and other major outlets have covered this in much detail.
Oh, and the community did get access, just not in the way Meta anticipated. On March 3rd, a downloadable torrent of the LLaMA system was posted on 4chan, an anonymous online forum known for its controversial content and diverse range of discussions, with roughly 22 million unique monthly visitors.
LLaMA is currently not used in any of Meta's products, but Meta plans to make it available to researchers before using it in products of its own. It's worth mentioning that Meta did not release LLaMA as a public chatbot; it is more of an open-source research package, accessible to vetted researchers upon request.
This article is part of Creative Block, my personal newsletter where I share articles and posts about tech and AI. If you'd like to read more content like this, check out the Creative Block website!
Meta's intention behind releasing LLaMA, as I covered before, was to democratize access to AI so that its problems can be researched. Unlike OpenAI's ChatGPT, LLaMA does not require large infrastructure and computing power to run, which makes it easier to access for its consumers (who, for the time being, are predominantly researchers). The company believes that smaller, more performant models such as LLaMA will enable researchers without access to large amounts of infrastructure to study these models.
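To put rough numbers on that claim, here is a back-of-the-envelope sketch of my own (the parameter counts come from Meta's release; the two-bytes-per-parameter figure assumes 16-bit weights and counts only the weights, not inference overhead):

```python
# Rough memory needed just to hold the model weights at 16-bit
# (2 bytes) per parameter -- inference overhead excluded.
LLAMA_SIZES = {"7B": 7e9, "13B": 13e9, "33B": 33e9, "65B": 65e9}

for name, n_params in LLAMA_SIZES.items():
    gib = n_params * 2 / 1024**3
    print(f"LLaMA-{name}: ~{gib:.0f} GiB of weights")

# LLaMA-7B comes out at roughly 13 GiB -- within reach of a single
# high-end GPU, unlike the clusters that far larger models require.
```

That single-GPU footprint for the smallest model is exactly what "without access to large amounts of infrastructure" means in practice.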
"Even with all the recent advancements in large language models, full research access to them remains limited because of the resources that are required to train and run such large models," according to this blog post by Meta. Further, they said, "This restricted access has limited researchers' ability to understand how and why these large language models work, hindering progress on efforts to improve their robustness and mitigate known issues, such as bias, toxicity, and the potential for generating misinformation."
When news of the leak broke, reactions from the AI community were mixed. Some said the leak would have troubling consequences, essentially blaming Meta for distributing the technology too freely.
"Get ready for loads of personalized spam and phishing attempts," tweeted cybersecurity researcher Jeffrey Ladish after the news broke. Others say that greater access will improve AI safety.
Powerful LLMs: What to hope for
Whether or not to agree with Ladish's view is debatable. Personally, I feel open-sourcing AI models can only benefit the AI community, letting it scrutinize the models and improve them. What do you think? After all, one of LLaMA's major goals is to "democratize" access to such models. But access in the form of a leak puts Meta into question: how does it handle its tools and conduct public releases?
Most users who got the leaked copies soon discovered that LLaMA was not at all similar to ChatGPT. "Downloading" LLaMA does very little for the average internet user, because it is a "raw" AI system that needs a decent amount of technical expertise to get up and running.
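To make that concrete, here is a minimal sketch of what "getting it up and running" roughly looks like. It assumes the checkpoint has already been converted to the Hugging Face format and saved under a hypothetical local path (the raw leaked files need a separate conversion step, plus a GPU with enough memory):

```python
# Minimal sketch: raw text-completion inference with a locally stored
# LLaMA-style checkpoint via the Hugging Face `transformers` library.
# "./llama-7b-hf" is a hypothetical path to already-converted weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./llama-7b-hf"  # hypothetical local directory
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision to fit on one GPU
    device_map="auto",          # requires the `accelerate` package
)

# No chat interface: the base model simply continues the text it is given.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Notice what is missing: there is no conversation loop, no system prompt, and no safety layer. The model just predicts the next tokens, which is why it feels so "raw" next to ChatGPT.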
However, as I write this, Meta has not publicly acknowledged the leak, nor has it commented on it.
There are both positive and negative consequences to this leak. On the one hand, unrestricted access to LLaMA could help researchers understand how and why large language models work, which could lead to improvements in robustness and reductions in bias and toxicity. That could genuinely help curb the potential of these troublesome machines to generate misinformation.
On the other hand, the leak could lead to people misusing the model itself. It is not yet perfect, which is why Meta has not fully released it to the public. Risks such as spam and phishing could become very hard to tackle if such powerful models are put to malicious use.
Thus, strong safeguards must be applied to the use of these models. We are already seeing such tools, like OpenAI's AI Text Classifier, emerge, so there is reason for hope here.
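OpenAI's Text Classifier itself was a web tool rather than a library, but detector-style safeguards generally work like the sketch below: a classifier scores a piece of text for how machine-generated it looks. The model id here is an assumption on my part (an older GPT-2-era output detector published on the Hugging Face Hub), not OpenAI's tool:

```python
# Sketch of a detector-style safeguard: score a text with an
# off-the-shelf classifier trained to flag machine-generated prose.
# The model id is an assumption (a GPT-2-era output detector on the
# Hugging Face Hub); OpenAI's own Text Classifier was a web tool.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

text = "Dear user, your account has been selected for a special reward..."
result = detector(text)[0]
print(f"label={result['label']}, score={result['score']:.2f}")
```

Detectors like this are imperfect and easy to evade, which is exactly why they need to keep improving alongside the models they police.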
AI is exciting, no doubt. But it is a lot scarier if we lose control over it.
See you in the next post
Creative Block | Aditya Anil
Explore AI and Tech insights that matter to you in a creative way. Weekly Tech and AI newsletter by Aditya Anil. Click…
creativeblock.substack.com
Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.
Published via Towards AI