LLaMA by Meta leaked on an anonymous forum: Questions arise about Meta

Last Updated on July 17, 2023 by Editorial Team

Author(s): Aditya Anil

Originally published on Towards AI.

How this leak on 4chan poses a critical question about superintelligent models.

Image generated via Stable Diffusion

Only a few weeks ago, Meta announced its new AI tool for researchers. Then things took a different turn.

Recently, the 'state-of-the-art' AI model developed by Meta was leaked online, and the leak drew wide attention from the internet and major AI communities.

After OpenAI released ChatGPT, Microsoft presented its Bing AI and Google unveiled its Bard. Meta was no different. And just as the new ChatGPT-powered Bing leaked early, LLaMA by Meta was also leaked, on the anonymous forum 4chan.

LLaMA: Meta's new AI tool

Image by Author

According to the official release, LLaMA is a foundational language model developed to help 'researchers and academics' (as opposed to the average web user) understand and study NLP models. Leveraging AI in this way could save researchers considerable time.

You may not know this, but LLaMA is Meta's third LLM, after BlenderBot 3 and Galactica. However, both earlier models were shut down soon after launch, and Meta stopped their further development because they produced erroneous results.

Before moving further, it is important to emphasize that LLaMA is NOT a chatbot like ChatGPT. As I mentioned before, it is a 'research tool' for researchers. We can expect the initial versions of LLaMA to be more technical and indirect to use, as opposed to ChatGPT, which is very direct, interactive, and much easier to use.
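
To make that difference concrete, here is a minimal sketch of what querying a raw foundational model looks like, assuming a community-converted LLaMA checkpoint on the Hugging Face Hub. The model ID below is hypothetical, and the weights themselves require Meta's approval to obtain.

```python
# A minimal sketch, not Meta's official interface. Assumes a community-
# converted LLaMA checkpoint; "some-org/llama-7b-hf" is a hypothetical ID.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/llama-7b-hf"  # hypothetical converted checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A raw foundational model only continues text; there is no chat layer,
# so prompts must be phrased as text to complete, not questions to answer.
prompt = "The key challenges in studying large language models are"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

There is no system prompt, no dialogue memory, and no safety layer here: everything that makes ChatGPT feel like a finished product is absent from the raw model.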

"Smaller, more performant models such as LLaMA enable … research community who don't have access to large amounts of infrastructure to study these models … further democratizing access in this important, fast-changing field," said Meta in its official blog.

Meta's effort to 'democratize' access could shed light on one of the critical issues of generative AI: toxicity and bias. ChatGPT and other LLMs (obviously, I am referring to Bing) have a track record of responding in ways that are toxic and, well… evil. The Verge and other major critics have covered it in much detail.

Oh, and the community did get access, but not in the way Meta anticipated. On March 3rd, a downloadable torrent of the LLaMA system was posted on 4chan, an anonymous online forum known for its controversial content and diverse range of discussions, with nearly 222 million unique monthly visitors.

LLaMA is currently not in use in any of Meta's products, but Meta plans to make it available to researchers before using it in products of its own. It is worth mentioning that Meta did not release LLaMA as a public chatbot; it is more of an open-source package that trusted researchers can access upon request.

This article is part of Creative Block, my personal newsletter where I share articles and posts about tech and AI. If you'd like to read more content like this, check out the Creative Block website!

Meta's intention behind releasing LLaMA, as I covered before, was to democratize access to AI in order to research its problems. Unlike OpenAI's ChatGPT, LLaMA does not require large infrastructure and computing power to run, making it easier to access for its users (who, for the time being, are predominantly researchers). The company believes that smaller, more performant models such as LLaMA will enable researchers without access to large amounts of infrastructure to study these models.
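
A rough back-of-the-envelope calculation shows why model size matters so much here. This sketch assumes 16-bit weights and counts only the memory needed to hold the parameters, ignoring activations and other overhead:

```python
# Approximate GPU memory needed just to hold model weights in fp16.
# Assumes 2 bytes per parameter; ignores activations, KV cache, etc.
def weight_memory_gb(n_params_billion: float, bytes_per_param: int = 2) -> float:
    # 1e9 params * bytes_per_param bytes / 1e9 bytes-per-GB = GB
    return n_params_billion * bytes_per_param

for name, size_b in [("LLaMA-7B", 7), ("LLaMA-65B", 65), ("GPT-3-class model", 175)]:
    print(f"{name}: ~{weight_memory_gb(size_b):.0f} GB of weights")

# LLaMA-7B:          ~14 GB  -> fits on a single high-end GPU
# LLaMA-65B:         ~130 GB -> needs a multi-GPU server
# GPT-3-class model: ~350 GB -> out of reach for most academic labs
```

Only the smallest LLaMA variants fit on hardware an individual researcher is likely to have, which is exactly the audience Meta says it is targeting.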

"Even with all the recent advancements in large language models, full research access to them remains limited because of the resources that are required to train and run such large models," according to this blog post by Meta. Further, they said, "This restricted access has limited researchers' ability to understand how and why these large language models work, hindering progress on efforts to improve their robustness and mitigate known issues, such as bias, toxicity, and the potential for generating misinformation."

When news of the leak broke, reactions from the AI community were mixed. Some said the leak would have troubling consequences, essentially blaming Meta for distributing the technology too freely.

"Get ready for loads of personalized spam and phishing attempts," tweeted cybersecurity researcher Jeffrey Ladish after the news broke. Others argued that greater access will improve AI safety.

Powerful LLMs: What to hope for

Image by Author

Whether to agree with Ladish's view is debatable. Personally, I feel open-sourcing AI models can only benefit the AI community, allowing it to scrutinize models and improve them for the better. What do you think? After all, one of LLaMA's major goals is to 'democratize' access to such models. But access arriving in the form of a leak raises questions about how Meta handles its tools and conducts public releases.

Most of the users who got the leaked copies soon discovered that LLaMA was not at all similar to ChatGPT. "Downloading" LLaMA does very little for the average internet user, because it is a "raw" AI system that needs a decent amount of technical expertise to get up and running.

As I am writing this, Meta has not yet publicly acknowledged the leak, nor commented on it.

There are both positive and negative consequences to this leak. On the one hand, unrestricted access to LLaMA could help researchers understand how and why large language models work, which could lead to improvements in robustness and reductions in bias and toxicity. This could really help in curbing these troublesome machines' potential for generating misinformation.

On the other hand, the leak could lead to people misusing the model itself. It is not yet perfect, which is why Meta has not fully released it to the public. Risks such as spam and phishing could be really hard to tackle if such superintelligent machines are misused.

Thus, strong safeguards must be applied to the use of these models. We can already see such tools, like OpenAI's AI Text Classifier, emerging. So there is positive hope for this.

AI is exciting, no doubt. But it is a lot scarier if we lose control over it.

See you in the next post.

Creative Block | Aditya Anil

Explore AI and Tech insights that matter to you in a creative way. Weekly Tech and AI newsletter by Aditya Anil. Click…

creativeblock.substack.com


Published via Towards AI
