
Meta’s Self-Rewarding Models, the Key to Superhuman LLMs?

Last Updated on January 31, 2024 by Editorial Team

Author(s): Ignacio de Gregorio

Originally published on Towards AI.

Meta, the company behind Facebook, WhatsApp, and the Ray-Ban Meta glasses, has announced a highly promising AI breakthrough: Self-Rewarding Language Models.

Their results have allowed their fine-tuned LLaMA 2 70B model to surpass models like Claude 2, Gemini Pro, and GPT-4 0613, despite being at least an order of magnitude smaller.

However, that is not the true breakthrough: these models also show signs of offering a plausible path to the first superhuman LLMs, even if that means humans moving one step closer to losing control over our best AI models.

But what does that mean? And is that a good thing?

Let’s find out.

This insight and others I share in Medium have mostly been previously shared in my weekly newsletter, TheTechOasis.

If you want to be up-to-date with the frenetic world of AI while also feeling inspired to take action or, at the very least, to be well-prepared for the future ahead of us, this is for you.

🏝️ Subscribe below 🏝️

The newsletter to stay ahead of the curve in AI: thetechoasis.beehiiv.com

To this day, humans play a crucial role in the creation of all frontier models, such as ChatGPT or Claude.

As explained in my newsletter from two weeks ago, the later stages of the training process of… Read the full blog for free on Medium.
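Although the full walkthrough lives in the Medium post, the core loop from Meta's paper can be summarized: the model writes several candidate answers to a prompt, grades them itself with an LLM-as-a-Judge prompt, and turns the best and worst candidates into preference pairs for another round of DPO fine-tuning, removing humans from the reward step. Below is a minimal, illustrative Python sketch of one such iteration; it is not Meta's code, and the helper names (generate_fn, toy_generate) and the judge prompt are placeholders of my own.

```python
# Illustrative sketch of ONE self-rewarding iteration (not Meta's code).
# Assumptions: `generate_fn(prompt) -> str` is any text-generation callable
# (e.g., a call to your fine-tuned model); the judge score is parsed from
# the model's own output. All names here are hypothetical placeholders.

import random
import re

# The same model that answers prompts is asked to grade its own answers.
JUDGE_TEMPLATE = (
    "Review the response below and assign a score from 0 to 5.\n"
    "Prompt: {prompt}\nResponse: {response}\nScore:"
)


def self_reward_iteration(prompts, generate_fn, n_candidates=4):
    """Build DPO-style preference pairs by letting the model judge itself."""
    preference_pairs = []
    for prompt in prompts:
        # 1. Sample several candidate responses from the current model.
        candidates = [generate_fn(prompt) for _ in range(n_candidates)]

        # 2. Score each candidate with the LLM-as-a-Judge prompt.
        scored = []
        for response in candidates:
            judge_out = generate_fn(JUDGE_TEMPLATE.format(prompt=prompt, response=response))
            match = re.search(r"[0-5]", judge_out)
            scored.append((int(match.group()) if match else 0, response))

        # 3. Keep the best/worst pair as (chosen, rejected) training data.
        scored.sort(key=lambda item: item[0])
        if scored[-1][0] > scored[0][0]:
            preference_pairs.append(
                {"prompt": prompt, "chosen": scored[-1][1], "rejected": scored[0][1]}
            )

    # 4. These pairs would then feed a DPO training step, and the loop repeats
    #    with the newly fine-tuned model (M1 -> M2 -> M3 in the paper).
    return preference_pairs


# Toy stand-in generator so the sketch runs end to end without a real LLM.
def toy_generate(prompt):
    if prompt.startswith("Review"):
        return str(random.randint(0, 5))  # pretend judge score
    return f"answer-{random.random():.3f}"  # pretend free-form answer


if __name__ == "__main__":
    print(self_reward_iteration(["Explain RLHF in one line."], toy_generate))
```

The notable design choice is that a single model plays both roles, generator and judge, so each DPO round can improve not just the answers but also the reward signal that trains the next round.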


Published via Towards AI
