Llama Dive
Last Updated on October 31, 2024 by Editorial Team
Author(s): Derrick Mwiti
Originally published on Towards AI.
Photo by Gustavo Espíndola on Unsplash

There is no doubt that generative AI will change every industry. The current state of generative AI, particularly text-to-image and text generation, is the culmination of years of research. For example, current large language models (LLMs) are based on the Transformer architecture, introduced by Google in 2017. Similarly, the U-Net model is a critical component of text-to-image models such as Stable Diffusion.
The generative AI space has shown massive potential thanks to highly performant models such as Stable Diffusion from Stability AI and Llama from Meta AI. This post focuses on the Llama model, which was trained on trillions of tokens. Llama 2 is particularly notable for its performance and permissive license.
Llama 2 was trained on 40% more data than Llama 1, and Llama 2-Chat is a version of Llama 2 fine-tuned for dialogue. Llama 2 also uses grouped-query attention to improve inference scalability.
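Grouped-query attention shares each key/value head across a group of query heads, which shrinks the key/value cache at inference time while keeping many query heads. Below is a minimal NumPy sketch of the idea; the function name, shapes, and head counts are illustrative and not Meta's actual implementation:

```python
import numpy as np

def grouped_query_attention(q, k, v, n_q_heads, n_kv_heads):
    """q: (seq, n_q_heads, d); k, v: (seq, n_kv_heads, d)."""
    group = n_q_heads // n_kv_heads
    # Each KV head serves a group of query heads: repeat it across the group.
    k = np.repeat(k, group, axis=1)
    v = np.repeat(v, group, axis=1)
    d = q.shape[-1]
    # Scaled dot-product attention per head.
    scores = np.einsum("qhd,khd->hqk", q, k) / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return np.einsum("hqk,khd->qhd", w, v)

# Example: 8 query heads sharing 2 KV heads (groups of 4).
q = np.random.randn(4, 8, 16)
k = np.random.randn(4, 2, 16)
v = np.random.randn(4, 2, 16)
out = grouped_query_attention(q, k, v, n_q_heads=8, n_kv_heads=2)
```

With `n_kv_heads == n_q_heads` this reduces to standard multi-head attention; with `n_kv_heads == 1` it becomes multi-query attention, so GQA interpolates between the two.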
Llama 2 training proceeds in three stages:

- Pretraining on publicly available datasets
- Supervised fine-tuning to create Llama 2-Chat
- Further tuning with Reinforcement Learning from Human Feedback (RLHF)
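The supervised fine-tuning and RLHF stages produce Llama 2-Chat, which expects dialogue prompts in a specific template: the system prompt is wrapped in `<<SYS>>` tags inside the first `[INST]` block. A minimal single-turn formatter might look like this (the helper name is illustrative):

```python
def format_llama2_chat(system_prompt: str, user_message: str) -> str:
    """Build a single-turn Llama 2-Chat prompt.

    The system prompt sits inside <<SYS>> tags within the first
    [INST] ... [/INST] block; the model's reply follows [/INST].
    """
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = format_llama2_chat(
    "You are a helpful assistant.",
    "What is grouped-query attention?",
)
```

Getting this template right matters in practice: the chat model was fine-tuned on exactly this structure, and deviating from it degrades response quality.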
Paper: https://arxiv.org/pdf/2307.09288.pdf
Notable technical details about Llama 2 are,…
Published via Towards AI