Fine-Tuning Models using Prompt-Tuning with Hugging Face’s PEFT Library
Author(s): Pere Martra Originally published on Towards AI. Prompt Tuning is such a simple technique that it’s surprising how remarkably efficient it can be. It’s the form of fine-tuning that requires the fewest weight modifications and the only one that allows multiple …
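The article's own code is not reproduced in this excerpt; as a quick illustration of the technique named in the title, a minimal prompt-tuning setup with Hugging Face's PEFT library might look like the sketch below. The model name and hyperparameters are placeholders, not the article's choices.

```python
# Minimal sketch: attach trainable virtual prompt tokens to a frozen causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model_name = "bigscience/bloomz-560m"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.RANDOM,
    num_virtual_tokens=8,               # only these prompt embeddings are trained
    tokenizer_name_or_path=model_name,
)
peft_model = get_peft_model(model, peft_config)
peft_model.print_trainable_parameters()  # a tiny fraction of the total weights
```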
WizardCoder: Why It’s the Best Coding Model Out There
Author(s): Luv Bansal Originally published on Towards AI. In this blog, we will dive into what WizardCoder is and why it stands out as the best coding model in the field. We’ll also explore why its performance on the HumanEval benchmark is …
LangChain 101: Part 2ab. All You Need to Know About (Large Language) Models
Author(s): Ivan Reznikov Originally published on Towards AI. This is part 2ab of the LangChain 101 course. It is strongly recommended that you check the first part to better understand the context of this article (follow the author in order not to miss …
How to Fit Large Language Models in Small Memory: Quantization
Author(s): Ivan Reznikov Originally published on Towards AI. Large Language Models can be used for text generation, translation, question-answering tasks, etc. However, LLMs are also very large (obviously, being Large Language Models) and require a lot of memory. This can make them challenging …
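The article's code is not shown in this teaser; as an illustration of the topic, a minimal sketch of loading a model in 4-bit with transformers and bitsandbytes is below. The model name and quantization settings are placeholders, not the article's configuration.

```python
# Minimal sketch: load a causal LM with 4-bit quantized weights to save memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "tiiuae/falcon-7b-instruct"  # placeholder model id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit (NF4) form
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # run computation in bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",                      # place layers on available devices
)
```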
How Does an LLM Generate Text?
Author(s): Ivan Reznikov Originally published on Towards AI. This article won’t discuss transformers or how large language models are trained. Instead, we will concentrate on using a pre-trained model. All the code is provided on GitHub and Colab. Let’s look at the …
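The GitHub and Colab code referenced above is not reproduced here; as a small illustration of using a pre-trained model to generate text, a sketch with a placeholder model and placeholder decoding settings might look like this.

```python
# Minimal sketch: sample a continuation from a pre-trained causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder small model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Large language models generate text by", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,    # sample from the next-token distribution
    temperature=0.8,   # sharpen or flatten that distribution
    top_p=0.95,        # nucleus sampling
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```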
How I Use ChatGPT as an LLM Engineer to Create Projects Fast
Author(s): Kris Ograbek Originally published on Towards AI. Even if I don’t know the technologies I’m using in the project. Image created by the author using Leonardo.ai. Prompt: “A collaboration …
Candle and Falcon: A Guide to Large Language Models in Rust
Author(s): Ulrik Thyge Pedersen Originally published on Towards AI. Step-by-Step Guide to Text Generation Using HuggingFace’s Falcon Model. Image by Author with @MidJourney. Artificial Intelligence (AI) continues to shape the …
What Is the Future of Conversational Assistance in the ChatGPT Era?
Author(s): Patrick Meyer Originally published on Towards AI. With the emergence of ChatGPT, the world of conversational assistance solutions has undergone a seismic shift. This article looks at the remarkable capabilities of these solutions and the profound impact they have on meeting …
How to Use LangChain’s Chains and GPT Models to Generate Endless Content Ideas: A Step-by-Step Guide
Author(s): Kris Ograbek Originally published on Towards AI. Large Language Models shine when you combine them. Image created by the author with Leonardo.ai. ChatGPT is powerful, but it has a …
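The article's step-by-step code is not included in this teaser; as an illustration of the general idea, a content-idea chain built with LangChain's classic LLMChain interface (the import paths used in 2023-era LangChain) might look like the sketch below. The prompt wording and model settings are placeholders.

```python
# Minimal sketch: a single-prompt chain that asks an OpenAI model for content ideas.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Suggest five blog post ideas about {topic}, one per line.",
)
idea_chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)

# Requires OPENAI_API_KEY to be set in the environment.
print(idea_chain.run(topic="prompt engineering"))
```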
Create a Self-Moderated Commentary System with LangChain and OpenAI
Author(s): Pere Martra Originally published on Towards AI. We’re going to create a self-moderated comment response system using two of the models available from OpenAI, chained with LangChain, to prevent our system from being trolled.
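The article's implementation is not shown here; as a rough sketch of the idea in the teaser (one model drafts a reply, a second model moderates it), a chain built with LangChain's classic SimpleSequentialChain might look like this. The prompts, temperatures, and example input are placeholders, not the article's code.

```python
# Minimal sketch: responder model -> moderator model, chained end to end.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

responder = LLMChain(
    llm=OpenAI(temperature=0.7),
    prompt=PromptTemplate(
        input_variables=["comment"],
        template="Write a reply to this customer comment: {comment}",
    ),
)
moderator = LLMChain(
    llm=OpenAI(temperature=0.0),
    prompt=PromptTemplate(
        input_variables=["draft"],
        template="Rewrite this reply so it is polite and free of insults: {draft}",
    ),
)

moderated_reply = SimpleSequentialChain(chains=[responder, moderator])
print(moderated_reply.run("Your product broke after one day, you scammers!"))
```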
How to Create a LLaMa 2 Chatbot with Gradio and Hugging Face in Free Colab
Author(s): Kris Ograbek Originally published on Towards AI. Thanks to Gradio, you build the chatbot UI in one line of code! Image created by the author with Leonardo.ai. Prompt: “A …
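To illustrate the one-line claim in the teaser (this is not the article's code), a Gradio chat UI can be created from a single ChatInterface call; the response function below is a placeholder echo, where a real app would call the LLaMa 2 model instead.

```python
# Minimal sketch: a chat UI wrapped around a placeholder response function.
import gradio as gr

def respond(message, history):
    # Placeholder: a real chatbot would pass `message` (and `history`) to LLaMa 2.
    return f"You said: {message}"

gr.ChatInterface(respond).launch()  # the one-liner that builds and serves the UI
```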
Sentiment Analysis Performed on Turkey Earthquake Tweets
Author(s): Claudio Giorgio Giancaterino Originally published on Towards AI. Sentiment analysis is a Natural Language Processing technique used to tag a given text with a sentiment such as positive, negative, or neutral. Usually, sentiment analysis is used in marketing to better …
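The article's analysis is not reproduced here; as a small illustration of sentiment tagging, the transformers sentiment-analysis pipeline can label short texts out of the box. The example tweets below are invented for illustration (not from the article's dataset), and the pipeline's default model only outputs POSITIVE/NEGATIVE labels.

```python
# Minimal sketch: tag short texts with the default sentiment-analysis pipeline.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
tweets = [
    "Rescue teams are doing an incredible job.",
    "The lack of coordination is making everything worse.",
]
for tweet, result in zip(tweets, classifier(tweets)):
    print(f"{result['label']:8s} ({result['score']:.2f})  {tweet}")
```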
Cracking the Code of Large Language Models: What Databricks Taught Me
Author(s): Anand Taralika Originally published on Towards AI. Learn to build your own end-to-end, production-ready LLM workflows. Photo by Brett Jordan on Unsplash. In a world increasingly shaped by artificial intelligence, Large Language Models (LLMs) have emerged as the crown jewels of …
Exploring Large Language Models - Part 3
Author(s): Alex Punnen Originally published on Towards AI. Fine-tuning, model quantisation, low-rank adapters, instruct tuning, and using LLMs to generate training data. This article is written primarily for self-study, so it goes both broad and deep. Feel free to skip certain …
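The article's code is not included in this teaser; as an illustration of one of the listed topics, wrapping a causal LM with low-rank adapters via the PEFT library might look like the sketch below. The base model, rank, and target modules are placeholders (target modules are model-specific; "c_attn" matches GPT-2's attention projection).

```python
# Minimal sketch: add LoRA adapters so only small low-rank matrices are trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor applied to the adapter output
    lora_dropout=0.05,
    target_modules=["c_attn"],  # which layers receive adapters (model-specific)
)
lora_model = get_peft_model(model, lora_config)
lora_model.print_trainable_parameters()  # adapters only; base weights stay frozen
```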