How Do LLMs Know When to Stop Generating?
Author(s): Louis-François Bouchard
Originally published on Towards AI.
Understand how LLMs like GPT-4 decide when they have answered your question
A few days ago, I had a random thought: How does ChatGPT decide when it should stop answering? How does it know it has given a good enough answer? How does it stop talking?
Two scenarios can make the model stop generating: "EOS tokens" (<|endoftext|>) and "maximum token length." We will learn about both, but first we must take a small detour to learn more about tokens and GPT's generation process…
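To make that special token concrete before the detour, here is a quick peek at it using OpenAI's tiktoken library (an illustrative assumption on my part; cl100k_base is the public encoding used for GPT-4-class models):

```python
import tiktoken

# cl100k_base is the public encoding used by GPT-4 and GPT-3.5-turbo
enc = tiktoken.get_encoding("cl100k_base")

# The end-of-text special token and its integer ID in this vocabulary
print(enc.eot_token)                # the ID of the EOS token
print(enc.decode([enc.eot_token]))  # -> <|endoftext|>
```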
LLMs don't see words. They have never seen a word. What they see are called tokens. In the context of LLMs like GPT-4, tokens are not strictly whole words; they can also be parts of words, common subwords, punctuation, or even parts of images like pixels. For example, the word "unbelievable" might be split into tokens like "un", "believ", and "able". Words are broken down into familiar components based on how often those components appear in the training data; this is called the tokenization process.
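As a rough illustration with tiktoken (the exact split of "unbelievable" depends on the tokenizer's learned vocabulary and may differ from the un/believ/able example above):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4's tokenizer

ids = enc.encode("unbelievable")
pieces = [enc.decode([i]) for i in ids]
print(ids)     # integer token IDs, the "numbers" the model actually computes on
print(pieces)  # the subword pieces those IDs stand for
```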
But we still have words, and models don't know words… We need to transform them into numbers so that the model can do math operations on them, as it has no clue what a word is…
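Putting the pieces together, here is a minimal sketch of a generation loop that stops for either of the two reasons above. GPT-4's weights are not public, so this sketch assumes the small open GPT-2 model via Hugging Face transformers, which follows the same next-token-prediction scheme and uses the same <|endoftext|> EOS token:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The meaning of life is", return_tensors="pt").input_ids
max_new_tokens = 50  # stopping condition 2: maximum token length

for _ in range(max_new_tokens):
    with torch.no_grad():
        logits = model(ids).logits          # a score for every vocabulary token
    next_id = logits[0, -1].argmax()        # greedy: pick the most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    if next_id.item() == tokenizer.eos_token_id:
        break                               # stopping condition 1: EOS token emitted

print(tokenizer.decode(ids[0]))
```

In practice, APIs expose the second condition as a max_tokens parameter: if the limit is reached before the model emits an EOS token, the answer is simply cut off mid-sentence.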