
How Do LLMs Know When to Stop Generating?

Last Updated on May 12, 2024 by Editorial Team

Author(s): Louis-François Bouchard

Originally published on Towards AI.

Understand how LLMs like GPT-4 decide when they have answered your question

Originally published on louisbouchard.ai. Read my posts two days early on my blog!

A few days ago, I had a random thought: How does ChatGPT decide when it should stop answering? How does it know it has given a good enough answer? How does it stop talking?

Two scenarios can make the model stop generating: emitting an EOS (end-of-sequence) token such as <|endoftext|>, and hitting the maximum token length. We will learn about both, but we must first take a small detour to learn more about tokens and GPT’s generation process…
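To make these two stopping conditions concrete, here is a minimal sketch of a generation loop. The "model" below is a hypothetical stand-in that always produces the same short reply; a real LLM would sample the next token from a probability distribution, but the two exit conditions work the same way.

```python
# Minimal sketch of the two stopping conditions. `toy_model` is a
# hypothetical stand-in for a real LLM, not an actual API.
EOS_TOKEN = "<|endoftext|>"

def toy_model(tokens):
    # Pretend model: emits a fixed reply token by token, then EOS forever.
    reply = ["Hello", ",", " world", EOS_TOKEN]
    return reply[len(tokens)] if len(tokens) < len(reply) else EOS_TOKEN

def generate(model, max_tokens):
    tokens = []
    while len(tokens) < max_tokens:      # stop condition 2: max token length
        next_token = model(tokens)
        if next_token == EOS_TOKEN:      # stop condition 1: EOS token
            break
        tokens.append(next_token)
    return tokens

print(generate(toy_model, max_tokens=10))  # stops because the model emits EOS
print(generate(toy_model, max_tokens=2))   # stops because it hits the length cap
```

With a generous budget the model ends its own answer by producing EOS; with a tight budget the loop cuts it off mid-reply, which is exactly what happens when an API response is truncated by a token limit.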

LLMs don’t see words. They have never seen a word. What they see are called tokens. In the context of LLMs like GPT-4, tokens are not strictly whole words; they can also be parts of words, common subwords, punctuation marks, or even parts of images like pixels. For example, the word “unbelievable” might be split into tokens like “un”, “believ”, and “able.” Words are broken down into familiar components based on how often those components appear in the training data, a step we call the tokenization process.
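The splitting step can be illustrated with a toy greedy tokenizer. The vocabulary below is made up for this example; real tokenizers like GPT’s byte-pair encoding learn their vocabulary from frequency statistics over the training data rather than having it handwritten.

```python
# Toy greedy longest-match subword tokenizer with a hypothetical,
# handwritten vocabulary. Real BPE tokenizers learn merges from data.
VOCAB = {"un", "believ", "able", "a", "b", "e", "i", "l", "n", "u", "v"}

def tokenize(word):
    tokens = []
    i = 0
    while i < len(word):
        # Take the longest vocabulary entry that matches at position i;
        # single characters act as a fallback so we never get stuck.
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
    return tokens

print(tokenize("unbelievable"))  # the example split from the text above
```

Running this on “unbelievable” reproduces the split from the paragraph above: “un”, “believ”, “able”.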

But we still have words, and models don’t know words… We need to transform them into numbers so that the model can perform math operations on them, as it has no clue what a word is… Read the full blog for free on Medium.
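That transformation is just a lookup: every token in the vocabulary is assigned an integer ID, and the model only ever operates on those numbers. The IDs below are invented for illustration; a real tokenizer assigns its own.

```python
# Hypothetical token-to-ID mapping: each vocabulary entry gets an integer
# index, and the model only ever sees these numbers (the IDs are made up).
vocab = {"un": 403, "believ": 9703, "able": 481}

def encode(tokens):
    return [vocab[t] for t in tokens]

print(encode(["un", "believ", "able"]))  # a list of integers, not words
```

These integer IDs are then mapped to embedding vectors, which is where the model’s actual math begins.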

