

A Gentle Introduction To Large Language Models

Last Updated on November 5, 2023 by Editorial Team

Author(s): Abhishek Chaudhary

Originally published on Towards AI.

Image generated using DALL-E

Hi, glad you found your way to this Gentle Introduction to Large Language Models, or LLMs. Since you're here, it's safe to assume you've been pulling your hair out trying to figure out the three-letter acronym that's taken over your newsfeeds for the last year. Fret no more! That's exactly what this blog post is for. We'll take a walk through the fantastic landscape of large language models and, in the process, discuss some of the core concepts and how/why they work. We'll start from neural networks, brush up on deep learning, figure out what the heck NLP is, and eventually, after a not-so-painstaking process, learn how large language models work. Let's get started.

What is artificial intelligence?

Simply put, intelligence refers to the capacity to think, learn, comprehend and solve problems. It enables humans and certain animals to make sense of their surroundings, adapt to situations and make decisions based on their experiences and knowledge.

Intelligence encompasses the ability to reason, to learn from errors, and to use information effectively to overcome challenges.

Essentially, it is the capability that allows us to navigate the world around us and engage with it successfully. Now let's delve into the concept of artificial intelligence (AI).

AI can be seen as an assistant that is skilled at processing vast amounts of information while making intelligent choices based on that data. Think of it as having a brain by your side that can help with tasks like suggesting movies or songs you might enjoy, aiding doctors in analyzing medical images accurately or even driving vehicles autonomously without human intervention.

What makes AI fascinating is its reliance on algorithms (step-by-step instructions) and data for its functioning. Consider it akin to a computer learning from its mistakes, progressively improving at its assigned tasks through practice. This means that AI can be explained and comprehended by anyone willing to explore its workings.

What is Machine Learning?

Now that we have an understanding of AI, we naturally wonder how machines actually acquire knowledge and comprehension. This is where machine learning becomes relevant.

Image generated using DALL-E

Imagine you have a computer. You want it to do something clever like identifying whether a picture shows a cat or a dog. One way to tackle this is by giving the computer instructions to search for features like fur, ears, tails and so on. However, this approach can get extremely complicated.

Machine Learning takes a different route. It’s akin to teaching the computer through examples. You would expose it to pictures of cats and dogs and gradually, it would start comprehending what distinguishes a cat from a dog on its own. It learns by spotting patterns and similarities within the images.
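To make this concrete, here is a minimal sketch of learning-by-example: a one-nearest-neighbor classifier over two invented features. The feature names, numbers and labels are all made up for illustration; a real image classifier would learn its features from pixels.

```python
# A minimal sketch of "learning from examples": a 1-nearest-neighbor
# classifier that tells cats from dogs using two made-up features.
# Each example: (ear_pointiness, snout_length) -> label
training_data = [
    ((0.9, 0.2), "cat"),
    ((0.8, 0.3), "cat"),
    ((0.3, 0.8), "dog"),
    ((0.2, 0.9), "dog"),
]

def classify(features):
    """Predict the label of the closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    closest = min(training_data, key=lambda ex: distance(ex[0], features))
    return closest[1]

print(classify((0.85, 0.25)))  # close to the cat examples -> "cat"
print(classify((0.25, 0.85)))  # close to the dog examples -> "dog"
```

Notice there are no hand-written rules about fur or ears: the "knowledge" lives entirely in the examples, which is the essence of the machine learning route described above.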

In essence, Machine Learning forms the learning aspect of AI, where computers learn from data to perform tasks. AI, in turn, encompasses broader capabilities such as reasoning, problem solving and language comprehension, all of which can be greatly enhanced through Machine Learning.

What is a Machine Learning model?

Once we grasp the concepts of AI and ML, it becomes essential to understand the significance of Large Language Models (LLMs). To comprehend LLMs, we must first grasp the meaning of a “model” (which makes up one-third of the term). Think of it as the mind or intelligence behind a machine that learns from data examples, rules, and patterns. For instance, it can learn distinguishing features such as cats having whiskers or dogs having longer legs. Utilizing these learned patterns, when presented with an image, it can make informed decisions or predictions.

To delve deeper into our analogy, envision Large Language Models as models with the ability to perform certain operations involving text. These models are trained on large amounts of text data from the internet and possess the capability to generate text content comparable to that produced by humans — sometimes even surpassing human quality. For example, models like GPT-4 have demonstrated their prowess in crafting poetry, answering questions intelligently, and even generating computer code. They truly are wizards of language mastery!

What are neural networks?

In order for a model to generate predictions, it needs to gain an understanding of the patterns observed in the data. There are several approaches to achieving this, and one such method is the use of Neural Networks.

Neural Networks play an important role in machine learning and artificial intelligence, enabling us to tackle complex tasks such as image recognition, language comprehension and making predictions.

They consist of layers of units known as neurons that collaborate in processing information. Imagine your brain as a network comprised of neurons. When you encounter an image of a cat, for instance, neurons within your brain activate to identify it based on distinguishing features like fur, ears and whiskers. Neural networks operate similarly, employing neurons across layers to recognize patterns within data. However, they are significantly simpler than the workings of the human brain.

Neural Network

Let's dive into the world of neural networks using a relatable analogy: baking a cake. Imagine you’re on a mission to create a network that can accurately predict whether a cake will turn out delicious or not, based on its ingredients and baking process. Here’s how the different concepts in neural networks align with this baking analogy:

  • Think of your ingredients and recipe as the input data, similar to the raw materials you gather for your neural network.
  • The entire process of baking symbolizes the structure of a network, composed of interconnected layers that work together.
  • Every step in the process represents a neuron functioning with its activation function. This is akin to adjusting your recipe based on factors like temperature and mixing time.
  • Just as tweaking ingredient quantities can impact the flavor of your cake, weights in a network determine how strongly neurons are connected.
  • Ultimately, your goal is to produce a cake mirroring how a neural network strives for accurate predictions or classifications.
  • If your cake falls short of expectations, you refine your recipe, just like backpropagation in neural networks.
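The analogy above can be sketched in a few lines of code: a single artificial neuron with hand-picked inputs and weights, plus one crude weight adjustment standing in for backpropagation. All the numbers here are invented for illustration.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by a sigmoid activation."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

inputs = [0.7, 0.5]    # e.g. mixing time and oven temperature, scaled 0-1
weights = [0.4, 0.6]   # how much each factor matters
bias = -0.3

prediction = neuron(inputs, weights, bias)  # "how good will the cake be?"

# If the cake "should" have scored 1.0, nudge each weight to reduce the
# error -- a one-step, hand-rolled stand-in for backpropagation.
target, lr = 1.0, 0.5
error = target - prediction
weights = [w + lr * error * i for w, i in zip(weights, inputs)]

print(prediction, neuron(inputs, weights, bias))  # the second score is higher
```

After the nudge, the same recipe scores closer to the target, which is exactly the "refine your recipe" step from the analogy, repeated many times during training.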

What’s Deep Learning?

In the context of the baking analogy, the main difference between deep learning and a normal neural network lies in the depth of the network, which refers to the number of hidden layers. Let’s clarify the distinction:

Deep Neural Network

A regular neural network, sometimes called a shallow neural network, usually consists of one or a few hidden layers positioned between the input and output layers. In such a network, each hidden layer can be seen as representing various stages or aspects of the baking process. For instance, a hidden layer might take into account factors such as mixing time, temperature and ingredient quantities. By combining these features, the network is able to make predictions about the quality of a cake.

Deep neural networks, by contrast, are characterized by having multiple hidden layers stacked on top of each other. These networks capture more complex and abstract features with each additional hidden layer. For example, while the first hidden layer might focus on features like mixing time and temperature, subsequent layers can delve into more intricate aspects such as ingredient interactions. This hierarchical representation capability allows the network to grasp deeper patterns and relationships within data.

What are language models?

Imagine you’re playing a word game where the aim is to complete a sentence. You come across a sentence like “The sun is shining and the birds are singing” and you have to guess the next word.

A language model, drawing on its knowledge of words from countless sentences, would make an informed guess such as “bright” or “beautiful”, since those words often follow phrases about pleasant weather.

It doesn’t end there. Language models assign probabilities to each word that could come next. For instance, it might assign a higher probability to “bright” and a lower probability to “elephant” because “elephant” isn’t typically used after discussing the weather. So, language models do their best to predict based on these probabilities.
Language models can be seen as word wizards that rely on patterns from past instances to determine the most likely next word in a sentence. They aren’t flawless, but are quite proficient at aiding tasks like providing autocomplete suggestions on your phone or predicting the following word as you type out a message.
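A toy version of this idea: count which word follows which in a tiny invented corpus, then turn the counts into next-word probabilities. This is a bigram model, vastly simpler than a real language model, but the "probabilities over next words" idea is the same.

```python
# A minimal sketch of a language model: count which word follows which
# in a tiny made-up corpus, then convert counts into probabilities.
from collections import Counter, defaultdict

corpus = (
    "the sun is shining and the birds are singing . "
    "the sun is bright . the birds are chirping ."
).split()

follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each word observed right after `word`."""
    counts = follow[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("are"))  # {'singing': 0.5, 'chirping': 0.5}
print(next_word_probs("is"))   # {'shining': 0.5, 'bright': 0.5}
```

Note that “elephant” gets probability zero here simply because it never followed “are” in the corpus — the same reason a real model ranks it low after a sentence about the weather.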

What are Encoders and Decoders?


Encoders within a language model can be likened to listeners paying close attention to the words you speak. They analyze the preceding words of a sentence, such as “The sun is shining and the birds are”, carefully considering their meanings and relationships to establish a contextual understanding. This summarized context is then passed on to the “decoders.”


Decoders serve as word suggesters. They receive the information from the encoders, which may indicate that the sentence pertains to birds, and generate a list of probable next words. For example, they might suggest “singing” or “chirping”, since these words are commonly associated with birds and pleasant weather. Each suggestion from the decoders is assigned a probability, with the most likely word receiving the highest probability.

In our word game analogy, encoders grasp context from words while decoders utilize this context to make educated guesses about the next word by considering probabilities associated with various options. It’s akin to engaging in conversation with a partner (the encoder) who listens intently and an expert advisor (the decoder) who provides optimal word choices based on what they have heard. They work together to assist you in constructing sentences that make sense.

What is Context in Encoder-Decoder setup?

In a setup where there is an encoder and a decoder, the term “context” refers to the details about the input sequence (usually a series of words or tokens) that are stored and utilized by the decoder to create the output sequence.

The role of the encoder is to capture and encode this context from the input, while the decoder’s task is to make use of this context in order to generate an output.

Encoder-Decoder Architecture

Here’s an explanation of how context works in an encoder-decoder setup:

Encoding Context (Encoder)

The encoder takes in the input sequence and processes it step by step, typically transforming each element (like words in a sentence) into a fixed-length representation.

This representation, known as the context vector, summarizes information from the entire input sequence. It captures relationships and dependencies between elements of the input.

Utilizing Context (Decoder)

The decoder receives the context vector from the encoder as its initial state. This condensed form of information contains details about what was present in the input sequence.

Using this context, the decoder generates the output sequence one element at a time. It may also consider previously generated elements from within that output sequence.

By utilizing this context, the decoder can make informed decisions about what should come next in order to produce an output that is coherent and relevant within its given context.
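Here is a deliberately crude sketch of that flow. The vocabulary and the hand-written association scores are entirely made up; in a real model both the encoding and the scoring are learned. The point is only the shape of the pipeline: the encoder compresses the input into one fixed-size vector, and the decoder scores candidate outputs against that vector.

```python
# Toy encoder-decoder "context" passing. The encoder squeezes the input
# into a fixed-size summary (a crude bag-of-words vector over a tiny
# vocabulary); the decoder reads that summary to score candidate words.
VOCAB = ["sun", "shining", "birds", "singing", "chirping", "elephant"]

def encode(sentence):
    """Return a fixed-length 'context vector': word counts over VOCAB."""
    words = sentence.lower().split()
    return [words.count(v) for v in VOCAB]

# Hypothetical association scores: how strongly each vocab word in the
# context suggests each candidate output word (invented numbers).
ASSOCIATION = {
    "singing":  [0.2, 0.3, 0.9, 0.0, 0.0, 0.0],
    "chirping": [0.1, 0.2, 0.8, 0.0, 0.0, 0.0],
    "elephant": [0.0, 0.0, 0.1, 0.0, 0.0, 0.0],
}

def decode(context):
    """Score candidates against the context vector, best-first."""
    scores = {
        cand: sum(c * a for c, a in zip(context, weights))
        for cand, weights in ASSOCIATION.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

ctx = encode("The sun is shining and the birds are")
print(decode(ctx))  # 'singing' and 'chirping' outrank 'elephant'
```

The decoder never sees the original sentence, only `ctx` — which is exactly the role the context vector plays in the real architecture.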

Generative Models

Generative models are closely related to “Masked” language models. Now, what exactly do we mean by “Masked” language models?

Masked Language Models (MLMs) are incredibly skilled at playing a word guessing game. Let me explain how they work using some examples.

Imagine you have a sentence with a missing word, like “The [MASK] is chasing the ball.” The challenge is to figure out the most suitable word to fill in the blank. MLMs are experts at solving these word puzzles.

What sets them apart is that they don’t rely on blind guesses. Instead, they carefully analyze all the words preceding and following the blank in the sentence.

These models have undergone training using abundant internet text. They have observed how words come together in sentence structures. As a result, they possess predictive abilities when it comes to completing sentences with appropriate words based on what they’ve learned.
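A toy illustration of the masked-word game, using simple counts over an invented corpus instead of a trained model: pick fillers that have appeared between the same left and right neighbors. Real MLMs use the entire sentence on both sides, not just the adjacent words, but the "look both ways" idea is the same.

```python
# A sketch of the masked-word guessing game: find words that have been
# seen between the same left and right neighbors in a tiny corpus.
from collections import Counter

corpus = (
    "the dog is chasing the ball . "
    "the cat is chasing the mouse . "
    "the kid is kicking the ball ."
).split()

def fill_mask(left, right):
    """Count words observed with `left` before and `right` after them."""
    candidates = Counter(
        corpus[i]
        for i in range(1, len(corpus) - 1)
        if corpus[i - 1] == left and corpus[i + 1] == right
    )
    return candidates.most_common()

# "The [MASK] is ..." -> which words appeared between "the" and "is"?
print(fill_mask("the", "is"))
```

With a corpus of three sentences every candidate is equally likely; train on a large slice of the internet and the counts become sharp enough to pick “dog” over “refrigerator” with confidence.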

Well-known examples include BERT and GPT. BERT is a masked language model that fills in blanks using context from both sides, while GPT is a generative model remarkably proficient at predicting the next word in a sentence. GPT stands for Generative Pre-trained Transformer. We previously discussed the generative and pretrained aspects; now, let's delve into what “Transformer” means.

What is a Transformer?

Transformers are a type of deep learning model introduced by Vaswani et al. in their 2017 paper “Attention Is All You Need.” They are particularly effective in processing sequential data, such as text, due to their ability to capture long-range dependencies efficiently.

The transformer architecture is built upon two components: the encoder and the decoder. Both of these parts are composed of stacked layers.

  • Encoder: The encoder takes an input sequence, such as a sentence, and processes it token by token. Each token is initially transformed into a vector in a high-dimensional space. Then, in each layer of the encoder, self-attention mechanisms come into play, allowing the model to understand the importance of each token relative to all the other tokens in the input sequence. By combining these weighted representations, the model effectively captures contextual information. Furthermore, feedforward neural networks are used in each layer of the encoder to refine these representations.
  • Decoder: Similar to the encoder, the decoder also consists of layers but includes an additional attention mechanism that focuses on the output from the encoder. During decoding, the model generates an output sequence step by step. At each step, it utilizes self-attention to consider previously generated tokens and pays attention to the output from the encoder, which encompasses information from the input sequence. This combination of self-attention and encoder-decoder attention enables the decoder to produce tokens grounded in both its own output so far and the input context.


Self-attention plays an important role in transformer models, making them incredibly powerful in understanding the connections between words within a sequence.

It allows the model to grasp the relationships between words or elements in a text sequence, enabling it to assign importance to each word based on its relevance to every other word in the sequence. This process generates representations that are highly meaningful.

The significance of self-attention in transformer models can be attributed to key factors:

  1. Grasping Context: Self-attention empowers transformers to capture context and understand how words relate to one another within a sequence. Instead of just considering neighboring words, self-attention takes into account the entire sequence, which is essential for comprehending natural language context.
  2. Learning Long-Range Dependencies: Self-attention is instrumental in helping transformers learn dependencies that span long distances within data. This capability proves crucial for tasks such as language translation or text generation, where related phrases or words might be far apart within a sentence.
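For readers who want to see the mechanics, here is a minimal scaled dot-product self-attention in plain Python over toy 2-dimensional token vectors. Real models use learned query/key/value projection matrices and much larger dimensions; here the same toy vectors play all three roles.

```python
import math

def softmax(xs):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(queries, keys, values):
    """Scaled dot-product attention: each query mixes all the values."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Attention-weighted sum of the value vectors.
        outputs.append([
            sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))
        ])
    return outputs

# Three toy token vectors; the first two are similar, the third is not.
tokens = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
out = self_attention(tokens, tokens, tokens)
print(out)  # the first two tokens mix mostly with each other
```

Because attention weights come from pairwise similarity over the whole sequence, token 1 can attend strongly to token 50 just as easily as to its neighbor — which is exactly the long-range dependency property described above.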

I suggest checking out the blog written by @JayAlammar. It provides great insight into Transformers and self-attention, which are highly versatile and applicable to various natural language processing tasks.

What’s going on with ChatGPT?

ChatGPT combines concepts from the world of transformers, masked models, encoder-decoders and more. What makes ChatGPT powerful is its blend of transformer-based architecture, extensive pretraining on large datasets, fine-tuning for specific tasks, and its ability to generate coherent, context-aware and adaptive responses.

Here are a few reasons why ChatGPT is so effective:

  • Transformer Architecture: ChatGPT is built on the transformer architecture, which excels at handling sequential data and is particularly well suited for understanding and generating human language.
  • Extensive Pretraining: Before being used for tasks like chatbot interactions, ChatGPT undergoes training on a vast amount of text data sourced from the internet. This pretraining phase equips the model with an understanding of language, grammar rules and general knowledge.
  • Self-Attention: ChatGPT utilizes self-attention mechanisms that allow each token (a word or part of a word) to weigh the other tokens available in its context. This helps the model capture dependencies between words and be more contextually aware.
  • Fine-Tuning: Following the initial pretraining phase, ChatGPT goes through fine-tuning, where it refines its abilities for specific tasks such as chatbot interactions.

Additionally, there is another technique called Reinforcement Learning from Human Feedback (RLHF) that contributes to the uniqueness of ChatGPT.

Reinforcement Learning from Human Feedback (RLHF)

Reinforcement Learning from Human Feedback (RLHF) is a technique used to enhance the performance of ChatGPT by combining data generated by humans and reinforcement learning. Here’s an overview of how RLHF works for ChatGPT:

As mentioned before, ChatGPT undergoes pretraining, where it learns from a vast amount of text data.

Next, ChatGPT goes through a process called fine-tuning. During this phase, AI trainers engage in conversations with the model and provide responses based on guidelines. These trainers simulate user interactions. The resulting dialogue dataset, along with comparison data where trainers rank model responses, serves as a reward system.

The model is then trained to maximize its rewards based on this reward system (known as Reinforcement Learning). Essentially, it learns to generate responses that are more likely to align with human preferences and fit into the given context.

Through rounds of fine-tuning and reinforcement learning, the model gradually improves its performance over time. Each iteration helps it generate more accurate and user-friendly responses.
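The loop above can be caricatured in a few lines: a "policy" holds a preference score for each canned response, a stand-in human reward ranks them, and each round nudges the scores toward the responses the reward prefers. Everything here — the responses, the reward values, and the update rule — is invented for illustration; real RLHF trains a neural reward model and updates billions of parameters.

```python
import math

# The "policy": a preference score per canned response.
responses = {
    "Sure! Here is a clear, step-by-step answer.": 0.0,
    "idk, google it.": 0.0,
}

# Stand-in for human feedback: higher means humans preferred it.
human_reward = {
    "Sure! Here is a clear, step-by-step answer.": 1.0,
    "idk, google it.": -1.0,
}

def probs(scores):
    """Softmax over scores: how likely the policy is to say each response."""
    exps = {r: math.exp(s) for r, s in scores.items()}
    total = sum(exps.values())
    return {r: e / total for r, e in exps.items()}

lr = 0.5
for _ in range(10):  # a few rounds of "fine-tuning"
    p = probs(responses)
    for r in responses:
        # Nudge each score in proportion to reward and current probability.
        responses[r] += lr * p[r] * human_reward[r]

final = probs(responses)
best = max(final, key=final.get)
print(best)  # the polite, helpful response wins
```

After a few rounds, the probability mass shifts decisively toward the response humans rated highly — the same direction of travel, if not the same machinery, as ChatGPT's RLHF stage.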


In this post, we have delved into the domain of artificial intelligence, specifically focusing on machine learning and its advanced subfield, deep learning. We focused on language models, which serve as predictive algorithms for determining subsequent words in sentences based on contextual cues. We went over Transformers and Self-Attention, while also briefly covering RLHF.

I hope I was able to provide an easy-to-follow guide to you, which can help you navigate the world of LLMs.
