Learning Transformers: Code, Concepts, and Impact
Last Updated on January 3, 2025 by Editorial Team
Author(s): Aditya Kumar Manethia
Originally published on Towards AI.
Introduction
The paper "Attention Is All You Need" (Vaswani et al., 2017) introduced the Transformer architecture, a model that revolutionized NLP by completely discarding the standard Recurrent Neural Network (RNN) components. Instead, it leveraged "attention" to let the model decide how to focus on specific parts of an input (like words in a sentence) when generating an output.
Prior to the Transformer, RNN-based models like LSTMs dominated NLP. These models processed text one token at a time and struggled to capture long-range dependencies effectively. Transformers, on the other hand, parallelize the data flow and rely on attention to figure out the important relationships between tokens. This shift had massive implications, leading to huge leaps in areas like machine translation, text generation (like GPT), and even computer vision tasks.
In this blog, we'll walk through a code implementation inspired by the original Transformer model and examine each component.
Before diving into the implementation of the Transformer model, it's highly recommended to have a basic understanding of deep learning concepts. Familiarity with topics such as neural networks, embeddings, activation functions, and optimization techniques will make it much easier to follow the code and understand how the Transformer works. If you're new to these concepts, consider exploring introductory resources on deep learning frameworks, as well as foundational topics like backpropagation.
Additional Source for Deep Learning:
- Understanding backpropagation β Andrej Karpathy
- Neural Network- Zero to hero by Andrej Karpathy
- PyTorch Documentation
Importing Libraries
We will use PyTorch as our deep-learning framework. PyTorch provides all the essentials for building and training neural networks:
import torch
import torch.nn as nn
import math
These imports bring in:
- torch: The main PyTorch library.
- torch.nn: Contains neural network-related classes and functions, like nn.Linear, nn.Dropout, etc.
- math: For common math operations.
Let's start with our Transformer Architecture
Input Embedding
What is an Embedding?
An embedding is a dense vector representation of a word or token. Instead of representing words as one-hot encoded vectors, embeddings map each word to a lower-dimensional continuous vector space. These embeddings capture semantic relationships between words. For example, the word embeddings for "Man" and "Woman" might be closer in vector space than those for "Man" and "Dog."
Here's the code for the embedding layer:
class InputEmbedding(nn.Module):
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.d_model = d_model
        self.vocab_size = vocab_size
        self.embedding = nn.Embedding(vocab_size, d_model)  # lookup table: token id -> d_model vector

    def forward(self, x):
        # Scale the embeddings by sqrt(d_model), as in the paper
        return self.embedding(x) * math.sqrt(self.d_model)
Explanation:
- nn.Embedding: Converts word indices into dense vectors of size d_model.
- Scaling by sqrt(d_model): This is used in the paper to stabilize gradients during training.
Example:
If we have a vocabulary size of 6 (e.g., tokens like ["Bye", "Hello", etc.]) and d_model is 512, the embedding layer maps each token to a 512-dimensional vector.
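To make this concrete, here is a quick sanity check of the embedding layer. The toy vocabulary size, batch, and token ids below are made up for illustration, and the snippet assumes the InputEmbedding class defined above is in scope:

import torch

# Hypothetical toy setup: vocabulary of 6 tokens, model dimension 512
embed = InputEmbedding(d_model=512, vocab_size=6)

# A batch of 2 sequences, each containing 4 token ids from the toy vocabulary
tokens = torch.tensor([[0, 3, 2, 5],
                       [1, 1, 4, 0]])

out = embed(tokens)
print(out.shape)  # torch.Size([2, 4, 512]) -- one 512-dimensional vector per token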
Positional Encoding
What is Positional Encoding?
Transformers process input sequences in parallel, so they lack an inherent notion of order (unlike RNNs, which process tokens sequentially). Positional encoding is added to the embeddings to give the model information about the relative or absolute position of tokens in a sequence.
From the paper, the encodings use sine and cosine functions of different frequencies:
PE(pos, 2i) = sin(pos / 10000^(2i / d_model))
PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
Here's the code for positional encoding:
class PositionalEncoding(nn.Module):
    def __init__(self, d_model: int, seq_len: int, dropout: float) -> None:
        super().__init__()
        self.d_model = d_model
        self.seq_len = seq_len
        self.dropout = nn.Dropout(dropout)

        # Matrix of shape (seq_len, d_model) holding the encodings
        pe = torch.zeros(seq_len, d_model)
        position = torch.arange(0, seq_len).unsqueeze(1).float()
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
        pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
        pe = pe.unsqueeze(0)  # add batch dimension: (1, seq_len, d_model)
        self.register_buffer('pe', pe)

    def forward(self, x):
        # Add the (non-trainable) positional encodings to the input embeddings
        x = x + (self.pe[:, :x.shape[1], :]).requires_grad_(False)
        return self.dropout(x)
Explanation:
- Sinusoidal Functions: The encoding alternates between sine and cosine functions for even and odd dimensions.
- Why Sinusoidal?: These functions allow the model to generalize to sequences longer than those seen during training.
- register_buffer: Ensures the positional encoding is saved with the model but not updated during training.
We only need to compute the positional encodings once and can then reuse them for every sentence.
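As a small illustration (the batch size and sequence length are arbitrary, and the snippet assumes the classes defined above), the positional encoding leaves the tensor shape unchanged; it only adds position information on top of the embeddings:

import torch

embed = InputEmbedding(d_model=512, vocab_size=1000)
pos_enc = PositionalEncoding(d_model=512, seq_len=50, dropout=0.1)

tokens = torch.randint(0, 1000, (2, 10))  # batch of 2 sequences, 10 tokens each
x = embed(tokens)                         # shape (2, 10, 512)
x = pos_enc(x)                            # still (2, 10, 512), now position-aware
print(x.shape)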
Layer Normalization
What is Layer Normalization?
Layer normalization is a technique to stabilize and speed up training by normalizing the inputs across the feature dimension. It ensures that the mean is 0 and the variance is 1 for each input vector.
We also introduce two learnable parameters, gamma (multiplicative) and beta (additive), which scale and shift the normalized values. The network learns to tune these two parameters to reintroduce variation when required. In the code below they are called alpha and bias.
Here's the code:
class LayerNormalization(nn.Module):
    def __init__(self, eps: float = 1e-6) -> None:
        super().__init__()
        self.eps = eps
        self.alpha = nn.Parameter(torch.ones(1))  # multiplicative (gamma)
        self.bias = nn.Parameter(torch.zeros(1))  # additive (beta)

    def forward(self, x):
        # Normalize across the feature (last) dimension
        mean = x.mean(dim=-1, keepdim=True)
        std = x.std(dim=-1, keepdim=True)
        return self.alpha * (x - mean) / (std + self.eps) + self.bias
Explanation:
- alpha and bias: Learnable parameters that scale and shift the normalized output.
- eps: A small value added to the denominator to prevent division by zero.
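A quick way to convince yourself it works (the tensor values below are arbitrary): right after initialization, each vector along the last dimension should come out with roughly zero mean and unit standard deviation, before alpha and bias learn to move it again.

import torch

norm = LayerNormalization()
x = torch.randn(2, 5, 512) * 3.0 + 7.0  # arbitrary scale and shift
y = norm(x)
print(y.mean(dim=-1)[0, 0].item())  # close to 0
print(y.std(dim=-1)[0, 0].item())   # close to 1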
Feed-Forward Block
What is a Feed-Forward Block?
It is a simple two-layer neural network applied to each position in the sequence independently. It helps the model learn complex transformations.
Here's the code:
class FeedForwardBlock(nn.Module):
    def __init__(self, d_model: int, d_ff: int, dropout: float) -> None:
        super().__init__()
        self.linear_1 = nn.Linear(d_model, d_ff)  # d_model -> d_ff
        self.dropout = nn.Dropout(dropout)
        self.linear_2 = nn.Linear(d_ff, d_model)  # d_ff -> d_model

    def forward(self, x):
        return self.linear_2(self.dropout(torch.relu(self.linear_1(x))))
Explanation:
- First Linear Layer: Expands the input dimension from d_model to d_ff (512 → 2048).
- ReLU Activation: Adds non-linearity.
- Second Linear Layer: Projects back to d_model.
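Because the same two linear layers are applied at every position, the block maps a (batch, seq_len, d_model) tensor back to the same shape. A minimal check with toy shapes, assuming the class above:

import torch

ffn = FeedForwardBlock(d_model=512, d_ff=2048, dropout=0.1)
x = torch.randn(2, 10, 512)
print(ffn(x).shape)  # torch.Size([2, 10, 512]) -- expanded to 2048 internally, then projected back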
Multi-Head Attention
What is Attention?
Attention allows the model to focus on relevant parts of the input when making predictions. It computes a weighted sum of values (V), where the weights are determined by the similarity between queries (Q) and keys (K).
What is Multi-Head Attention?
Instead of computing a single attention score, multi-head attention splits the input into multiple heads (h) to learn different types of relationships.
- Q(Query): Represents the current word or token.
- K(Key): Represents all the words or tokens in the sequence.
- V(Value): Represents the information associated with each word or token.
- Softmax: Converts the similarity scores into probabilities, so they sum to 1.
- Scaling by sqrt(d_k): Prevents the dot product from becoming too large, which can destabilize the softmax function.
Each head computes its own attention, and the results are concatenated and projected back to the original dimension.
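For a single head, the computation is Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Here is a bare-bones sketch of that formula on toy tensors (shapes chosen arbitrarily), before we wrap it into the full multi-head block:

import math
import torch

d_k = 64
Q = torch.randn(1, 10, d_k)  # (batch, seq_len, d_k)
K = torch.randn(1, 10, d_k)
V = torch.randn(1, 10, d_k)

scores = (Q @ K.transpose(-2, -1)) / math.sqrt(d_k)  # (1, 10, 10) similarity matrix
weights = torch.softmax(scores, dim=-1)              # each row sums to 1
output = weights @ V                                 # (1, 10, 64) weighted sum of values
print(output.shape)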
Here's the code for the Multi-Head Attention Block:
class MultiHeadAttentionBlock(nn.Module):
    def __init__(self, d_model: int, h: int, dropout: float) -> None:  # h is the number of heads
        super().__init__()
        self.d_model = d_model
        self.h = h
        # d_model must be divisible by the number of heads
        assert d_model % h == 0, "d_model must be divisible by num_heads"
        self.d_k = d_model // h
        # Projection matrices W_q, W_k, W_v, W_o
        self.w_q = nn.Linear(d_model, d_model)  # W_q
        self.w_k = nn.Linear(d_model, d_model)  # W_k
        self.w_v = nn.Linear(d_model, d_model)  # W_v
        self.w_o = nn.Linear(d_model, d_model)  # W_o
        self.dropout = nn.Dropout(dropout)

    @staticmethod
    def attention(query, key, value, d_k, mask=None, dropout=None):
        # Compute scaled dot-product attention scores
        attention_scores = (query @ key.transpose(-2, -1)) / math.sqrt(d_k)
        if mask is not None:
            # Mask out padding (or future) positions before the softmax
            attention_scores = attention_scores.masked_fill(mask == 0, -1e9)
        attention_scores = torch.softmax(attention_scores, dim=-1)
        if dropout is not None:
            attention_scores = dropout(attention_scores)
        return (attention_scores @ value), attention_scores

    def forward(self, q, k, v, mask):
        # Compute Q, K, V
        query = self.w_q(q)
        key = self.w_k(k)
        value = self.w_v(v)
        # Split into multiple heads: (batch, seq_len, d_model) -> (batch, h, seq_len, d_k)
        query = query.view(query.shape[0], query.shape[1], self.h, self.d_k).transpose(1, 2)
        key = key.view(key.shape[0], key.shape[1], self.h, self.d_k).transpose(1, 2)
        value = value.view(value.shape[0], value.shape[1], self.h, self.d_k).transpose(1, 2)
        # Compute attention
        x, self.attention_scores = MultiHeadAttentionBlock.attention(query, key, value, self.d_k, mask, self.dropout)
        # Concatenate heads: (batch, h, seq_len, d_k) -> (batch, seq_len, d_model)
        x = x.transpose(1, 2).contiguous().view(x.shape[0], -1, self.h * self.d_k)
        # Final linear projection
        return self.w_o(x)
Explanation:
- Linear Layers (w_q, w_k, w_v): These transform the input into queries, keys, and values.
- Splitting into Heads: The input is split into h heads, each with a smaller dimension (d_k = d_model / h).
- Concatenation: The outputs of all heads are concatenated and projected back to the original dimension using w_o.
Example:
If d_model = 512 and h = 8, each head has a dimension of d_k = 64. The input is split into 8 heads, each head computes attention independently, and the results are concatenated back into a 512-dimensional vector.
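Putting that together, here is a quick shape check of the block using self-attention with no mask (the input tensors are random and purely illustrative):

import torch

mha = MultiHeadAttentionBlock(d_model=512, h=8, dropout=0.1)
x = torch.randn(2, 10, 512)        # (batch, seq_len, d_model)
out = mha(x, x, x, mask=None)      # queries, keys, and values all come from x
print(out.shape)                   # torch.Size([2, 10, 512])
print(mha.attention_scores.shape)  # torch.Size([2, 8, 10, 10]) -- one score matrix per head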
Residual Connection and Layer Normalization
What is a Residual Connection?
A residual connection adds the input of a layer to its output. This helps prevent the "vanishing gradient" problem and makes deep stacks of layers easier to train.
Here's the code for the residual connection:
class ResidualConnection(nn.Module):
    def __init__(self, dropout: float) -> None:
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.norm = LayerNormalization()

    def forward(self, x, sublayer):
        # Pre-norm residual: x + Dropout(sublayer(LayerNorm(x)))
        return x + self.dropout(sublayer(self.norm(x)))
Explanation:
- x: Input to the layer.
- sublayer: A function representing the layer (e.g., the attention or feed-forward block).
- The output is the sum of the input and the sublayer's output, with dropout applied. Note that this implementation normalizes before the sublayer (a "pre-norm" arrangement), whereas the original paper applies layer normalization after the residual addition.
Encoder Block
The Encoder Block combines the components we have discussed so far: multi-head attention, the feed-forward block, residual connections, and layer normalization.
Here's the code for the Encoder block:
class EncoderBlock(nn.Module):
    def __init__(self, self_attention_block: MultiHeadAttentionBlock, feed_forward_block: FeedForwardBlock, dropout: float) -> None:
        super().__init__()
        self.self_attention_block = self_attention_block
        self.feed_forward_block = feed_forward_block
        self.residual_connections = nn.ModuleList([ResidualConnection(dropout) for _ in range(2)])

    def forward(self, x, src_mask):
        # Self-attention sublayer, then feed-forward sublayer, each wrapped in a residual connection
        x = self.residual_connections[0](x, lambda x: self.self_attention_block(x, x, x, src_mask))
        x = self.residual_connections[1](x, self.feed_forward_block)
        return x
Explanation:
- Self-Attention: The input attends to itself to capture relationships between tokens.
- Feed-Forward Block: Applies a fully connected network to each token.
- Residual Connections: Add the input back to the output of each sublayer.
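The full encoder used later (in the Transformer class and build_transformer) is simply a stack of N of these blocks followed by a final layer normalization. That wrapper class is not shown above, so here is a minimal sketch of an Encoder consistent with the rest of the code; treat the exact details as an assumption:

import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, layers: nn.ModuleList) -> None:
        super().__init__()
        self.layers = layers              # a ModuleList of EncoderBlock instances
        self.norm = LayerNormalization()  # final normalization after the stack

    def forward(self, x, mask):
        # Pass the input through each encoder block in turn
        for layer in self.layers:
            x = layer(x, mask)
        return self.norm(x)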
Decoder Block
The Decoder Block is similar to the Encoder Block but includes an additional cross-attention layer, which allows the decoder to attend to the encoder's output.
Here's the code for the decoder block:
class DecoderBlock(nn.Module):
    def __init__(self, self_attention_block: MultiHeadAttentionBlock, cross_attention_block: MultiHeadAttentionBlock, feed_forward_block: FeedForwardBlock, dropout: float) -> None:
        super().__init__()
        self.self_attention_block = self_attention_block
        self.cross_attention_block = cross_attention_block
        self.feed_forward_block = feed_forward_block
        self.residual_connections = nn.ModuleList([ResidualConnection(dropout) for _ in range(3)])

    def forward(self, x, encoder_output, src_mask, tgt_mask):
        # Masked self-attention over the target, cross-attention over the encoder output, then feed-forward
        x = self.residual_connections[0](x, lambda x: self.self_attention_block(x, x, x, tgt_mask))
        x = self.residual_connections[1](x, lambda x: self.cross_attention_block(x, encoder_output, encoder_output, src_mask))
        x = self.residual_connections[2](x, self.feed_forward_block)
        return x
Explanation:
- Self-Attention: The decoder attends to its own previously generated tokens, with the target mask preventing it from looking ahead.
- Cross-Attention: The decoder attends to the encoder's output.
- Feed-Forward Block: Applies a fully connected network to each token.
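As with the encoder, the full decoder referenced later is a stack of N DecoderBlocks plus a final layer normalization. That wrapper is also not shown above, so here is a minimal sketch under the same assumption:

import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, layers: nn.ModuleList) -> None:
        super().__init__()
        self.layers = layers              # a ModuleList of DecoderBlock instances
        self.norm = LayerNormalization()  # final normalization after the stack

    def forward(self, x, encoder_output, src_mask, tgt_mask):
        # Each block receives the decoder state, the encoder output, and both masks
        for layer in self.layers:
            x = layer(x, encoder_output, src_mask, tgt_mask)
        return self.norm(x)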
Final Linear Layer
This layer projects the decoder's output to the vocabulary size, converting embeddings into a score for each word. It consists of a linear layer followed by a softmax; here log_softmax is used to avoid underflow and make this step more numerically stable.
Here's the code for the final linear layer:
class ProjectionLayer(nn.Module):
    def __init__(self, d_model: int, vocab_size: int) -> None:
        super().__init__()
        self.proj = nn.Linear(d_model, vocab_size)

    def forward(self, x):
        # (batch, seq_len, d_model) -> (batch, seq_len, vocab_size) log-probabilities
        return torch.log_softmax(self.proj(x), dim=-1)
Explanation:
- Input: The decoder's output (shape: [batch, seq_len, d_model]).
- Output: Log probabilities for each word in the vocabulary (shape: [batch, seq_len, vocab_size]).
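During greedy decoding, for example, you would pick the most likely token at each position from these log probabilities. A toy illustration (the random tensor stands in for a real decoder output):

import torch

proj = ProjectionLayer(d_model=512, vocab_size=10000)
decoder_output = torch.randn(2, 10, 512)  # stand-in for a real decoder output

log_probs = proj(decoder_output)          # (2, 10, 10000)
predicted_ids = log_probs.argmax(dim=-1)  # (2, 10) most likely token id per position
print(predicted_ids.shape)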
Transformer Model
The Transformer ties everything together: the encoder, decoder, embeddings, positional encodings, and projection layer.
Here's the code:
class Transformer(nn.Module):
    def __init__(self, encoder: Encoder, decoder: Decoder, src_embed: InputEmbedding, tgt_embed: InputEmbedding, src_pos: PositionalEncoding, tgt_pos: PositionalEncoding, projection_layer: ProjectionLayer) -> None:
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.src_embed = src_embed
        self.tgt_embed = tgt_embed
        self.src_pos = src_pos
        self.tgt_pos = tgt_pos
        self.projection_layer = projection_layer

    def encode(self, src, src_mask):
        src = self.src_embed(src)
        src = self.src_pos(src)
        return self.encoder(src, src_mask)

    def decode(self, encoder_output, tgt, src_mask, tgt_mask):
        tgt = self.tgt_embed(tgt)
        tgt = self.tgt_pos(tgt)
        return self.decoder(tgt, encoder_output, src_mask, tgt_mask)

    def project(self, decoder_output):
        return self.projection_layer(decoder_output)
Final Build Function Block
The final block is a helper function that constructs the entire Transformer model by combining all the components we have seen so far. It lets us specify the hyperparameters used in the paper.
Here is the code:
def build_transformer(src_vocab_size: int, tgt_vocab_size: int, src_seq_len: int, tgt_seq_len: int,
                      d_model: int = 512, N: int = 6, h: int = 8, dropout: float = 0.1, d_ff: int = 2048) -> Transformer:
    # Create the embedding layers for source and target
    src_embed = InputEmbedding(d_model, src_vocab_size)
    tgt_embed = InputEmbedding(d_model, tgt_vocab_size)

    # Create positional encoding layers for source and target
    src_pos = PositionalEncoding(d_model, src_seq_len, dropout)
    tgt_pos = PositionalEncoding(d_model, tgt_seq_len, dropout)

    # Create the encoder blocks
    encoder_blocks = []
    for _ in range(N):
        encoder_self_attention_block = MultiHeadAttentionBlock(d_model, h, dropout)
        feed_forward_block = FeedForwardBlock(d_model, d_ff, dropout)
        encoder_block = EncoderBlock(encoder_self_attention_block, feed_forward_block, dropout)
        encoder_blocks.append(encoder_block)

    # Create the decoder blocks
    decoder_blocks = []
    for _ in range(N):
        decoder_self_attention_block = MultiHeadAttentionBlock(d_model, h, dropout)
        decoder_cross_attention_block = MultiHeadAttentionBlock(d_model, h, dropout)
        feed_forward_block = FeedForwardBlock(d_model, d_ff, dropout)
        decoder_block = DecoderBlock(decoder_self_attention_block, decoder_cross_attention_block, feed_forward_block, dropout)
        decoder_blocks.append(decoder_block)

    # Create the encoder and decoder
    encoder = Encoder(nn.ModuleList(encoder_blocks))
    decoder = Decoder(nn.ModuleList(decoder_blocks))

    # Create the projection layer
    projection_layer = ProjectionLayer(d_model, tgt_vocab_size)

    # Create the Transformer model
    transformer = Transformer(encoder, decoder, src_embed, tgt_embed, src_pos, tgt_pos, projection_layer)

    # Initialize the parameters using Xavier initialization
    for p in transformer.parameters():
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)

    return transformer
The function takes these hyperparameters as input:
- src_vocab_size: The size of the source vocabulary (number of unique tokens in the source language).
- tgt_vocab_size: The size of the target vocabulary (number of unique tokens in the target language).
- src_seq_len: The maximum sequence length for the source input.
- tgt_seq_len: The maximum sequence length for the target input.
- d_model: The dimensionality of the model (default: 512).
- N: The number of encoder and decoder blocks (default: 6).
- h: The number of attention heads in the multi-head attention mechanism (default: 8).
- dropout: The dropout rate to prevent overfitting (default: 0.1).
- d_ff: The dimensionality of the feed-forward network (default: 2048).
We can use this function to create a Transformer model with the desired hyperparameters. For example:
src_vocab_size = 10000
tgt_vocab_size = 10000
src_seq_len = 50
tgt_seq_len = 50
transformer = build_transformer(src_vocab_size, tgt_vocab_size, src_seq_len, tgt_seq_len)
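Once built, the model can be run end to end with its encode, decode, and project methods. The dummy token ids and very simple masks below are purely illustrative (a real setup would build proper padding and causal masks from the data), and the snippet assumes the Encoder and Decoder wrappers sketched earlier:

import torch

batch_size = 2
src_tokens = torch.randint(0, src_vocab_size, (batch_size, src_seq_len))
tgt_tokens = torch.randint(0, tgt_vocab_size, (batch_size, tgt_seq_len))

# Toy masks: attend everywhere for the source, causal (lower-triangular) for the target
src_mask = torch.ones(batch_size, 1, 1, src_seq_len)
tgt_mask = torch.tril(torch.ones(tgt_seq_len, tgt_seq_len)).unsqueeze(0).unsqueeze(0)

encoder_output = transformer.encode(src_tokens, src_mask)                            # (2, 50, 512)
decoder_output = transformer.decode(encoder_output, tgt_tokens, src_mask, tgt_mask)  # (2, 50, 512)
log_probs = transformer.project(decoder_output)                                      # (2, 50, tgt_vocab_size)
print(log_probs.shape)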
"Attention Is All You Need" Changed the AI World
- Parallelization: RNNs process words sequentially, but Transformers process entire sentences in parallel. This drastically reduces training time.
- Versatility: The attention mechanism can be adapted to various tasks, such as translation, text classification, question answering, computer vision, speech recognition, and many more.
- A Helping Hand for Foundation Models: This architecture paved the way for massive LLMs like BERT, GPT, and T5.
Conclusion and Closing words
In a very short time, the Transformer has gone from a novel idea to the backbone of most state-of-the-art NLP systems. Its impact on the AI industry has been enormous.
This wraps up our guide to a basic Transformer code implementation. We covered everything from embeddings and positional encodings to multi-head attention and feed-forward networks, and explained how it all ties together in the final architecture.
Feel free to experiment with different hyperparameters. The beauty of Transformers is their flexibility.
Thank you for reading!
Sources:
- Vaswani et al., "Attention Is All You Need," 2017
- Jay Alammar's Illustrated Transformer blog
- Umar Jamil: Attention Is All You Need
- ChatGPT
Published via Towards AI