Optimizing Transformer Inference with Grouped Query Attention

Last Updated on September 29, 2025 by Editorial Team

Author(s): Deepanshu

Originally published on Towards AI.


In the relentless race to build larger and more capable Large Language Models (LLMs), we often celebrate breakthroughs in model architecture and training scale. However, some of the most impactful innovations are less glamorous: they are clever engineering tricks that make these LLMs practical to run. Grouped Query Attention (GQA) [2] is one such innovation.

It’s a critical component in many open-source models such as Llama 2, Llama 3, and Mixtral, enabling them to handle longer contexts and run inference faster. It is an elegant compromise between two extremes: the powerful but memory-hungry Multi-Head Attention (MHA) and the lean but potentially quality-degrading Multi-Query Attention (MQA).

In this blog, we’ll take a deep dive into Grouped Query Attention. We’ll start with a quick refresher on its predecessors, then unravel the mathematical machinery of GQA, and finally explore why this “middle-ground” approach has become a new standard for efficient transformer architectures.

The Attention Landscape

To understand GQA, we must first understand the problem it solves. The bottleneck in LLM inference is often memory bandwidth, not just computation. Specifically, loading the attention mechanism’s Key-Value (KV) cache from high-bandwidth memory (HBM) into on-chip SRAM is a major limiting factor in how quickly a model can generate the next token.

Let’s look at the two architectures that came before GQA.

Multi-Head Attention

This is the classic attention mechanism, introduced in the “Attention Is All You Need” [1] paper. The idea is to allow the model to jointly attend to information from different representation subspaces at different positions.

Instead of one big attention calculation, this mechanism splits the queries (Q), keys (K), and values (V) into multiple smaller “heads.”

Process:

  1. The input embedding of dimension d_model is linearly projected into h different sets of queries, keys, and values.
  2. Each of these h heads has a dimension d_head (where h * d_head = d_model).
  3. Scaled dot-product attention is computed in parallel for each head.
  4. The outputs of all heads are concatenated and projected back to the original d_model dimension.
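
To make these steps concrete, here is a minimal PyTorch sketch of Multi-Head Attention (no masking or dropout); the tensor sizes are illustrative and not tied to any particular model.

# Minimal Multi-Head Attention sketch (illustrative sizes, no masking/dropout).
import torch
import torch.nn.functional as F

b, s, d_model, h = 2, 16, 512, 8                       # batch, seq_len, model dim, heads
d_head = d_model // h                                  # h * d_head = d_model

x = torch.randn(b, s, d_model)                         # input embeddings

# Step 1: project into h sets of queries, keys, and values
W_Q = torch.randn(d_model, h * d_head)
W_K = torch.randn(d_model, h * d_head)
W_V = torch.randn(d_model, h * d_head)
q = (x @ W_Q).view(b, s, h, d_head).transpose(1, 2)    # (b, h, s, d_head)
k = (x @ W_K).view(b, s, h, d_head).transpose(1, 2)
v = (x @ W_V).view(b, s, h, d_head).transpose(1, 2)

# Steps 2-3: scaled dot-product attention, in parallel for each head
scores = q @ k.transpose(-2, -1) / d_head ** 0.5       # (b, h, s, s)
out = F.softmax(scores, dim=-1) @ v                    # (b, h, s, d_head)

# Step 4: concatenate heads and project back to d_model
W_O = torch.randn(d_model, d_model)
out = out.transpose(1, 2).reshape(b, s, d_model) @ W_O # (b, s, d_model)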

The main drawback of this mechanism is the KV cache. During autoregressive decoding (i.e., generating text token by token), the keys and values for all previous tokens are cached to avoid re-computation.

So for Multi-Head Attention, we must store separate Key and Value vectors for each head. The size of the KV cache for a single layer, counted in number of stored elements, then becomes:

Cache Size = 2 * (batch_size * seq_len * num_heads * d_head)

As the context length (seq_len) grows, this cache becomes enormous, consuming gigabytes of VRAM and saturating memory bandwidth. For a large model with 64 heads, this is a significant cost.
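
To get a feel for the numbers, here is a back-of-the-envelope calculation in plain Python. The configuration (64 heads of dimension 128, fp16 cache entries, a context of 8192 tokens) is assumed purely for illustration.

# KV cache size for one Multi-Head Attention layer (illustrative configuration).
batch_size, seq_len = 1, 8192
num_heads, d_head = 64, 128
bytes_per_elem = 2                                            # fp16 / bf16

cache_elems = 2 * batch_size * seq_len * num_heads * d_head   # keys + values
cache_bytes = cache_elems * bytes_per_elem
print(f"{cache_bytes / 1e9:.2f} GB per layer")                # ~0.27 GB per layer
# Multiplied over all layers of a large model (e.g., 80), this is tens of gigabytes.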

Multi-Query Attention

This mechanism was proposed as a radical solution to the KV cache problem. Instead of having h separate Key and Value heads, it has a single Key head and a single Value head, shared across all h Query heads.

Process:

  1. Project the input into h Query heads.
  2. Project the input into just one Key head and one Value head.
  3. All h Query heads perform attention using the same Key and Value.

The KV cache is thus reduced by a factor of h, which is a massive saving. However, the drawback is that it can sometimes lead to a drop in model quality, because every query head is forced to pull information from the same, single representation of keys and values.
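
For comparison with the MHA sketch earlier, here is how the shapes change under Multi-Query Attention (again with purely illustrative sizes); the single Key/Value head is simply broadcast across all query heads.

# Multi-Query Attention sketch: h query heads, one shared Key/Value head.
import torch
import torch.nn.functional as F

b, s, d_model, h = 2, 16, 512, 8
d_head = d_model // h

x = torch.randn(b, s, d_model)
W_Q = torch.randn(d_model, h * d_head)                  # h query heads
W_K = torch.randn(d_model, d_head)                      # a single key head
W_V = torch.randn(d_model, d_head)                      # a single value head

q = (x @ W_Q).view(b, s, h, d_head).transpose(1, 2)     # (b, h, s, d_head)
k = (x @ W_K).unsqueeze(1)                              # (b, 1, s, d_head)
v = (x @ W_V).unsqueeze(1)                              # (b, 1, s, d_head)

# The single K/V head broadcasts across all h query heads.
scores = q @ k.transpose(-2, -1) / d_head ** 0.5        # (b, h, s, s)
out = F.softmax(scores, dim=-1) @ v                     # (b, h, s, d_head)
# Only k and v need to be cached: the KV cache is h times smaller than in MHA.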

Grouped Query Attention

This attention mechanism strikes a balance between Multi-Head Attention and Multi-Query Attention. The idea is simple: instead of sharing a single Key/Value head across all query heads (as in MQA) or giving every query head its own (as in MHA), we partition the query heads into groups, and each group shares a single Key/Value head.

Let’s define some terms:

  • h_q: Total number of query heads
  • h_kv: Number of Key/Value heads; this equals the number of groups
  • Group size: h_q / h_kv (for example, h_q = 8 and h_kv = 2 gives groups of 4 query heads; see the short sketch after this list)
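
A few lines of Python make the grouping explicit; the head counts below are chosen only for illustration.

# Which Key/Value head serves which query head (illustrative counts).
h_q, h_kv = 8, 2                 # 8 query heads, 2 KV heads (groups)
g = h_q // h_kv                  # group size = 4
for q_head in range(h_q):
    kv_head = q_head // g        # heads 0-3 share KV head 0, heads 4-7 share KV head 1
    print(f"query head {q_head} -> KV head {kv_head}")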

Mathematical Deep Dive

Let’s assume:

  • Batch size: b
  • Sequence length: s
  • Model dimension: d_model
  • Number of Query heads: h_q
  • Number of Key/Value heads (groups): h_kv
  • Head dimension: d_head
  • The input tensor X has a shape of (b, s, d_model).

First, we project our input X into Q, K, and V matrices. This is done using learned weight matrices.

Since we need h_q query heads, the weight matrix W_Q has shape (d_model, h_q * d_head):

Q = X * W_Q

The resulting Q matrix has shape (b, s, h_q * d_head). We then reshape this to (b, s, h_q, d_head) to separate the heads.

Since we only need h_kv Key/Value heads, the weight matrices W_K and W_V each have shape (d_model, h_kv * d_head):

K = X * W_K

V = X * W_V

The resulting K and V matrices will have shape (b, s, h_kv * d_head). We reshape these to (b, s, h_kv, d_head).

Now we have h_q query heads but only h_kv key/value heads. To perform the attention score calculation, the number of heads must match. We achieve this by “sharing” the K/V heads across the query heads in their respective groups.

In practice, this is implemented by repeating the K and V heads to match the number of Q heads. Let g = h_q / h_kv be the group size; each of the h_kv K and V heads must then be repeated g times.
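
In code, this repetition step might look like the following sketch (PyTorch, with illustrative sizes; the shapes follow the walkthrough above).

# GQA projections plus K/V repetition, following the shapes in the walkthrough.
import torch

b, s, d_model = 2, 16, 512
h_q, h_kv = 8, 2                           # query heads and KV heads (groups)
d_head = d_model // h_q
g = h_q // h_kv                            # group size

x = torch.randn(b, s, d_model)
W_Q = torch.randn(d_model, h_q * d_head)
W_K = torch.randn(d_model, h_kv * d_head)
W_V = torch.randn(d_model, h_kv * d_head)

q = (x @ W_Q).view(b, s, h_q, d_head)      # (b, s, h_q, d_head)
k = (x @ W_K).view(b, s, h_kv, d_head)     # (b, s, h_kv, d_head)
v = (x @ W_V).view(b, s, h_kv, d_head)

# Repeat each K/V head g times along the head dimension so the head
# counts match: (b, s, h_kv, d_head) -> (b, s, h_q, d_head).
k_rep = k.repeat_interleave(g, dim=2)
v_rep = v.repeat_interleave(g, dim=2)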

Let’s visualize the tensors’ logical shapes for clarity (ignoring batch and sequence length for a moment):

  • Q: (h_q, d_head)
  • K: (h_kv, d_head)
  • V: (h_kv, d_head)

To perform the calculation, we can reshape Q to explicitly show the groups:

  • Q_reshaped: (h_kv, g, d_head)

And then repeat K and V:

  • K_repeated: We can think of this as expanding K to a shape of (h_kv, g, d_head), where each of the g items within a group i is a copy of K[i].
  • V_repeated: Similarly expanded to (h_kv, g, d_head).

With the shapes aligned, we can now perform the standard attention calculation. The operation is identical to MHA, but the content of the K and V tensors is structured differently due to the repetition.

Scores = Q * K_repeated.transpose(-2, -1)

The shape of Scores will be (b, h_q, s, s); as in standard attention, the head dimension is moved ahead of the sequence dimension, so Q and K_repeated are treated as (b, h_q, s, d_head) for this matrix multiplication. The calculation effectively happens in parallel for all h_q query heads, but the key vectors each head is compared against are shared within its group.

Scaled_Scores = Scores / sqrt(d_head)

Attention_Weights = softmax(Scaled_Scores, dim=-1)

Output = Attention_Weights * V_repeated

The shape of Output will be (b, h_q, s, d_head); moving the head dimension back after the sequence dimension gives (b, s, h_q, d_head).

Finally, we concatenate the outputs of all heads and project them back to the model’s dimension. The Output tensor is reshaped from (b, s, h_q, d_head) to (b, s, h_q * d_head), which is (b, s, d_model).

This concatenated output is then passed through a final linear layer, governed by a weight matrix W_O of shape (d_model, d_model).

Final_Output = Output_concatenated * W_O
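
Putting the whole walkthrough together, here is a compact, self-contained sketch of a GQA forward pass in PyTorch. It mirrors the equations above; causal masking, dropout, and KV caching are omitted, and all sizes are illustrative.

# End-to-end Grouped Query Attention forward pass (illustrative sketch).
import torch
import torch.nn.functional as F

def gqa_forward(x, W_Q, W_K, W_V, W_O, h_q, h_kv):
    b, s, d_model = x.shape
    d_head = d_model // h_q
    g = h_q // h_kv                                            # group size

    # Projections and head split
    q = (x @ W_Q).view(b, s, h_q, d_head).transpose(1, 2)      # (b, h_q, s, d_head)
    k = (x @ W_K).view(b, s, h_kv, d_head).transpose(1, 2)     # (b, h_kv, s, d_head)
    v = (x @ W_V).view(b, s, h_kv, d_head).transpose(1, 2)

    # Share each K/V head across its group of query heads
    k = k.repeat_interleave(g, dim=1)                          # (b, h_q, s, d_head)
    v = v.repeat_interleave(g, dim=1)

    # Standard scaled dot-product attention
    scores = q @ k.transpose(-2, -1) / d_head ** 0.5           # (b, h_q, s, s)
    weights = F.softmax(scores, dim=-1)
    out = weights @ v                                          # (b, h_q, s, d_head)

    # Concatenate heads and apply the output projection
    out = out.transpose(1, 2).reshape(b, s, d_model)           # (b, s, d_model)
    return out @ W_O

# Example usage with illustrative sizes
b, s, d_model, h_q, h_kv = 2, 16, 512, 8, 2
d_head = d_model // h_q
W_Q = torch.randn(d_model, h_q * d_head)
W_K = torch.randn(d_model, h_kv * d_head)
W_V = torch.randn(d_model, h_kv * d_head)
W_O = torch.randn(d_model, d_model)
y = gqa_forward(torch.randn(b, s, d_model), W_Q, W_K, W_V, W_O, h_q, h_kv)
print(y.shape)                                                 # torch.Size([2, 16, 512])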

Impact of Grouped Query Attention

Let’s compare the cache sizes for each of the attention mechanisms we’ve discussed in this blog.

  • Multi-Head Attention cache: 2 * b * s * h_q * d_head
  • Grouped Query Attention cache: 2 * b * s * h_kv * d_head
  • Multi-Query Attention cache: 2 * b * s * 1 * d_head

Let’s take the Llama 2 70B model, one of the open-source models that uses Grouped Query Attention. With h_q = 64, h_kv = 8, and d_head = 128, Grouped Query Attention reduces the KV cache size by a factor of 64 / 8 = 8x compared to a hypothetical Multi-Head Attention version. For a sequence length of 8192, this saves tens of gigabytes of VRAM, making it possible to run such long contexts on existing hardware.
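
The saving can be sanity-checked with a few lines of arithmetic, assuming fp16 cache entries and 80 layers (the commonly reported Llama 2 70B configuration).

# KV cache comparison for a Llama-2-70B-like configuration (fp16, 80 layers assumed).
b, s = 1, 8192
h_q, h_kv, d_head = 64, 8, 128
n_layers, bytes_per_elem = 80, 2

def kv_cache_gb(num_kv_heads):
    return 2 * b * s * num_kv_heads * d_head * n_layers * bytes_per_elem / 1e9

print(f"MHA: {kv_cache_gb(h_q):.1f} GB")     # ~21.5 GB
print(f"GQA: {kv_cache_gb(h_kv):.1f} GB")    # ~2.7 GB (8x smaller)
print(f"MQA: {kv_cache_gb(1):.1f} GB")       # ~0.3 GB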

This directly translates to:

  • Longer context windows: The model can “remember” more of the preceding text.
  • Larger batch sizes: More users can be served concurrently on the same hardware.

Another major impact is the increase in inference speed. By reducing the KV cache size, we reduce the amount of data that needs to be read from slow HBM to fast SRAM for each token generation. Since this memory transfer is the main bottleneck, GQA directly speeds up inference, leading to lower latency.

The Grouped Query Attention paper [2] also showed that a GQA model (uptrained from an existing Multi-Head Attention checkpoint) achieves nearly the same quality as the original Multi-Head Attention model, while being significantly better than a Multi-Query Attention model. It successfully captures the benefits of Multi-Query Attention (speed and memory efficiency) without paying the full price in quality degradation.

References

[1] A. Vaswani et al., “Attention Is All You Need,” arXiv:1706.03762

[2] J. Ainslie et al., “GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints,” arXiv:2305.13245
