arxiv:2505.02922

RetroInfer: A Vector-Storage Approach for Scalable Long-Context LLM Inference

Published on May 5 · Submitted by iofu728 on May 7

Abstract

The growing context lengths of large language models (LLMs) pose significant challenges for efficient inference, primarily due to GPU memory and bandwidth constraints. We present RetroInfer, a novel system that reconceptualizes the key-value (KV) cache as a vector storage system which exploits the inherent attention sparsity to accelerate long-context LLM inference. At its core is the wave index, an Attention-aWare VEctor index that enables efficient and accurate retrieval of critical tokens through techniques such as tripartite attention approximation, accuracy-bounded attention estimation, and segmented clustering. Complementing this is the wave buffer, which coordinates KV cache placement and overlaps computation and data transfer across GPU and CPU to sustain high throughput. Unlike prior sparsity-based methods that struggle with token selection and hardware coordination, RetroInfer delivers robust performance without compromising model accuracy. Experiments on long-context benchmarks show up to 4.5X speedup over full attention within GPU memory limits and up to 10.5X over sparse attention baselines when KV cache is extended to CPU memory, all while preserving full-attention-level accuracy.
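To make the retrieval idea concrete, here is a minimal, hypothetical sketch of cluster-based KV retrieval in the spirit described above: cached keys are grouped into clusters, the query is scored only against cluster centroids, and exact attention runs over the tokens of the top-scoring clusters. The function name, the plain k-means-style clustering, and the simple top-cluster selection rule are illustrative assumptions, not RetroInfer's wave index.

```python
import numpy as np

def clustered_sparse_attention(q, K, V, centroids, assignments, n_probe=4):
    """Toy cluster-based sparse attention: score the query against cluster
    centroids, fetch only tokens from the best clusters, then run exact
    softmax attention over that small subset.

    q:           (d,)   current query vector
    K, V:        (n, d) cached keys / values (conceptually held off-GPU)
    centroids:   (c, d) cluster centers of the cached keys
    assignments: (n,)   cluster id of each cached token
    n_probe:     number of clusters whose tokens are retrieved exactly
    """
    # Rank clusters by query-centroid similarity (a cheap proxy for attention mass).
    top_clusters = np.argsort(centroids @ q)[-n_probe:]

    # Gather only the "critical" tokens belonging to the selected clusters.
    mask = np.isin(assignments, top_clusters)
    K_sel, V_sel = K[mask], V[mask]

    # Exact scaled-dot-product attention over the retrieved subset.
    logits = K_sel @ q / np.sqrt(K.shape[1])
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ V_sel
```

Per the abstract, the quality of the clustering, the accuracy-bounded estimation of the attention mass that is skipped, and GPU/CPU placement are what make the real system work; this snippet only shows the retrieval skeleton.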

Community

🚀 Meet RetroInfer: a new system that rethinks the KV cache as vector storage in a GPU–CPU co-execution setup to accelerate long-context LLM inference. Powered by the wave index and wave buffer, it achieves 4.5×–10.5× speedups over FlashAttention without accuracy loss.
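The GPU–CPU co-execution side hinges on keeping the GPU busy while KV data streams in from host memory. The sketch below is a generic PyTorch illustration of that overlap pattern (pinned host memory, a side CUDA stream, non-blocking copies, block-wise attention); it assumes a CUDA device, and it is not RetroInfer's wave buffer.

```python
import torch

def attend_over_cpu_blocks(q, cpu_keys, cpu_values):
    """Block-wise attention over a KV cache held in CPU memory, overlapping
    host-to-device copies (on a side stream) with compute on the default stream.

    q:                     (d,) query tensor already on the GPU
    cpu_keys, cpu_values:  lists of (n_i, d) CPU tensors (one KV block each)
    """
    device = q.device
    copy_stream = torch.cuda.Stream()

    # Pinned host memory lets the async H2D copies run without extra staging.
    cpu_keys = [k.pin_memory() for k in cpu_keys]
    cpu_values = [v.pin_memory() for v in cpu_values]

    # Prefetch the first block on the side stream.
    with torch.cuda.stream(copy_stream):
        k_next = cpu_keys[0].to(device, non_blocking=True)
        v_next = cpu_values[0].to(device, non_blocking=True)

    out, denom = torch.zeros_like(q), torch.zeros((), device=device)
    scale = q.shape[-1] ** -0.5

    for i in range(len(cpu_keys)):
        # Wait for the in-flight copy, then immediately start the next one.
        torch.cuda.current_stream().wait_stream(copy_stream)
        k_cur, v_cur = k_next, v_next
        if i + 1 < len(cpu_keys):
            with torch.cuda.stream(copy_stream):
                k_next = cpu_keys[i + 1].to(device, non_blocking=True)
                v_next = cpu_values[i + 1].to(device, non_blocking=True)

        # Partial attention over the current block. A real kernel would use a
        # numerically stable online softmax (running max) across blocks; plain
        # exp() keeps this sketch short.
        w = torch.exp(k_cur @ q * scale)   # (n_i,) unnormalized weights
        out = out + w @ v_cur              # accumulate unnormalized output
        denom = denom + w.sum()

    return out / denom
```

In the paper's design, the wave buffer additionally decides which parts of the KV cache live in GPU versus CPU memory; the snippet only shows the transfer/compute overlap.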

For example, when processing 120K-token contexts on a single A100, RetroInfer attains a decoding speed of 386 tokens/s, significantly faster than the 86 tokens/s achieved with FlashAttention. Moreover, RetroInfer efficiently extends the context length supported by a single GPU: it can process 1M-token contexts at a decoding speed of 27 tokens/s, whereas prior GPU-CPU inference solutions reach at most 2.63 tokens/s.

The source code will be released soon. Stay tuned!
