dKV-Cache: The Cache for Diffusion Language Models
Abstract
A KV-cache-like mechanism, delayed KV-Cache, accelerates diffusion language models' inference without significantly degrading performance.
Diffusion Language Models (DLMs) have emerged as promising competitors to autoregressive language models, but they have long been constrained by slow inference. A core challenge is that their non-autoregressive architecture and bidirectional attention preclude the key-value cache that accelerates autoregressive decoding. We address this bottleneck by proposing a KV-cache-like mechanism, delayed KV-Cache, for the denoising process of DLMs. Our approach is motivated by the observation that different tokens exhibit distinct representation dynamics throughout the diffusion process. Accordingly, we propose a delayed and conditioned caching strategy for key and value states, and we design two complementary variants that cache keys and values step by step: (1) dKV-Cache-Decode, which provides almost lossless acceleration and even improves performance on long sequences, suggesting that existing DLMs may under-utilise contextual information during inference; and (2) dKV-Cache-Greedy, which caches more aggressively with a reduced cache lifespan, achieving higher speed-ups with quadratic time complexity at the cost of some performance degradation. Overall, dKV-Cache achieves a 2-10x inference speedup, largely narrowing the gap between autoregressive models and DLMs. We evaluate dKV-Cache on general language-understanding, mathematical, and code-generation benchmarks, where it delivers consistent acceleration. The experiments demonstrate that KV caching can also be used in DLMs, even in a training-free manner on top of current DLMs.
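To make the "delayed and conditioned caching" idea more concrete, here is a minimal, illustrative Python sketch. It assumes a hypothetical `model` object exposing `compute_kv`, `merge_kv`, and `denoise_step`; these names are placeholders, not the paper's actual implementation. The sketch only captures the core intuition behind dKV-Cache-Decode: a token's key/value states are written to the cache one step after the token is decoded, and cached positions are skipped in subsequent K/V computation.

```python
def denoise_with_delayed_kv(model, x_masked, num_steps):
    """Illustrative one-step-delayed KV caching inside a DLM denoising loop.

    This is a sketch under assumed interfaces, not the authors' code.
    """
    seq_len = len(x_masked)
    kv_cache = {}     # position -> (key, value) states frozen in the cache
    pending = set()   # positions decoded at the previous step, not yet cached

    x = x_masked
    for step in range(num_steps):
        # Recompute K/V only for positions that are not cached yet.
        active = [i for i in range(seq_len) if i not in kv_cache]
        k_new, v_new = model.compute_kv(x, positions=active)   # hypothetical call

        # Delayed caching: positions decoded at the *previous* step are cached
        # now, after their representations have had one extra step to settle.
        for j, i in enumerate(active):
            if i in pending:
                kv_cache[i] = (k_new[j], v_new[j])
        pending.clear()

        # Run one denoising step using cached plus freshly computed K/V.
        k_full, v_full = model.merge_kv(kv_cache, active, k_new, v_new)  # hypothetical
        x, decoded_now = model.denoise_step(x, k_full, v_full, step)
        pending.update(decoded_now)

    return x
```

The speedup comes from shrinking `active` over time: once most positions are cached, each denoising step recomputes key/value states for only a small remainder of the sequence.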
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:
- SentenceKV: Efficient LLM Inference via Sentence-Level Semantic KV Caching (2025)
- Model Reveals What to Cache: Profiling-Based Feature Reuse for Video Diffusion Models (2025)
- Efficient Pretraining Length Scaling (2025)
- Head-Aware KV Cache Compression for Efficient Visual Autoregressive Modeling (2025)
- Unifying Autoregressive and Diffusion-Based Sequence Generation (2025)
- FastVAR: Linear Visual Autoregressive Modeling via Cached Token Pruning (2025)
- PARD: Accelerating LLM Inference with Low-Cost PARallel Draft Model Adaptation (2025)