Sparse-dLLM: Accelerating Diffusion LLMs with Dynamic Cache Eviction (arXiv:2508.02558, Aug 2025)
LongLLaDA: Unlocking Long Context Capabilities in Diffusion LLMs (arXiv:2506.14429, Jun 17, 2025)
Beyond Homogeneous Attention: Memory-Efficient LLMs via Fourier-Approximated KV Cache (arXiv:2506.11886, Jun 13, 2025)