KeDiff: Key Similarity-Based KV Cache Eviction for Long-Context LLM Inference in Resource-Constrained Environments Paper • 2504.15364 • Published Apr 21, 2025 • 3
Direct Alignment of Draft Model for Speculative Decoding with Chat-Fine-Tuned LLMs Paper • 2403.00858 • Published Feb 29, 2024 • 1
CAOTE: KV Caching through Attention Output Error based Token Eviction Paper • 2504.14051 • Published Apr 18, 2025 • 1
Recursive Speculative Decoding: Accelerating LLM Inference via Sampling Without Replacement Paper • 2402.14160 • Published Feb 21, 2024 • 1
On Speculative Decoding for Multimodal Large Language Models Paper • 2404.08856 • Published Apr 13, 2024 • 13