FastKV: KV Cache Compression for Fast Long-Context Processing with Token-Selective Propagation • Paper 2502.01068 • Published Feb 3, 2025
Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models • Paper 2406.12311 • Published Jun 18, 2024