- InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU (arXiv:2502.08910)
- TransMLA: Multi-head Latent Attention Is All You Need (arXiv:2502.07864)