SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features (arXiv:2502.14786)
Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity (arXiv:2502.13063)
Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs (arXiv:2502.12982)
Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention (arXiv:2502.11089)
InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU (arXiv:2502.08910)
TextAtlas5M: A Large-scale Dataset for Dense Text Image Generation (arXiv:2502.07870)
TransMLA: Multi-head Latent Attention Is All You Need (arXiv:2502.07864)
Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More (arXiv:2502.03738)
QuEST: Stable Training of LLMs with 1-Bit Weights and Activations (arXiv:2502.05003)
Scaling Pre-training to One Hundred Billion Data for Vision Language Models (arXiv:2502.07617)
LLMs Can Easily Learn to Reason from Demonstrations: Structure, not content, is what matters! (arXiv:2502.07374)