Attention
- System 2 Attention (is something you might need too) • Paper 2311.11829 • Published Nov 20, 2023 • 43
- Transformers are Multi-State RNNs • Paper 2401.06104 • Published Jan 11, 2024 • 39
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits • Paper 2402.17764 • Published Feb 27, 2024 • 618
Mamba+Transformers
- Jamba: A Hybrid Transformer-Mamba Language Model • Paper 2403.19887 • Published Mar 28, 2024 • 111