DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models • arXiv:2306.11698 • Published Jun 20, 2023
Benchmarking and Building Long-Context Retrieval Models with LoCo and M2-BERT • arXiv:2402.07440 • Published Feb 12, 2024
Simple linear attention language models balance the recall-throughput tradeoff • arXiv:2402.18668 • Published Feb 28, 2024
Just read twice: closing the recall gap for recurrent language models • arXiv:2407.05483 • Published Jul 7, 2024
LoLCATs: On Low-Rank Linearizing of Large Language Models • arXiv:2410.10254 • Published Oct 14, 2024
DataComp-LM: In search of the next generation of training sets for language models • arXiv:2406.11794 • Published Jun 17, 2024
Understanding the differences in Foundation Models: Attention, State Space Models, and Recurrent Neural Networks • arXiv:2405.15731 • Published May 24, 2024
Zoology: Measuring and Improving Recall in Efficient Language Models • arXiv:2312.04927 • Published Dec 8, 2023