Should We Still Pretrain Encoders with Masked Language Modeling? Paper • 2507.00994 • Published 12 days ago • 72
Article Should We Still Pretrain Encoders with Masked Language Modeling? By Nicolas-BZRD and 3 others • 12 days ago • 20
Is Preference Alignment Always the Best Option to Enhance LLM-Based Translation? An Empirical Analysis Paper • 2409.20059 • Published Sep 30, 2024 • 17
Towards Trustworthy Reranking: A Simple yet Effective Abstention Mechanism Paper • 2402.12997 • Published Feb 20, 2024 • 9