Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs
Abstract
Sailor2 is a family of cutting-edge multilingual language models for South-East Asian (SEA) languages, available in 1B, 8B, and 20B sizes to suit diverse applications. Building on Qwen2.5, Sailor2 undergoes continuous pre-training on 500B tokens (400B SEA-specific and 100B replay tokens) to support 13 SEA languages while retaining proficiency in Chinese and English. The Sailor2-20B model achieves a 50-50 win rate against GPT-4o across SEA languages. We also deliver a comprehensive cookbook on how to develop multilingual models efficiently, covering five key aspects: data curation, pre-training, post-training, model customization, and evaluation. We hope that the Sailor2 models (Apache 2.0 license) will drive language development in the SEA region, and that the Sailor2 cookbook will inspire researchers to build more inclusive LLMs for other under-served languages.
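The 400B SEA-specific to 100B replay split implies roughly a 4:1 mixing ratio between new-language data and English/Chinese replay data during continual pre-training. A minimal sketch of probability-based replay mixing, assuming a simple document-stream setup (the function name and parameters are illustrative, not Sailor2's actual pipeline):

```python
import itertools
import random


def mixed_stream(sea_docs, replay_docs, replay_frac=0.2, seed=0):
    """Yield training documents, drawing from the replay pool
    (e.g. English/Chinese data) with probability replay_frac.
    replay_frac=0.2 mirrors the 100B / 500B token split."""
    rng = random.Random(seed)
    sea = itertools.cycle(sea_docs)        # SEA-specific corpus
    replay = itertools.cycle(replay_docs)  # replay corpus
    while True:
        if rng.random() < replay_frac:
            yield next(replay)
        else:
            yield next(sea)
```

In a real pipeline the two pools would be sharded datasets rather than in-memory lists, but the sampling logic is the same: replay data is interleaved at a fixed rate so the model keeps its English and Chinese proficiency while absorbing SEA-language tokens.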
Community
Overall, the Sailor2 project contributes to the following outcomes:
- (1) A family of open models, optimized for Southeast Asian (SEA) languages;
- (2) A comprehensive cookbook detailing the process of building multilingual LLMs, covering data curation, model training, and thorough evaluation.
✈️ Building upon the foundation of Qwen2.5, Sailor2 is continually pre-trained on 500B high-quality tokens to support 15 languages: English, Chinese, Burmese, Cebuano, Ilocano, Indonesian, Javanese, Khmer, Lao, Malay, Sundanese, Tagalog, Thai, Vietnamese, and Waray.
🔨 During development, we employ a range of advanced technologies to ensure top-tier performance and efficiency:
1⃣️ model expansion 📈
2⃣️ optimized data mixing strategies 🧬
3⃣️ multi-stage pre-training protocols 🔬
4⃣️ advanced multilingual post-training ⚡️
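On item 1⃣️: the blog details Sailor2's exact expansion recipe; a common way to grow a pre-trained model (e.g. from 8B toward 20B) is depth-wise expansion, duplicating existing decoder layers so the larger model starts from copied rather than random weights. A toy sketch under that assumption, with strings standing in for transformer blocks:

```python
def expand_depth(layers, target_depth):
    """Grow a decoder stack to target_depth by copying existing
    layers, spreading the duplicates evenly across the stack.
    Returns a new layer list; element i copies source layer
    i * len(layers) // target_depth."""
    n = len(layers)
    if target_depth < n:
        raise ValueError("target_depth must be >= current depth")
    return [layers[i * n // target_depth] for i in range(target_depth)]
```

For example, expanding a 4-layer stack `["A", "B", "C", "D"]` to depth 6 yields `["A", "A", "B", "C", "C", "D"]`: every original layer is kept in order, with duplicates interleaved rather than stacked at one end.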
🎯 Our 20B chat model achieves a 50-50 win rate against GPT-4o on most SEA languages, with GPT-4o as the judge!
📚 Blog: https://sea-sailor.github.io/blog/sailor2/
🤖️ Model: https://huggingface.co/collections/sail/sailor2-language-models-674d7c9e6b4dbbd9a869906b
💬 Demo: https://huggingface.co/spaces/sail/Sailor2-20B-Chat
📣 Sailor2 Community: https://huggingface.co/sailor2