---
license: mit
task_categories:
  - fill-mask
tags:
  - pretraining
  - encoder
  - multilingual
---

# mmBERT Decay Phase Data

License: MIT · [Paper](https://arxiv.org/abs/2509.06888) · Models · GitHub

Phase 3 of 3: the annealed language learning decay phase (100B tokens), with a massive multilingual expansion to 1833 languages.

## 📊 Data Composition

**Note:** there are multiple decay data mixtures. The mixture described below is the Decay-Cont mixture; however, the data in this repository is the Decay-Eng mixture. If you are interested in the other mixtures, please let me know so I can prioritize them.

| Data Source | Tokens (B) | Percentage | Description |
|---|---|---|---|
| FineWeb2 | 78.5 | 76.0% | High-quality multilingual web crawl data |
| Wikipedia (MegaWika) | 9.5 | 9.2% | Encyclopedia articles (1833 languages) |
| ArXiv | 3.3 | 3.2% | Academic preprints |
| Textbooks (ProLong) | 3.1 | 3.0% | Educational content |
| Code (ProLong) | 2.8 | 2.7% | Code repositories and files |
| Books | 2.2 | 2.1% | Literature and reference books |
| DCLM (Dolmino) | 2.0 | 2.0% | High-quality English web data |
| Tulu Flan | 1.0 | 1.0% | Instruction-following data |
| Starcoder | 0.5 | 0.5% | Code repositories |
| Dolmino Math | 0.5 | 0.5% | Mathematical content |
| **Total** | **103.3** | **100.0%** | Optimized for rapid language acquisition |

## 🌍 Massive Language Coverage

This phase dramatically expands language coverage to 1833 languages, implementing the novel Cascading Annealed Language Learning (ALL) approach:

- **Temperature Schedule:** τ=0.3 (most uniform sampling)
- **Low-resource Focus:** includes 1723 new languages with minimal data
- **Rapid Learning:** demonstrates a 68% performance improvement on Tigrinya and 26% on Faroese
- **Script Diversity:** covers virtually all writing systems present in FineWeb2

### Key Innovation: Annealed Language Learning

Rather than training on all languages simultaneously, mmBERT uses a cascading approach:

1. **Phase 1:** 60 high-resource languages (τ=0.7)
2. **Phase 2:** 110 languages including mid-resource (τ=0.5)
3. **Phase 3:** 1833 languages with focus on low-resource (τ=0.3)

This enables rapid learning of new languages while maintaining performance on high-resource ones.
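
To make the annealing concrete, here is a minimal sketch of temperature-based language sampling, where each language's probability is proportional to its token count raised to the power τ, so lower τ flattens the distribution toward low-resource languages. The formula and the toy token counts are illustrative assumptions, not the exact mmBERT sampling code:

```python
import numpy as np

def language_sampling_probs(token_counts, tau):
    """Temperature-scaled sampling: p_i is proportional to n_i ** tau.
    Lower tau flattens the distribution, up-weighting low-resource languages."""
    counts = np.array(list(token_counts.values()), dtype=float)
    weights = counts ** tau
    probs = weights / weights.sum()
    return dict(zip(token_counts, probs))

# Toy token counts in billions (illustrative only, not real corpus statistics)
toy_counts = {"eng": 500.0, "deu": 60.0, "swh": 1.0, "fao": 0.05}

for tau in (0.7, 0.5, 0.3):  # the Phase 1, 2, and 3 temperatures
    probs = language_sampling_probs(toy_counts, tau)
    print(f"tau={tau}:", {lang: round(p, 3) for lang, p in probs.items()})
```

Running this shows how the lowest-resource languages receive a noticeably larger share of the sampling budget at τ=0.3 than at τ=0.7, which is the mechanism behind the rapid low-resource gains reported below.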

βš™οΈ Key Features

- **Ultra-low Masking:** 5% mask rate for optimal learning efficiency (see the sketch after this list)
- **Model Merging:** three decay variants (English-focused, 110-language, 1833-language) merged using TIES; this repository corresponds to the English-focused variant
- **Quality Focus:** emphasizes the highest-quality data sources
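
As an illustration of the ultra-low mask rate, the sketch below configures a standard Hugging Face MLM collator with `mlm_probability=0.05`. The tokenizer identifier is an assumption; the actual decay-phase masking is implemented in the training repo linked below and may differ in detail:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Assumed checkpoint name -- substitute whichever mmBERT tokenizer you actually use.
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmBERT-base")

# Masked-language-modeling collator with the decay phase's 5% mask rate.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.05,
)

encoded = tokenizer(["A short example sentence for masked language modeling."])
batch = collator([{"input_ids": ids} for ids in encoded["input_ids"]])
print(batch["input_ids"].shape, batch["labels"].shape)
```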

## 🚀 Usage

For decay phase training, see the ModernBERT repo: https://github.com/AnswerDotAI/ModernBERT

### Direct Access

```python
from streaming import StreamingDataset

# Load the streaming dataset
dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/mmbert-decay',
    local='/tmp/mmbert-decay-data',
    shuffle=True
)

# Access samples
for sample in dataset:
    text = sample['text']
    # Process your data...
```
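
For batched iteration, the dataset can also be wrapped in a regular PyTorch `DataLoader`. This is a sketch under the same assumptions as the snippet above (the remote path and a `'text'` field per sample):

```python
from torch.utils.data import DataLoader
from streaming import StreamingDataset

dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/mmbert-decay',
    local='/tmp/mmbert-decay-batched',  # separate local cache for this example
    shuffle=True,
)

# StreamingDataset plugs into a standard DataLoader for batched iteration.
loader = DataLoader(dataset, batch_size=8)

for batch in loader:
    texts = batch['text']  # list of raw text strings in this batch
    break
```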

## 🎯 Performance Impact

The decay phase demonstrates remarkable efficiency in low-resource language learning:

- **Tigrinya (TiQuAD):** 68% improvement (12.1 F1 points) from including the language
- **Faroese (FoQA):** 26% improvement (15.4 F1 points)
- **SOTA Performance:** can even outperform GPT-4o and Gemini 2.5 Pro
- **Rapid Acquisition:** significant gains with only 100B tokens of exposure

## 🔗 Related Resources

- Paper: [arXiv:2509.06888](https://arxiv.org/abs/2509.06888)
- Training code: [AnswerDotAI/ModernBERT](https://github.com/AnswerDotAI/ModernBERT)

## Citation

```bibtex
@misc{marone2025mmbertmodernmultilingualencoder,
      title={mmBERT: A Modern Multilingual Encoder with Annealed Language Learning},
      author={Marc Marone and Orion Weller and William Fleshman and Eugene Yang and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2509.06888},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.06888},
}
```