# mmBERT Pre-training Data P1

License: MIT | Paper | Models | GitHub

Phase 1 of 3: the diverse multilingual pre-training data mixture (trained on for 2.3T tokens) used to train the mmBERT model suite.

NOTE: Due to Hugging Face repository size limits, this is only part 1 (P1) of the pre-training data. You need to download all three parts and combine them into one folder; a download sketch follows.
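A minimal sketch of fetching all three parts with `huggingface_hub` (assuming it is installed; only the P1 repository id appears on this card, so the P2/P3 ids below are hypothetical placeholders):

```python
from pathlib import Path
from huggingface_hub import snapshot_download

# Only the P1 repo id is confirmed on this card; the P2/P3 ids are
# hypothetical placeholders -- substitute the actual repository names.
PARTS = [
    "jhu-clsp/mmbert-pretrain-p1-fineweb2-langs",
    "jhu-clsp/mmbert-pretrain-p2-PLACEHOLDER",
    "jhu-clsp/mmbert-pretrain-p3-PLACEHOLDER",
]

root = Path("mmbert-pretrain-data")
for repo_id in PARTS:
    # Download each part into its own subfolder, then merge the MDS
    # shard directories into one folder as the note above describes.
    snapshot_download(
        repo_id=repo_id,
        repo_type="dataset",
        local_dir=root / repo_id.rsplit("/", 1)[-1],
    )
```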

This dataset contains the pre-training phase data used to train all mmBERT encoder models. The data is provided in MDS format ready for use with Composer and the ModernBERT training repository.

## 📊 Data Composition

| Data Source | Tokens (B) | Percentage | Description |
|---|---:|---:|---|
| FineWeb2 | 1,196.6 | 60.2% | High-quality multilingual web crawl data |
| DCLM | 600.0 | 30.2% | High-quality English web crawl data |
| Starcoder | 100.6 | 5.1% | Code repositories and files |
| Arxiv | 27.8 | 1.4% | Academic preprints |
| StackExchange | 18.6 | 0.9% | Q&A forums |
| Tulu Flan | 15.3 | 0.8% | Instruction-following data |
| Dolmino Math | 11.2 | 0.6% | Mathematical content |
| PeS2o | 8.4 | 0.4% | Scientific papers |
| Wikipedia (MegaWika) | 4.7 | 0.2% | Encyclopedia articles |
| Books | 4.3 | 0.2% | Literature and reference books |
| StackExchange (Dolmino) | 1.4 | 0.1% | Curated Q&A content |
| **Total** | **1,989.0** | **100.0%** | Diverse mixture for foundation training |
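The percentages above are simply each source's share of the ~1,989B-token total; a throwaway sanity check (assuming nothing beyond the numbers in the table):

```python
# Token counts in billions, copied from the table above.
tokens = {
    "FineWeb2": 1196.6, "DCLM": 600.0, "Starcoder": 100.6,
    "Arxiv": 27.8, "StackExchange": 18.6, "Tulu Flan": 15.3,
    "Dolmino Math": 11.2, "PeS2o": 8.4, "Wikipedia (MegaWika)": 4.7,
    "Books": 4.3, "StackExchange (Dolmino)": 1.4,
}
total = sum(tokens.values())  # ~1989.0, matching the table up to rounding
for name, t in tokens.items():
    print(f"{name}: {100 * t / total:.1f}%")  # e.g. FineWeb2: 60.2%
```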

## 🌍 Language Coverage

This phase covers 60 languages plus code, using an inverse temperature sampling schedule that starts at τ=0.7 (see the sketch after the list below). Languages include:

- **High-resource:** English (34.5%), Russian (5.8%), German (4.4%), Spanish (4.5%), French (4.0%), Chinese (5.2%)
- **Mid-resource:** Italian, Portuguese, Japanese, Dutch, Polish, and 45 others
- **Scripts:** Latin, Cyrillic, Arabic, Chinese, Japanese, Thai, and many more
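To make the sampling schedule concrete, here is a small sketch of inverse temperature weighting (illustrative only; the language proportions below are invented, not the actual mmBERT mixture):

```python
import numpy as np

def temperature_weights(proportions, tau=0.7):
    """Exponentiate raw data proportions by tau and renormalize.

    With tau < 1 the distribution is flattened, so low-resource
    languages are sampled more often than their raw token share.
    """
    p = np.asarray(proportions, dtype=float) ** tau
    return p / p.sum()

# Hypothetical raw shares for a high-, mid-, and low-resource language.
raw = [0.80, 0.15, 0.05]
print(temperature_weights(raw))  # ~[0.69, 0.21, 0.10]
```

With τ=1 sampling matches the raw proportions exactly; smaller τ moves the distribution toward uniform, which is why starting at τ=0.7 boosts low-resource languages.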

## 🚀 Usage

For pre-training, see the ModernBERT repo: https://github.com/AnswerDotAI/ModernBERT

### Direct Access

```python
from streaming import StreamingDataset

# Load the streaming dataset (shards are cached locally as they stream)
dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/mmbert-pretrain-p1-fineweb2-langs',
    local='/tmp/mmbert-pretraining-data',
    shuffle=True
)

# Access samples
for sample in dataset:
    text = sample['text']
    # Process your data...
```
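Since `StreamingDataset` is a PyTorch `IterableDataset`, it can also be wrapped in a standard `DataLoader` (a minimal sketch; the batch size and the tokenization step are placeholders for your own setup):

```python
from torch.utils.data import DataLoader

# StreamingDataset handles shuffling and worker sharding internally,
# so no sampler is needed; the default collate turns each batch into
# a dict of lists, e.g. batch['text'] is a list of strings.
loader = DataLoader(dataset, batch_size=32, num_workers=4)

for batch in loader:
    texts = batch['text']
    # tokenize / feed to the model here...
    break
```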

## 🔗 Related Resources

- Paper: https://arxiv.org/abs/2509.06888
- Training code (ModernBERT repo): https://github.com/AnswerDotAI/ModernBERT

## Citation

```bibtex
@misc{marone2025mmbertmodernmultilingualencoder,
      title={mmBERT: A Modern Multilingual Encoder with Annealed Language Learning},
      author={Marc Marone and Orion Weller and William Fleshman and Eugene Yang and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2509.06888},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.06888},
}
```