
# mmBERT Mid-training Data
Phase 2 of 3: High-quality mid-training data mixture (600B tokens) with context extension to 8192 tokens.
This dataset contains the mid-training phase data used to train all mmBERT encoder models. This phase focuses on higher quality data sources and extends the context length from 1024 to 8192 tokens. The data is provided in MDS format ready for use with Composer and the ModernBERT training repository.
## Data Composition
| Data Source | Tokens (B) | Percentage | Description |
|---|---|---|---|
| FineWeb2 | 506.7 | 84.3% | High-quality multilingual web crawl data |
| DCLM (Dolmino) | 40.0 | 6.7% | Filtered high-quality English web data |
| Starcoder | 17.2 | 2.9% | Code repositories and files |
| Arxiv | 5.4 | 0.9% | Academic preprints |
| Dolmino Math | 4.3 | 0.7% | Mathematical content |
| Books | 3.9 | 0.7% | Literature and reference books |
| PeS2o | 3.2 | 0.5% | Scientific papers |
| Tulu Flan | 3.1 | 0.5% | Instruction-following data |
| StackExchange | 3.0 | 0.5% | Q&A forums |
| StackExchange (Dolmino) | 2.8 | 0.5% | Curated Q&A content |
| Wikipedia (MegaWika) | 1.2 | 0.2% | Encyclopedia articles |
| **Total** | **600.8** | **100.0%** | High-quality data for context extension |
## Language Coverage
This phase covers 110 languages plus code, sampled with inverse temperature sampling at τ=0.5 (a sketch of the sampling computation follows the list below). It expands from the initial 60 languages to include:
- Additional mid-resource languages: Uzbek, Bosnian, Catalan, Albanian, and 46 others
- Enhanced quality: Uses filtered FineWeb2-HQ and higher quality DCLM
- Longer contexts: Optimized for 8192 token sequences
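To make the effect of inverse temperature sampling at τ=0.5 concrete, the sketch below computes sampling weights under one common formulation, where each language's probability is proportional to its natural share of tokens raised to the power τ. The language codes and token counts are hypothetical values chosen for illustration, not the actual mmBERT mixture statistics.

```python
# Hypothetical per-language token counts (NOT the real mmBERT statistics),
# used only to illustrate temperature-based sampling with tau = 0.5.
token_counts = {"en": 500e9, "de": 60e9, "uz": 2e9, "bs": 1e9}

tau = 0.5  # lower tau flattens the distribution, upsampling low-resource languages

# p_i proportional to (n_i / sum_j n_j) ** tau, then renormalized
total = sum(token_counts.values())
raw = {lang: (n / total) ** tau for lang, n in token_counts.items()}
norm = sum(raw.values())
sampling_probs = {lang: w / norm for lang, w in raw.items()}

for lang, p in sampling_probs.items():
    print(f"{lang}: natural share {token_counts[lang] / total:.4f} -> sampled {p:.4f}")
```

With τ=0.5 the smallest languages in this toy example receive a noticeably larger sampling probability than their raw token share, which is the intended effect of the temperature.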
## Key Features
- Context Extension: RoPE base frequency adjusted to 160k for 8192-token support (see the sketch after this list)
- Quality Upgrade: Switches to filtered, higher-quality versions of datasets
- Reduced Masking: Mask rate lowered to 15% (from 30% in pre-training)
- Language Expansion: Adds 50 new languages while maintaining data quality
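The RoPE base controls how quickly the rotary frequencies decay across head dimensions, and raising it is the standard way to let positional encodings distinguish longer contexts. The snippet below is a minimal sketch of computing rotary inverse frequencies with the 160k base mentioned above; the function name and head dimension are illustrative placeholders, not values taken from the mmBERT code.

```python
import torch

def rope_inverse_frequencies(head_dim: int = 64, base: float = 160_000.0) -> torch.Tensor:
    """Standard RoPE inverse frequencies; base=160k is the value cited above for 8192-token support."""
    # One frequency per pair of dimensions: base ** (-2i / d)
    return 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))

# Rotation angles for every (position, frequency) pair up to the extended context length
positions = torch.arange(8192, dtype=torch.float32)
angles = torch.outer(positions, rope_inverse_frequencies())  # shape: (8192, head_dim // 2)
print(angles.shape)
```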
## Usage
For mid-training, see the ModernBERT repo: https://github.com/AnswerDotAI/ModernBERT
### Direct Access
```python
from streaming import StreamingDataset

# Load the streaming dataset (MDS shards are cached locally as they are fetched)
dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/mmbert-midtraining',
    local='/tmp/mmbert-midtraining-data',
    shuffle=True,
)

# Access samples
for sample in dataset:
    text = sample['text']
    # Process your data...
```
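Because `StreamingDataset` behaves as a PyTorch iterable dataset, the `dataset` object from the snippet above can also be wrapped in a standard `DataLoader` for batched iteration. The batch size below is an arbitrary example value, not a setting from the mmBERT training configuration.

```python
from torch.utils.data import DataLoader

# Wrap the streaming dataset in a plain PyTorch DataLoader; batch_size is illustrative.
loader = DataLoader(dataset, batch_size=8)

for batch in loader:
    # Each batch is a dict keyed by the MDS column names (e.g. 'text' -> list of strings).
    print(len(batch['text']))
    break
```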
## Related Resources
- Models: mmBERT Model Suite
- Phase 1: Pre-training Data (2.3T tokens)
- Phase 3: Decay Phase Data (100B tokens)
- Checkpoints: Training Checkpoints
- Paper: arXiv:2509.06888
- Code: GitHub Repository
## Citation
```bibtex
@misc{marone2025mmbertmodernmultilingualencoder,
      title={mmBERT: A Modern Multilingual Encoder with Annealed Language Learning},
      author={Marc Marone and Orion Weller and William Fleshman and Eugene Yang and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2509.06888},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.06888},
}
```
