# Qwen3-Inspired Pre-training Dataset

## Overview

This dataset is a curated mixture of high-quality text data for large language model pre-training, inspired by the Qwen3 methodology. It includes both training and validation splits.
## Dataset Statistics
Total Size: 10.42 billion tokens
- Training Split: 9.89 billion tokens (94.9%)
- Validation Split: 0.53 billion tokens (5.1%)
### Data Sources (Combined)
- dclm_baseline: 5.06B tokens (48.56%) - 4,088,916 documents
- the_stack: 1.65B tokens (15.79%) - 383,490 documents
- common_corpus: 1.50B tokens (14.36%) - 381,841 documents
- mini_pile: 1.43B tokens (13.73%) - 999,858 documents
- math_pile: 0.79B tokens (7.55%) - 72,936 documents
### Training Split Statistics
- dclm_baseline: 4.81B tokens (48.61%) - 3,884,088 documents
- the_stack: 1.58B tokens (15.97%) - 363,502 documents
- common_corpus: 1.42B tokens (14.37%) - 361,913 documents
- mini_pile: 1.36B tokens (13.78%) - 949,859 documents
- math_pile: 0.72B tokens (7.26%) - 68,947 documents
### Validation Split Statistics
- dclm_baseline: 0.25B tokens (47.69%) - 204,828 documents
- common_corpus: 0.08B tokens (14.22%) - 19,928 documents
- math_pile: 0.07B tokens (12.89%) - 3,989 documents
- mini_pile: 0.07B tokens (12.86%) - 49,999 documents
- the_stack: 0.07B tokens (12.33%) - 19,988 documents
## Data Processing Pipeline
- Data Collection: Sourced from multiple high-quality datasets
- Standardization: All data transformed to a consistent format with `text`, `info`, and `source_data` fields (see the standardization sketch after this list)
- Train/Validation Split: Created 95%/5% splits within each source dataset
- Exact Deduplication: Removed identical documents within each split
- Near Deduplication: Removed near-duplicates using MinHashLSH with a Jaccard similarity threshold of 0.85 (see the deduplication sketch after this list)
- Quality Filtering: Applied content-based filtering during processing
- Shuffling: Shuffled documents within each large shard for better data distribution
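
As a rough sketch of the standardization step, each raw record can be mapped into the unified three-field schema. The raw field name (`content`) and the JSON serialization of the remaining metadata are illustrative assumptions, not the dataset's actual processing code:

```python
import json

def standardize(example: dict, source_name: str) -> dict:
    """Map a raw record into the unified schema (assumed raw field names)."""
    return {
        "text": example["content"],  # assumption: raw text lives in a "content" field
        "info": json.dumps({k: v for k, v in example.items() if k != "content"}),
        "source_data": source_name,  # e.g. "dclm_baseline"
    }
```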
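
The two deduplication steps can be sketched with the `datasketch` library as below. Only the 0.85 Jaccard threshold comes from the pipeline description above; the shingling scheme (whitespace tokens), `num_perm=128`, and helper names are assumptions for illustration:

```python
import hashlib
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128  # assumed number of MinHash permutations

def make_minhash(text: str) -> MinHash:
    """MinHash over the set of whitespace tokens (illustrative shingling)."""
    m = MinHash(num_perm=NUM_PERM)
    for token in set(text.split()):
        m.update(token.encode("utf-8"))
    return m

def deduplicate(docs):
    """Yield documents surviving exact dedup and near dedup at Jaccard >= 0.85."""
    seen = set()                                         # exact-duplicate hashes
    lsh = MinHashLSH(threshold=0.85, num_perm=NUM_PERM)  # near-duplicate index
    for i, doc in enumerate(docs):
        digest = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # exact duplicate of a kept document
        seen.add(digest)
        mh = make_minhash(doc["text"])
        if lsh.query(mh):  # any kept document above the similarity threshold?
            continue       # near duplicate, drop it
        lsh.insert(f"doc-{i}", mh)
        yield doc
```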
## Data Format
Each example contains:
- `text`: The main text content
- `info`: Metadata from the original dataset (stored as a string)
- `source_data`: Source dataset identifier
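
For concreteness, a single example might look like the following sketch (the field values are invented for illustration):

```python
example = {
    "text": "The quick brown fox jumps over the lazy dog.",
    # Original-dataset metadata, serialized as a string (keys are invented):
    "info": '{"url": "https://example.com/page", "language": "en"}',
    "source_data": "dclm_baseline",
}
```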
## Splits
The dataset contains two splits:
- `train`: Training data (95% of each source dataset)
- `validation`: Validation data (5% of each source dataset)
## Tokenization
Token counts were computed using the Llama 3 tokenizer (`meta-llama/Meta-Llama-3-8B`).
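
As a minimal sketch, token counts like those above can be reproduced with the Hugging Face `transformers` tokenizer. The `meta-llama/Meta-Llama-3-8B` repository is gated, and whether special tokens were included in the official counts is an assumption here:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Llama 3 tokenizer (gated repo: the license must be accepted on the Hub).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

def count_tokens(text: str) -> int:
    """Token count for one document (assumes special tokens are excluded)."""
    return len(tokenizer.encode(text, add_special_tokens=False))

# Stream the smaller validation split and sum per-document token counts.
val = load_dataset("bluelightai-dev/qwen_clt_pretrain_data",
                   split="validation", streaming=True)
total_tokens = sum(count_tokens(ex["text"]) for ex in val)
```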
## Usage
```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("bluelightai-dev/qwen_clt_pretrain_data")

# Load specific splits
train_dataset = load_dataset("bluelightai-dev/qwen_clt_pretrain_data", split="train")
val_dataset = load_dataset("bluelightai-dev/qwen_clt_pretrain_data", split="validation")
```
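
Because each example carries a `source_data` field, the dataset can also be streamed and filtered to a single source. A sketch, assuming the identifiers match the source names in the statistics above:

```python
from datasets import load_dataset

# Stream the training split instead of downloading ~10B tokens up front.
stream = load_dataset("bluelightai-dev/qwen_clt_pretrain_data",
                      split="train", streaming=True)

# Keep only code documents (identifier assumed to be "the_stack").
code_only = stream.filter(lambda ex: ex["source_data"] == "the_stack")

for example in code_only.take(5):
    print(example["source_data"], example["text"][:120])
```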
## Dataset Sources
The dataset combines data from the following sources:
- DCLM Baseline: High-quality web text from DataComp-LM
- Common Corpus: Multilingual web text corpus
- The Stack: Deduplicated source code
- Mini Pile: Academic and reference texts
- Math Pile: Mathematical content and reasoning datasets
## License
Please refer to the individual source dataset licenses. This mixture is provided for research purposes.
## Citation
If you use this dataset, please cite the original source datasets and this work.