# Wikipedia Monthly
Last updated: August 02, 2025, 17:47 UTC
This repository provides monthly, multilingual dumps of Wikipedia, processed and prepared for easy use in NLP projects.
## Live Statistics

| Metric | Value |
|---|---|
| Languages Available | 29 |
| Total Articles | 402,139 |
| Total Size | 0.13 GB |
## Why Use This Dataset?
- Freshness: We run our pipeline monthly to capture the latest versions of all articles.
- Clean & Ready: We handle the messy parts of parsing MediaWiki markup. You get clean plain text ready for use.
- Easy Access: Load any language with a single line of code using the 🤗 `datasets` library.
## Usage

```python
from datasets import load_dataset

# Load the English dataset from the latest dump
dataset = load_dataset("omarkamali/wikipedia-monthly", "latest.en", split="train", streaming=True)
```
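In streaming mode the returned object is iterable, so you can inspect rows without downloading the full dump. A minimal sketch using the `take` method of 🤗 `datasets` streaming datasets (field names follow the Data Fields section below):

```python
# Peek at the first few articles without materializing the whole dataset
for article in dataset.take(3):
    print(article["title"], "->", article["url"])
```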
## Language Subsets
This dataset is organized into configurations, one for each language dump. The table below lists all available subsets. For new languages, the article count will show as "Processing..." until the first full run is complete.
| Language Code | Configuration Name | Articles |
|---|---|---|
| `ab` | `latest.ab` | 6.5K |
| `ace` | `latest.ace` | 26.2K |
| `ady` | `latest.ady` | 1.5K |
| `ak` | `latest.ak` | 1 |
| `alt` | `latest.alt` | 3.3K |
| `am` | `latest.am` | 42.4K |
| `ami` | `latest.ami` | 1.8K |
| `an` | `latest.an` | 79.7K |
| `ang` | `latest.ang` | 5.0K |
| `ann` | `latest.ann` | 488 |
| `anp` | `latest.anp` | 9.4K |
| `arc` | `latest.arc` | 2.0K |
| `ary` | `latest.ary` | 21.4K |
| `as` | `latest.as` | 19.4K |
| `atj` | `latest.atj` | 2.1K |
| `av` | `latest.av` | 3.7K |
| `avk` | `latest.avk` | 29.8K |
| `awa` | `latest.awa` | 3.7K |
| `ay` | `latest.ay` | 5.4K |
| `ban` | `latest.ban` | 63.1K |
| `bbc` | `latest.bbc` | 2.3K |
| `bcl` | `latest.bcl` | 20.9K |
| `bdr` | `latest.bdr` | 669 |
| `bh` | `latest.bh` | 8.9K |
| `bi` | `latest.bi` | 1.6K |
| `bjn` | `latest.bjn` | 11.4K |
| `blk` | `latest.blk` | 3.2K |
| `bm` | `latest.bm` | 1.3K |
| `bpy` | `latest.bpy` | 25.2K |
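Any configuration name from the table can be substituted into the Usage snippet above; for example, streaming the Acehnese subset:

```python
from datasets import load_dataset

# Stream the Acehnese (ace) subset from the latest dump
ace = load_dataset("omarkamali/wikipedia-monthly", "latest.ace", split="train", streaming=True)
```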
## Dataset Creation Process
Our pipeline is designed for transparency and robustness:
- Download: We fetch the latest `pages-articles.xml.bz2` dump for each language directly from the official Wikimedia dumps server.
- Filter: We stream the dump and process only main articles (namespace `0`), filtering out user pages, talk pages, and other metadata.
- Process: Using `mwparserfromhell`, we parse the MediaWiki syntax to extract clean, readable content (see the sketch after this list).
- Format: We generate a plain text representation by applying additional post-processing to the `mwparserfromhell` output.
- Upload: The resulting dataset is uploaded to the Hugging Face Hub with a configuration name corresponding to the dump date and language (e.g., `20250710.en`).
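The core of the Process step can be approximated with `mwparserfromhell` alone. The sketch below is an assumption about the general approach, not the exact pipeline; the actual Format step applies additional post-processing on top of `strip_code`:

```python
import mwparserfromhell  # pip install mwparserfromhell

def wikitext_to_plain(raw_mediawiki: str) -> str:
    """Parse MediaWiki markup and strip templates, links, and
    formatting down to plain text. The production pipeline applies
    further post-processing on top of this."""
    wikicode = mwparserfromhell.parse(raw_mediawiki)
    return wikicode.strip_code(normalize=True, collapse=True)

# Bold markup and wikilinks reduce to their visible text
print(wikitext_to_plain("'''Aceh''' is a [[province]] of [[Indonesia]]."))
# -> Aceh is a province of Indonesia.
```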
## Data Fields
Each row in the dataset corresponds to a single Wikipedia article and contains the following fields:
- `id`: The unique Wikipedia page ID (`string`).
- `url`: The URL to the live article (`string`).
- `title`: The title of the article (`string`).
- `text`: The clean, plain text content of the article (`string`).
- `raw_mediawiki`: The original, unprocessed MediaWiki source content (`string`).
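Since every row carries both the cleaned and the original representation, you can compare the two directly (a short sketch reusing the streamed `dataset` from the Usage section):

```python
# Contrast the cleaned text with its original MediaWiki source
row = next(iter(dataset))
print("plain text:", row["text"][:120])
print("raw markup:", row["raw_mediawiki"][:120])
```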
## Maintainer
Wikipedia Monthly is compiled, processed and published by Omar Kamali based on the official Wikipedia dumps.
## License
Wikipedia Monthly is built on top of the incredible work by the Wikimedia Foundation and the open-source community. All content retains the original CC-BY-SA-4.0 license.