# Dataset Card for LinguaCustodia/Clean-Wikipedia-English-Articles
Lingua Custodia is delighted to announce the release of the cleanest markdown extract of Wikipedia articles so far, a high-quality resource to train your LLMs!
## Dataset Summary
The Clean-Wikipedia-English-Articles dataset contains the full bodies of English Wikipedia articles, i.e. without appendices such as References, See also, Bibliography, etc.
It has been pointed out here that the Wikimedia Wikipedia dataset was missing parts of sentences, and one of the suggested solutions was to extract articles from the Wikipedia HTML dumps with the mwparserfromhtml library, as shown in this Notebook.
This was a major improvement, but other elements were still missing, such as titles, lists, tables, LaTeX formulas, superscripts and subscripts.
Some modules of the mwparserfromhtml library were modified and new code was written to remedy these issues; in addition, the HTML dump was read backwards to remove duplicates, keeping only the most recent revision of each article.
Only article bodies containing at least 2 sections and 50 tokens were retrieved, in order to exclude empty redirection pages and short drafts.
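For illustration, here is a minimal sketch of such a retention filter; the section count is approximated from Markdown headings and may differ from how the actual extraction pipeline counts sections.

```python
# Illustrative sketch of the retention criterion: keep an article only if its
# body has at least 2 sections and at least 50 tokens. The section count is
# approximated from Markdown headings; token_count is assumed to come from the
# Qwen/Qwen2.5-7B-Instruct tokenizer, as for the dataset's token_count field.
def keep_article(markdown_text: str, token_count: int) -> bool:
    n_sections = sum(1 for line in markdown_text.splitlines() if line.lstrip().startswith("#"))
    return n_sections >= 2 and token_count >= 50
```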
The dataset was built from the enwiki-NS0-20250220-ENTERPRISE-HTML.json.tar.gz HTML dump released on 20 February 2025; it contains a single train split and consists of 7B tokens.
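A minimal loading sketch with the Hugging Face `datasets` library is shown below (the split name `train` follows the description above):

```python
# Minimal loading sketch, assuming the `datasets` library is installed.
from datasets import load_dataset

ds = load_dataset("LinguaCustodia/Clean-Wikipedia-English-Articles", split="train")
print(ds[0]["title"], ds[0]["token_count"])
```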
## Dataset Structure
### Data Instances
Here is an example:

```python
{
  'infobox': '',
  'categories': ["Articles with short description", "Short description is different from Wikidata", ...],
  'text': '# Open-source artificial intelligence **Open-source artificial intelligence** is an AI system...',
  'id': 74399351,
  'token_count': 5278,
  'url': 'https://en.wikipedia.org/wiki/Open-source_artificial_intelligence',
  'title': 'Open-source artificial intelligence',
  'revdate': 2025-02-19T15:38:11,
  'entity': 'Q120785614'
}
```
### Data Fields
Each sample in the dataset includes the following fields:
`infobox` (str)
: Markdown-formatted text content of the infobox (empty string if the article has no infobox).

`categories` (list(str))
: List of categories linked to the article.

`text` (str)
: Markdown-formatted text content of the article without appendices.

`id` (int)
: ID of the article.

`token_count` (int)
: Number of tokens contained in the `text` field, computed with the Qwen/Qwen2.5-7B-Instruct tokenizer.

`title` (str)
: Title of the article.

`url` (str)
: URL of the article.

`revdate` (datetime)
: Revision date of the article.

`entity` (str)
: Wikidata QID linked to the article.
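As a rough check of the `token_count` field, the sketch below recomputes it for one sample with the Qwen/Qwen2.5-7B-Instruct tokenizer; the streaming mode and the exact tokenizer call are assumptions, so the counts may differ slightly from the stored values.

```python
# Hedged sketch: recompute token_count for one sample with the
# Qwen/Qwen2.5-7B-Instruct tokenizer, as described for the token_count field.
# Assumes the `datasets` and `transformers` libraries are installed.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("LinguaCustodia/Clean-Wikipedia-English-Articles", split="train", streaming=True)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

sample = next(iter(ds))
n_tokens = len(tokenizer(sample["text"])["input_ids"])
print(sample["title"], n_tokens, sample["token_count"])  # the two counts should roughly match
```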
## License
Copyright licensing information: https://dumps.wikimedia.org/legal.html
All original textual content is licensed under the GNU Free Documentation License (GFDL) and the Creative Commons Attribution-Share-Alike 3.0 License. Some text may be available only under the Creative Commons license; see their Terms of Use for details. Text written by some authors may be released under additional licenses or into the public domain.
## Citation
If you use this dataset in your research or projects, please cite it appropriately.
```bibtex
@misc{Clean-Wikipedia-English-Articles,
  title={Clean-Wikipedia-English-Articles},
  author={Foly, Sabine and Liu, Jingshu and Barthelemy, Jean-Gabriel and Caillaut, Gaëtan and Qader, Raheel and Nakhle, Mariam and Sadoune, Arezki},
  url={https://huggingface.co/datasets/LinguaCustodia/Clean-Wikipedia-English-Articles},
  year={2025}
}
```