---
license: cc-by-nc-sa-3.0
---

## 📚 Dataset Description

This dataset is a cleaned and structured version of the English Wikipedia XML dump, curated for NLP, machine learning, and large language model (LLM) training. Each page has been processed to include metadata such as namespace, page ID, title, timestamp, categories, extracted entities, concepts, and things, alongside the fully cleaned plain text. All markup, including templates, infoboxes, links, and references, has been stripped to produce high-quality text suitable for modeling.

Due to the large size of the full dataset (over 100 GB), we are uploading it in daily batches. If you're accessing it early, please check back regularly; new segments will continue to be added until the full corpus is available.
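
The snippet below is a minimal sketch of how the corpus could be loaded with the Hugging Face `datasets` library while the upload is still in progress. The repository ID, split name, and column names are assumptions based on the description above, so verify them against the actual files; streaming mode avoids downloading the full 100 GB+ dump up front.

```python
from datasets import load_dataset

# Stream the corpus so the full 100 GB+ dump is not downloaded up front.
# NOTE: the repository ID, split name, and column names below are assumptions
# based on the dataset description; check the actual schema before relying on them.
ds = load_dataset("gowthamgoli/wikimedia_dataset", split="train", streaming=True)

# Peek at the first record's metadata and a snippet of the cleaned text.
first = next(iter(ds))
for field in ("namespace", "page_id", "title", "timestamp", "categories"):
    print(field, "->", first.get(field))  # hypothetical field names
print(first.get("text", "")[:300])        # cleaned plain text
```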

## 🔧 Use Cases

- Training and fine-tuning LLMs (GPT, BERT-style, etc.)
- Semantic search, RAG pipelines, and document retrieval (see the sketch after this list)
- Entity linking and knowledge graph construction
- Educational use in NLP and AI courses
- Benchmarking text models on diverse, encyclopedic data
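
To make the retrieval use case concrete, here is a hedged sketch that chunks the cleaned article text into overlapping windows and embeds them with `sentence-transformers` for later indexing in a vector store. The embedding model, chunk sizes, and the `title`/`text` field names are illustrative assumptions, not part of the dataset itself.

```python
from itertools import islice

from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# Arbitrary, illustration-only parameters.
CHUNK_CHARS = 1000
OVERLAP = 200

def chunk(text: str, size: int = CHUNK_CHARS, overlap: int = OVERLAP) -> list[str]:
    """Split cleaned article text into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model works
ds = load_dataset("gowthamgoli/wikimedia_dataset", split="train", streaming=True)

for page in islice(ds, 10):                      # embed a small sample of pages
    pieces = chunk(page.get("text", ""))
    vectors = model.encode(pieces)               # one vector per chunk, ready for indexing
    print(page.get("title"), len(pieces), vectors.shape)
```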

## 📌 Source Acknowledgment

This dataset is derived from the English Wikipedia XML dump provided by the Wikimedia Foundation. All original content is freely available under the Creative Commons Attribution-ShareAlike License. We do not claim ownership of the original data—our work focuses solely on cleaning and enriching it for easier downstream use.