gowthamgoli committed
Commit 0d48981 · verified · Parent: a6de973

Update README.md

Files changed (1): README.md (+27, -3)
README.md CHANGED
@@ -1,3 +1,27 @@
- ---
- license: cc-by-nc-sa-3.0
- ---
+ ---
+ license: cc-by-nc-sa-3.0
+ ---
+
+
+ ### 📚 Dataset Description
+
+ This dataset is a cleaned and structured version of the English Wikipedia XML dump, curated for use in NLP, machine learning, and large language model (LLM) training. Each page has been processed to include metadata such as `namespace`, `page ID`, `title`, `timestamp`, `categories`, extracted `entities`, `concepts`, and `things`, alongside fully cleaned plain text. All markup (templates, infoboxes, links, and references) has been stripped to create high-quality text suitable for modeling.
+
+ Due to the large size of the full dataset (over 100 GB), we are uploading it in **daily batches**. If you're accessing this early, please check back regularly as new segments are continuously added until the full corpus is available.
+
+ #### 🔧 Use Cases
+
+ * Training and fine-tuning LLMs (GPT, BERT-style, etc.)
+ * Semantic search, RAG pipelines, and document retrieval
+ * Entity linking and knowledge graph construction
+ * Educational use in NLP and AI courses
+ * Benchmarking text models on diverse, encyclopedic data
+
+ #### 📌 Source Acknowledgment
+
+ This dataset is derived from the [English Wikipedia XML dump](https://dumps.wikimedia.org/enwiki/) provided by the Wikimedia Foundation. All original content is freely available under the [Creative Commons Attribution-ShareAlike License](https://creativecommons.org/licenses/by-sa/3.0/). We do not claim ownership of the original data; our work focuses solely on cleaning and enriching it for easier downstream use.
+
+ ---
+
+
+
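A minimal sketch of how such a dataset could be streamed with the Hugging Face `datasets` library, given its size and the batched upload schedule described in the README. The repository id and the column names (`title`, `categories`, `text`) are assumptions inferred from the field list above, not confirmed by the dataset card.

```python
# Minimal sketch: stream the corpus so the >100 GB dump is not downloaded up front.
from datasets import load_dataset

# Hypothetical repository id; replace with the actual dataset path on the Hub.
REPO_ID = "gowthamgoli/enwiki-cleaned"

wiki = load_dataset(REPO_ID, split="train", streaming=True)

# Column names ("title", "categories", "text") are assumed from the README's
# field list; check the dataset viewer for the real schema.
for page in wiki.take(3):
    print(page.get("title"), page.get("categories"))
    print(str(page.get("text", ""))[:200])
```

Streaming yields pages lazily, which suits a corpus that is still being uploaded in daily segments.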