---
language:
- zh
- yue
---

### Dataset Summary
Cantonese has long been a low-resource language in NLP. This dataset is a major step towards changing that.

To our knowledge, this is the **first large-scale, properly curated, and deduplicated web dataset built specifically for Cantonese**. It was created by filtering years of Common Crawl data with a Cantonese language detector, followed by MinHash-based deduplication.

The result is a high-quality collection of **226,273 unique documents** containing approximately **150 million words** (as of the 2020–2025 dumps processed so far). 🚀
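
To use the data, here is a minimal loading sketch with the 🤗 `datasets` library. The repo id below is a placeholder, and a C4-style `text` column is an assumption; substitute this dataset's actual path and schema:

```python
from datasets import load_dataset

# Placeholder repo id; substitute this dataset's actual path on the Hub.
ds = load_dataset("jed351/Cantonese-Common-Crawl", split="train", streaming=True)

# Stream a few documents without downloading the full dataset.
for doc in ds.take(3):
    print(doc["text"][:100])  # assumes a C4-style "text" field
```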

### Dataset Curation Process
* Downloaded and processed Common Crawl using [code](https://github.com/jedcheng/c4-dataset-script)
* The resultant traditional Chinese dataset can be found [here](https://huggingface.co/datasets/jed351/Traditional-Chinese-Common-Crawl-Filtered)
* Filtered for Cantonese text using [CantoneseDetect](https://github.com/CanCLID/cantonesedetect); the resultant dataset can be found [here](https://huggingface.co/datasets/jed351/Cantonese_Common_Crawl_Filtered) (see the filtering sketch below)
* Applied MinHash deduplication to yield the final web dataset, with Wikipedia and LIHKG excluded (see the deduplication sketch below)
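
A minimal sketch of the language-filtering step. The `is_cantonese` heuristic below is a hypothetical stand-in for CantoneseDetect, which (to our understanding) also relies on Cantonese-specific features but is considerably more refined; the marker set and threshold here are illustrative assumptions:

```python
# Hypothetical stand-in for CantoneseDetect: keep text with a high enough
# density of Cantonese-specific characters.
CANTONESE_MARKERS = set("嘅咗喺啲唔佢乜嘢哋睇")

def is_cantonese(text: str, threshold: float = 0.01) -> bool:
    if not text:
        return False
    hits = sum(ch in CANTONESE_MARKERS for ch in text)
    return hits / len(text) >= threshold

docs = [
    "我哋琴日去咗睇戲，套戲幾好睇。",        # Cantonese
    "我們昨天去看了電影，那部電影很好看。",  # Standard written Chinese
]
print([is_cantonese(d) for d in docs])  # [True, False]
```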
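
And a minimal sketch of the MinHash deduplication step, using the `datasketch` library with character n-gram shingles (word shingles suit unsegmented Chinese text poorly). The shingle size, permutation count, and Jaccard threshold are illustrative assumptions, not the parameters actually used for this dataset:

```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, n: int = 5, num_perm: int = 128) -> MinHash:
    """MinHash a document over its character n-grams."""
    m = MinHash(num_perm=num_perm)
    for i in range(max(1, len(text) - n + 1)):
        m.update(text[i:i + n].encode("utf-8"))
    return m

docs = ["香港係一個國際城市。", "香港係一個國際城市。", "廣東話好有趣。"]

# Keep a document only if no near-duplicate (estimated Jaccard >= 0.8)
# has been kept already.
lsh = MinHashLSH(threshold=0.8, num_perm=128)
unique_docs = []
for i, doc in enumerate(docs):
    m = minhash(doc)
    if not lsh.query(m):
        lsh.insert(str(i), m)
        unique_docs.append(doc)

print(len(unique_docs))  # 2
```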