---
language:
- zh
- yue
---
|
|
|
### Dataset Summary
|
Cantonese has been a low-resource language in NLP. This dataset is a major step towards changing that.
|
|
|
To our knowledge, this is the **first large-scale, properly curated, and deduplicated web dataset built specifically for Cantonese**. It was created by filtering years of Common Crawl data with a Cantonese language detector, followed by MinHash deduplication.
|
|
|
The result is a high-quality collection of **~250K unique documents** containing **~150 million words**. 🚀
|
|
|
### Dataset Curation Process
|
* Downloaded and processed Common Crawl using [this code](https://github.com/jedcheng/c4-dataset-script)

* The resulting traditional Chinese dataset can be found [here](https://huggingface.co/datasets/jed351/Traditional-Chinese-Common-Crawl-Filtered)

* Filtered Cantonese text using [CantoneseDetect](https://github.com/CanCLID/cantonesedetect); the resulting dataset can be found [here](https://huggingface.co/datasets/jed351/Cantonese_Common_Crawl_Filtered)

* Applied MinHash deduplication to yield the final web dataset (Wikipedia and LIHKG excluded); a minimal illustration of this step is sketched below
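
The exact deduplication code lives in the linked repository; the following is only a minimal sketch of the MinHash-LSH idea, assuming the `datasketch` library, with character shingles and an arbitrary similarity threshold chosen purely for illustration.

```python
# Illustrative MinHash-LSH near-duplicate filtering (NOT the actual pipeline).
# Assumes the `datasketch` library; shingle size and threshold are arbitrary.
from datasketch import MinHash, MinHashLSH

def minhash_of(text: str, num_perm: int = 128, shingle: int = 5) -> MinHash:
    """Build a MinHash signature from character shingles
    (suits Chinese text, which has no whitespace word boundaries)."""
    m = MinHash(num_perm=num_perm)
    for i in range(max(len(text) - shingle + 1, 1)):
        m.update(text[i:i + shingle].encode("utf-8"))
    return m

docs = {
    "doc1": "今日天氣真係好好，出去行下啦。",
    "doc2": "今日天氣真係好好，出去行下啦!",   # near-duplicate of doc1
    "doc3": "呢篇文章講緊完全唔同嘅嘢。",
}

lsh = MinHashLSH(threshold=0.8, num_perm=128)  # Jaccard threshold for "duplicate"
kept = []
for doc_id, text in docs.items():
    sig = minhash_of(text)
    if lsh.query(sig):      # a previously kept document is near-identical
        continue            # -> drop this one
    lsh.insert(doc_id, sig)
    kept.append(doc_id)

print(kept)  # doc2 is likely dropped as a near-duplicate of doc1
```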
|
|
|
### Dataset Files
|
[CantoneseDetect](https://github.com/CanCLID/cantonesedetect) has two modes: detecting text that is fully Cantonese, and detecting text where Cantonese appears only in quotes.
|
|
|
Both sets of data were deduplicated with MinHash and are present in this dataset.
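
For reference, the data can be loaded with the 🤗 `datasets` library. The snippet below is only a sketch: the repository id, file pattern, and column name are placeholders and should be replaced with the actual ones listed under this repository's files.

```python
# Minimal loading sketch using the 🤗 datasets library.
# NOTE: "jed351/<this-dataset>", the file pattern, and the "text" column are
# placeholders/assumptions; check the repository's file listing for the real names.
from datasets import load_dataset

ds = load_dataset(
    "jed351/<this-dataset>",             # placeholder repo id
    data_files={"train": "*.jsonl.gz"},  # placeholder file pattern
    split="train",
)

print(ds)
print(ds[0]["text"])  # assumes the documents live in a "text" column
```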
|
|
|
### Acknowledgement
|
Special thanks to [Eons Data Communications Limited](https://eons.cloud/) and [Votee AI](https://votee.ai/) for providing the compute resources used to download and process the Common Crawl dumps.