jed351 committed
Commit 87a8ba1 · verified · 1 Parent(s): ca0e835

Update README.md

Files changed (1): README.md (+5 -2)
README.md CHANGED
@@ -9,7 +9,7 @@ Cantonese has been a low-resource language in NLP. This dataset is a major step
 
 To our knowledge, this is the **first large-scale, properly curated, and deduplicated web dataset built specifically for Cantonese**. It was created by filtering years of Common Crawl data with a Cantonese language detector, followed by deduplication using MinHash.
 
-The result is a high-quality collection of **~250K unique documents** containing **~150 million words**. (as of now with the 2020~2025 dumps) 🚀
+The result is a high-quality collection of **~250K unique documents** containing **~150 million words**. 🚀
 
 ### Dataset Curation Process
 * Downloaded and processed Common Crawl using [code](https://github.com/jedcheng/c4-dataset-script)
@@ -20,4 +20,7 @@ The result is a high-quality collection of **~250K unique documents** containing
 
 ### Dataset Files
 [CantoneseDetect](https://github.com/CanCLID/cantonesedetect) has two modes: fully Cantonese text, or Cantonese text that is only present in quotes.
 
-Both sets of data were cleaned with MinHash and are present in the dataset.
+Both sets of data were cleaned with MinHash and are present in the dataset.
+
+### Acknowledgement
+Special thanks to [Eons Data Communications Limited](https://eons.cloud/) and [Votee AI](https://votee.ai/) for the compute resources to download and process the Common Crawl dumps.
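The two detection modes described above (fully Cantonese text vs. Cantonese appearing only inside quotes) can be illustrated with a toy heuristic. This is a hand-rolled sketch, not CantoneseDetect's actual implementation: the marker-character set, the threshold, and the 「…」 quote handling are all assumptions made for illustration.

```python
import re

# Characters common in written Cantonese but rare in Standard Written Chinese.
# An illustrative subset only, not CantoneseDetect's real feature set.
CANTO_CHARS = set("嘅咗唔喺啲冇佢哋嚟")

def is_cantonese(text: str, threshold: int = 1) -> bool:
    """Crude check: does the text contain enough Cantonese marker characters?"""
    return sum(ch in CANTO_CHARS for ch in text) >= threshold

def cantonese_only_in_quotes(text: str) -> bool:
    """True when Cantonese markers appear only inside 「…」 quotes."""
    quoted = "".join(re.findall(r"「([^」]*)」", text))
    unquoted = re.sub(r"「[^」]*」", "", text)
    return is_cantonese(quoted) and not is_cantonese(unquoted)

# Mode 1: the whole document reads as Cantonese.
assert is_cantonese("我哋聽日去睇戲")
# Mode 2: a Standard Chinese sentence quoting Cantonese speech.
assert cantonese_only_in_quotes("他說：「我唔知呀」然後離開了。")
```

A real detector weighs far more signals than a fixed character list, but the mode split maps naturally onto this quoted/unquoted decomposition.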
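The MinHash deduplication step mentioned above can be sketched in plain Python. This is a minimal from-scratch illustration of the technique, not the pipeline's actual code; the shingle size, permutation count, and salted-hash construction are arbitrary choices for the sketch (production pipelines typically use a library such as datasketch plus LSH banding).

```python
import hashlib

def shingles(text: str, k: int = 3) -> set:
    """Character k-grams; a common choice for CJK text where word splits are unreliable."""
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def minhash_signature(text: str, num_perm: int = 64) -> list:
    """Simulate num_perm hash permutations by salting a stable hash with the seed."""
    sig = []
    for seed in range(num_perm):
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(f"{seed}:{s}".encode(), digest_size=8).digest(), "big")
            for s in shingles(text)))
    return sig

def estimated_jaccard(sig_a: list, sig_b: list) -> float:
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Near-duplicate pages score high; unrelated pages score near zero.
a = minhash_signature("廣東話係一種好有特色嘅語言。" * 3)
b = minhash_signature("廣東話係一種好有特色嘅語言啦。" * 3)
c = minhash_signature("completely unrelated english boilerplate text")
assert estimated_jaccard(a, b) > estimated_jaccard(a, c)
```

Deduplication then drops any document whose estimated similarity to an earlier one exceeds a threshold; at web scale this comparison is made tractable with locality-sensitive hashing rather than all-pairs checks.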