We applied extensive data processing using our CURATE pipeline.

We first filter documents by their language content using [FastText](https://fasttext.cc/docs/en/language-identification.html): only documents with at least 50% of their characters identified as Catalan are kept. We then perform exact document deduplication. After this stage, we score each document with a tested set of 8 heuristic evaluators, inspired by other web-filtering pipelines as well as our own designs.

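The sketch below illustrates this first stage, i.e. the language filter and the exact deduplication. It is a minimal sketch rather than the actual CURATE code: the off-the-shelf `lid.176.bin` fastText model, the per-line aggregation of character counts, and the SHA-256 key used for deduplication are all illustrative assumptions.

```python
# Minimal sketch of the language filter and exact deduplication described above.
# Assumptions: the public fastText lid.176.bin model, per-line language
# predictions aggregated by character count, and SHA-256 as the dedup key.
import hashlib

import fasttext  # pip install fasttext

lid_model = fasttext.load_model("lid.176.bin")  # fastText language-ID model


def catalan_char_ratio(document: str) -> float:
    """Fraction of characters that sit on lines identified as Catalan."""
    lines = [line for line in document.splitlines() if line.strip()]
    if not lines:
        return 0.0
    catalan_chars = 0
    total_chars = 0
    for line in lines:
        (label,), _prob = lid_model.predict(line)
        total_chars += len(line)
        if label == "__label__ca":
            catalan_chars += len(line)
    return catalan_chars / total_chars


def curate_first_stage(documents):
    """Keep documents that are at least 50% Catalan and drop exact duplicates."""
    seen = set()
    for doc in documents:
        if catalan_char_ratio(doc) < 0.5:
            continue
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest in seen:  # exact document deduplication
            continue
        seen.add(digest)
        yield doc
```
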
The following pre-existing datasets were used (see the streaming example after the list):
- [`OSCAR-2301`](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301)
- [`OSCAR-2201`](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201)
- [`CaText`](https://zenodo.org/records/5483031)
- [`MaCoCu-ca 1.0`](http://hdl.handle.net/11356/1837)
- [`caWaC`](https://huggingface.co/datasets/cawac)
- [`Colossal OSCAR 1.0`](https://huggingface.co/datasets/oscar-corpus/colossal-oscar-1.0)
- [`mC4`](https://huggingface.co/datasets/mc4)

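For the corpora hosted on the Hugging Face Hub, the snippet below shows a hypothetical way to stream one of them with the `datasets` library. The `"ca"` configuration name, the `"text"` field, and the access requirements (OSCAR releases are gated and use a loading script) are assumptions to verify on each dataset card.

```python
# Hypothetical streaming access to one of the corpora listed above.
# OSCAR-2301 is gated and ships a loading script, so this may require
# `huggingface-cli login` and a `datasets` version that still supports
# trust_remote_code; the "ca" config and "text" field are assumptions.
from datasets import load_dataset

oscar_ca = load_dataset(
    "oscar-corpus/OSCAR-2301",
    "ca",                    # Catalan configuration (assumed name)
    split="train",
    streaming=True,          # iterate without downloading the full dump
    trust_remote_code=True,
)

for i, example in enumerate(oscar_ca):
    print(example["text"][:200])
    if i == 2:
        break
```
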
#### Who are the source language producers?

Apart from the pre-existing datasets, all of which come from [CommonCrawl](https://commoncrawl.org/) dumps, the following sources provided their data under Open Data Agreements:
- **Media Groups**
  - [`IB3`](https://ib3.org/)
  - [`Grup El Món`](https://grupmon.cat/)
  - [`Vilaweb`](https://www.vilaweb.cat/)
  - [`Nació Digital`](https://www.naciodigital.cat/)
  - [`ACN`](https://www.acn.cat/)
  - [`Racó Català`](https://www.racocatala.cat/)
  - [`Aquí Berguedà`](https://www.aquibergueda.cat/)
- **Academic & Book Repositories**
  - [`Tesis Doctorals en Xarxa`](https://www.tesisenred.net/)
  - [`Wikipedia`](https://ca.wikipedia.org/)
  - [`Project Gutenberg`](https://www.gutenberg.org/)
- **Government Institutions**
  - [`Valencian Parliament`](https://www.cortsvalencianes.es/)
  - [`Diari Oficial de la Generalitat Valenciana`](https://dogv.gva.es/)
  - [`Butlletí Oficial de la Universitat d'Alacant`](https://www.boua.ua.es/)

### Annotations
