versae committed
Commit 9b1ad67 · 1 Parent(s): 0e0595b

Update README.md

Files changed (1)
  1. README.md +6 -54
README.md CHANGED
@@ -38,17 +38,6 @@ task_ids:
 - [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
 - [Curation Rationale](#curation-rationale)
-- [Source Data](#source-data)
-- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
-- [Who are the source language producers?](#who-are-the-source-language-producers)
-- [Annotations](#annotations)
-- [Annotation process](#annotation-process)
-- [Who are the annotators?](#who-are-the-annotators)
-- [Personal and Sensitive Information](#personal-and-sensitive-information)
-- [Considerations for Using the Data](#considerations-for-using-the-data)
-- [Social Impact of Dataset](#social-impact-of-dataset)
-- [Discussion of Biases](#discussion-of-biases)
-- [Other Known Limitations](#other-known-limitations)
 - [Additional Information](#additional-information)
 - [Dataset Curators](#dataset-curators)
 - [Licensing Information](#licensing-information)
@@ -80,7 +69,7 @@ for config in ("random", "stepwise", "gaussian"):
     print(config, sample)
     break
 ```
-Alternatively, you can bypass the `datasets` library and quickly download (~1.5hrs, depending on connection) a specific config in the same order used to pre-train BERTIN models in a massive (~200GB) JSON-lines file:
+Alternatively, you can bypass the `datasets` library and quickly download (\~1.5hrs, depending on connection) a specific config in the same order used to pre-train BERTIN models in a massive (\~200GB) JSON-lines file:
 
 ```python
 import io
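
For context, the loop whose tail appears in this hunk is the card's `datasets`-based usage example. A minimal sketch of the full loop, assuming the repository id `bertin-project/mc4-es-sampled` (the id itself is not shown in this diff):

```python
from datasets import load_dataset

# Stream one sample from each perplexity-sampling config without
# materializing the full dataset on disk.
for config in ("random", "stepwise", "gaussian"):
    dataset = load_dataset(
        "bertin-project/mc4-es-sampled",  # assumed repository id
        config,
        split="train",
        streaming=True,
    )
    for sample in dataset:
        print(config, sample)
        break
```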
@@ -110,7 +99,8 @@ def main(config="stepwise"):
     with gzip.open(bio, "rt", encoding="utf8") as g:
         for line in g:
             json_line = json.loads(line.strip())
-            f.write(json.dumps(json_line) + "\n")
+            f.write(json.dumps(json_line) + "\
+")
 
 
 if __name__ == "__main__":
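
Only fragments of the card's direct-download script are visible across these two hunks. A hedged reconstruction of the surrounding function, with a hypothetical shard URL standing in for whatever the real script derives per config:

```python
import gzip
import io
import json

import requests


def main(config="stepwise"):
    # Hypothetical single-shard URL, for illustration only; the card's
    # script builds the real per-config list of shard URLs.
    url = (
        "https://huggingface.co/datasets/bertin-project/mc4-es-sampled"
        f"/resolve/main/{config}/train-shard-00000.json.gz"
    )
    with open(f"mc4-es-{config}.jsonl", "w", encoding="utf8") as f:
        # Download the gzipped JSON-lines shard and re-emit each document
        # as one line of the local output file.
        bio = io.BytesIO(requests.get(url).content)
        with gzip.open(bio, "rt", encoding="utf8") as g:
            for line in g:
                json_line = json.loads(line.strip())
                f.write(json.dumps(json_line) + "\n")


if __name__ == "__main__":
    main()
```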
@@ -159,51 +149,13 @@ The split `validation` is exactly the same as the original `mc4` dataset.
 
 ### Curation Rationale
 
-[More Information Needed]
-
-### Source Data
-
-#### Initial Data Collection and Normalization
-
-[More Information Needed]
-
-#### Who are the source language producers?
-
-[More Information Needed]
-
-### Annotations
-
-#### Annotation process
-
-[More Information Needed]
-
-#### Who are the annotators?
-
-[More Information Needed]
-
-### Personal and Sensitive Information
-
-[More Information Needed]
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-[More Information Needed]
-
-### Discussion of Biases
-
-[More Information Needed]
-
-### Other Known Limitations
-
-[More Information Needed]
+This dataset was built from the original [`mc4`](https://huggingface.co/datasets/mc4) by applying perplexity-sampling via [`mc4-sampling`](https://huggingface.co/datasets/bertin-project/mc4-sampling) for Spanish.
 
 ## Additional Information
 
 ### Dataset Curators
 
-[More Information Needed]
+Original data by [Common Crawl](https://commoncrawl.org/).
 
 ### Licensing Information
 
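The new Curation Rationale points at `mc4-sampling` as the sampling mechanism. A sketch of how that step might be invoked; the `sampling_method` keyword is an assumption inferred from this dataset's config names, not an interface documented in this diff:

```python
from datasets import load_dataset

# Assumed interface: the mc4-sampling loader is believed to expose the
# sampling strategy as a keyword matching this dataset's config names.
mc4_es = load_dataset(
    "bertin-project/mc4-sampling",
    "es",
    split="train",
    streaming=True,
    sampling_method="gaussian",  # assumption; also "random", "stepwise"
)
for sample in mc4_es:
    print(sample)
    break
```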
@@ -224,6 +176,6 @@ AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you
 
 ### Contributions
 
-Dataset contributed by [@versae](https://github.com/versae).
+Dataset contributed by [@versae](https://github.com/versae) for the BERTIN Project.
 
 Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding the original mC4 dataset.
 