Corpus Name: NLLB
Package: NLLB.am-en in Moses format
Website: http://opus.nlpl.eu/NLLB-v1.php
Release: v1
Release date: Mon Sep 4 01:07:48 EEST 2023
License: <a href="https://opendatacommons.org/licenses/by/1-0/">ODC-By</a>
Source: https://huggingface.co/datasets/allenai/nllb

This package is part of OPUS - the open collection of parallel corpora
OPUS Website: http://opus.nlpl.eu

Please cite the following papers: <ul><li>Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin and Angela Fan, <a href="https://arxiv.org/abs/1911.04944">CCMatrix: Mining Billions of High-Quality Parallel Sentences on the Web</a></li> <li>Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin, <a href="https://arxiv.org/abs/2010.11125">Beyond English-Centric Multilingual Machine Translation</a></li><li>NLLB Team et al., <a href="https://arxiv.org/abs/2207.04672">No Language Left Behind: Scaling Human-Centered Machine Translation</a>, arXiv:2207.04672, 2022.</li></ul> Please also acknowledge OPUS for the service provided here by citing <a href="https://www.aclweb.org/anthology/L12-1246/">Jörg Tiedemann, <i>Parallel Data, Tools and Interfaces in OPUS</i></a> (<a href="https://www.aclweb.org/anthology/L12-1246.bib">bib</a>, <a href="http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf">pdf</a>)

This dataset was created based on metadata for mined bitext released by Meta AI. It contains bitext for 148 English-centric and 1465 non-English-centric language pairs, mined using the stopes mining library and the LASER3 encoders (Heffernan et al., 2022). The complete dataset is ~450GB. This release is based on the data package released on Hugging Face through AllenAI. More information about the instances for each language pair in the original data can be found in the dataset_infos.json file. Data was filtered based on language identification, emoji-based filtering, and, for some high-resource languages, a language model. For more details on data filtering, please refer to Section 5.2 (NLLB Team et al., 2022). This release also includes data from CCMatrix for language pairs that are not updated in NLLB.
Mappings between the original NLLB language IDs and OPUS language IDs can be found in this table. The sentence alignments include LASER3 scores (see the XCES align files), language ID scores, source information, and the URLs from which the data was extracted (see the language XML files).

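For the Moses-format package above, the two sides of each pair live in parallel plain-text files that are aligned line by line. A minimal reading sketch; the file names NLLB.am-en.am and NLLB.am-en.en are assumed from the usual OPUS layout after extracting the downloaded archive:

```
# Minimal sketch of reading the Moses-format package: two plain-text files
# with one sentence per line, aligned line by line. The file names below
# are assumed from the usual OPUS naming, not confirmed by this package.
with open("NLLB.am-en.am", encoding="utf-8") as am, \
     open("NLLB.am-en.en", encoding="utf-8") as en:
    for amharic, english in zip(am, en):
        print(amharic.strip(), "|||", english.strip())
        break  # show only the first aligned pair
```
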
Additional information:

# Dataset Card for No Language Left Behind (NLLB-200)

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/2207.04672
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

This dataset was created based on [metadata](https://github.com/facebookresearch/fairseq/tree/nllb) for mined bitext released by Meta AI. It contains bitext for 148 English-centric and 1465 non-English-centric language pairs, mined using the stopes mining library and the LASER3 encoders (Heffernan et al., 2022). The complete dataset is ~450GB.

[CCMatrix](https://opus.nlpl.eu/CCMatrix.php) contains previous versions of the mined bitext.

#### How to use the data
There are two ways to access the data:
* Via the Hugging Face Python datasets library

For accessing a particular [language pair](https://huggingface.co/datasets/allenai/nllb/blob/main/nllb_lang_pairs.py):
```
from datasets import load_dataset
dataset = load_dataset("allenai/nllb", "ace_Latn-ban_Latn")
```

* Clone the git repo
```
git lfs install
git clone https://huggingface.co/datasets/allenai/nllb
```

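Because the full corpus is large (~450GB), it may help to stream a language pair rather than download it when only a sample is needed. A minimal sketch, assuming the loader exposes a `train` split (the data itself is not split) and supports streaming:

```
from datasets import load_dataset

# Stream the pair instead of downloading it in full. Depending on the
# datasets version, trust_remote_code=True may also be required.
stream = load_dataset("allenai/nllb", "ace_Latn-ban_Latn",
                      split="train", streaming=True)
for record in stream:
    print(record["laser_score"], record["translation"])
    break  # inspect just the first record
```
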
### Supported Tasks and Leaderboards

N/A

### Languages

Language pairs can be found [here](https://huggingface.co/datasets/allenai/nllb/blob/main/nllb_lang_pairs.py).

## Dataset Structure

The dataset contains gzipped, tab-delimited text files for each direction. Each text file contains lines with parallel sentences.


### Data Instances

The number of instances for each language pair can be found in the [dataset_infos.json](https://huggingface.co/datasets/allenai/nllb/blob/main/dataset_infos.json) file.

### Data Fields

Every instance for a language pair contains the following fields: 'translation' (containing the sentence pair), 'laser_score', 'source_sentence_lid', 'target_sentence_lid' (where 'lid' is the language classification probability), 'source_sentence_source', 'source_sentence_url', 'target_sentence_source', 'target_sentence_url'.

* Sentence in first language
* Sentence in second language
* LASER score
* Language ID score for first sentence
* Language ID score for second sentence
* First sentence source (see the [Source Data Table](https://huggingface.co/datasets/allenai/nllb#source-data))
* First sentence URL if the source is crawl-data/\*; _ otherwise
* Second sentence source
* Second sentence URL if the source is crawl-data/\*; _ otherwise

The lines are sorted by LASER3 score in decreasing order.

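For the gzipped, tab-delimited files described under Dataset Structure, a minimal reading sketch; the file name below is hypothetical, and the column order is assumed to follow the field list above:

```
import csv
import gzip

# Hypothetical file name for one direction; use the file shipped with the
# release you downloaded. Columns are assumed to follow the list above:
# sentence pair, LASER score, LID scores, sources, URLs.
with gzip.open("am-en.tsv.gz", "rt", encoding="utf-8", newline="") as f:
    reader = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for src, tgt, laser, src_lid, tgt_lid, src_source, src_url, tgt_source, tgt_url in reader:
        print(float(laser), src, tgt)
        break  # first (highest-scoring) line only
```
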
Example:
```
{'translation': {'ace_Latn': 'Gobnyan hana geupeukeucewa gata atawa geutinggai meunan mantong gata."',
                 'ban_Latn': 'Ida nenten jaga manggayang wiadin ngutang semeton."'},
 'laser_score': 1.2499876022338867,
 'source_sentence_lid': 1.0000100135803223,
 'target_sentence_lid': 0.9991400241851807,
 'source_sentence_source': 'paracrawl9_hieu',
 'source_sentence_url': '_',
 'target_sentence_source': 'crawl-data/CC-MAIN-2020-10/segments/1581875144165.4/wet/CC-MAIN-20200219153707-20200219183707-00232.warc.wet.gz',
 'target_sentence_url': 'https://alkitab.mobi/tb/Ula/31/6/\n'}
```

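Because the mining is noisy, it can be useful to keep only pairs above a LASER score threshold. A small sketch using the 'laser_score' field; the split name `train` is assumed, and 1.06 is an arbitrary illustrative threshold, not a recommended value:

```
from datasets import load_dataset

dataset = load_dataset("allenai/nllb", "ace_Latn-ban_Latn", split="train")

# Keep only higher-confidence pairs; 1.06 is an arbitrary example threshold.
filtered = dataset.filter(lambda ex: ex["laser_score"] >= 1.06)
print(len(dataset), "->", len(filtered))
```
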
### Data Splits

The data is not split. Given the noisy nature of the overall process, we recommend using the data only for training, and using other datasets like [Flores-200](https://github.com/facebookresearch/flores) for evaluation. The data includes some development and test sets from other datasets, such as xlsum. In addition, sourcing data from multiple web crawls is likely to produce incidental overlap with other test sets.


## Dataset Creation

### Curation Rationale

Data was filtered based on language identification, emoji-based filtering, and, for some high-resource languages, a language model. For more details on data filtering, please refer to Section 5.2 (NLLB Team et al., 2022).

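As a toy illustration of emoji-based filtering (not the actual NLLB pipeline, which is described in Section 5.2 of the paper), one might drop lines in which emoji make up a large share of the characters; the ranges and threshold below are assumptions for the sketch:

```
import re

# Rough emoji ranges; a real filter would use a fuller set of Unicode blocks.
EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def mostly_emoji(line, max_ratio=0.5):
    """Return True if more than max_ratio of the characters are emoji."""
    if not line.strip():
        return True
    return len(EMOJI.findall(line)) / len(line) > max_ratio

lines = ["ሰላም እንዴት ነህ?", "🔥🔥🔥🔥🔥 😂"]
kept = [l for l in lines if not mostly_emoji(l)]
print(kept)
```
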

### Source Data


#### Initial Data Collection and Normalization

Monolingual data was collected from the following sources:

| Name in data | Source |
|--------------|--------|
| afriberta | https://github.com/castorini/afriberta |
| americasnlp | https://github.com/AmericasNLP/americasnlp2021/ |
| bho_resources | https://github.com/shashwatup9k/bho-resources |
| crawl-data/* | WET files from https://commoncrawl.org/the-data/get-started/ |
| emcorpus | http://lepage-lab.ips.waseda.ac.jp/en/projects/meiteilon-manipuri-language-resources/ |
| fbseed20220317 | https://github.com/facebookresearch/flores/tree/main/nllb_seed |
| giossa_mono | https://github.com/sgongora27/giossa-gongora-guarani-2021 |
| iitguwahati | https://github.com/priyanshu2103/Sanskrit-Hindi-Machine-Translation/tree/main/parallel-corpus |
| indic | https://indicnlp.ai4bharat.org/corpora/ |
| lacunaner | https://github.com/masakhane-io/lacuna_pos_ner/tree/main/language_corpus |
| leipzig | Community corpora from https://wortschatz.uni-leipzig.de/en/download for each year available |
| lowresmt2020 | https://github.com/panlingua/loresmt-2020 |
| masakhanener | https://github.com/masakhane-io/masakhane-ner/tree/main/MasakhaNER2.0/data |
| nchlt | https://repo.sadilar.org/handle/20.500.12185/299 <br>https://repo.sadilar.org/handle/20.500.12185/302 <br>https://repo.sadilar.org/handle/20.500.12185/306 <br>https://repo.sadilar.org/handle/20.500.12185/308 <br>https://repo.sadilar.org/handle/20.500.12185/309 <br>https://repo.sadilar.org/handle/20.500.12185/312 <br>https://repo.sadilar.org/handle/20.500.12185/314 <br>https://repo.sadilar.org/handle/20.500.12185/315 <br>https://repo.sadilar.org/handle/20.500.12185/321 <br>https://repo.sadilar.org/handle/20.500.12185/325 <br>https://repo.sadilar.org/handle/20.500.12185/328 <br>https://repo.sadilar.org/handle/20.500.12185/330 <br>https://repo.sadilar.org/handle/20.500.12185/332 <br>https://repo.sadilar.org/handle/20.500.12185/334 <br>https://repo.sadilar.org/handle/20.500.12185/336 <br>https://repo.sadilar.org/handle/20.500.12185/337 <br>https://repo.sadilar.org/handle/20.500.12185/341 <br>https://repo.sadilar.org/handle/20.500.12185/343 <br>https://repo.sadilar.org/handle/20.500.12185/346 <br>https://repo.sadilar.org/handle/20.500.12185/348 <br>https://repo.sadilar.org/handle/20.500.12185/353 <br>https://repo.sadilar.org/handle/20.500.12185/355 <br>https://repo.sadilar.org/handle/20.500.12185/357 <br>https://repo.sadilar.org/handle/20.500.12185/359 <br>https://repo.sadilar.org/handle/20.500.12185/362 <br>https://repo.sadilar.org/handle/20.500.12185/364 |
| paracrawl-2022-* | https://data.statmt.org/paracrawl/monolingual/ |
| paracrawl9* | https://paracrawl.eu/moredata (the monolingual release) |
| pmi | https://data.statmt.org/pmindia/ |
| til | https://github.com/turkic-interlingua/til-mt/tree/master/til_corpus |
| w2c | https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0022-6133-9 |
| xlsum | https://github.com/csebuetnlp/xl-sum |

#### Who are the source language producers?

Text was collected from the web and various monolingual datasets, many of which are also web crawls. It may have been written by people, generated by templates, or in some cases be machine translation output.

### Annotations

#### Annotation process

Parallel sentences in the monolingual data were identified using LASER3 encoders (Heffernan et al., 2022).

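To make the mining step concrete, here is a heavily simplified sketch of matching sentences by nearest-neighbour search over sentence embeddings. It only stands in for the real pipeline, which uses the stopes library, LASER3 encoders, and a margin-based criterion rather than plain cosine similarity; the `embed` function below is a placeholder:

```
import numpy as np

def embed(sentences):
    # Placeholder: the real pipeline uses LASER3 sentence encoders.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(sentences), 1024))

src = embed(["Sentence in language A.", "Another sentence."])
tgt = embed(["Sentence in language B.", "Unrelated text."])

# Cosine similarity between every source/target pair.
src = src / np.linalg.norm(src, axis=1, keepdims=True)
tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
sim = src @ tgt.T

# For each source sentence, report its closest target candidate.
for i, j in enumerate(sim.argmax(axis=1)):
    print(f"source {i} -> target {j} (cosine {sim[i, j]:.3f})")
```
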
#### Who are the annotators?

The data was not human-annotated.

### Personal and Sensitive Information

Data may contain personally identifiable information, sensitive content, or toxic content that was publicly shared on the Internet.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset provides training data for machine learning systems for many languages that have few NLP resources available.

### Discussion of Biases

Biases in the data have not been specifically studied. However, as the original source of the data is the World Wide Web, it is likely that the data has biases similar to those prevalent on the Internet. The data may also exhibit biases introduced by language identification and data filtering techniques; lower-resource languages generally have lower accuracy.

### Other Known Limitations

Some of the translations are in fact machine translations. While some website machine translation tools are identifiable from the HTML source, these tools were not filtered out en masse because raw HTML was not available for some sources and CommonCrawl processing started from WET files.

## Additional Information

### Dataset Curators

The data was not curated.

### Licensing Information

The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this, you are also bound to the respective Terms of Use and License of the original source.


### Citation Information

Schwenk et al., CCMatrix: Mining Billions of High-Quality Parallel Sentences on the Web. ACL 2021, https://aclanthology.org/2021.acl-long.507/ <br>
Heffernan et al., Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages. arXiv https://arxiv.org/abs/2205.12654, 2022. <br>
NLLB Team et al., No Language Left Behind: Scaling Human-Centered Machine Translation. arXiv https://arxiv.org/abs/2207.04672, 2022.

### Contributions

We thank the NLLB Meta AI team for open-sourcing the metadata and instructions on how to use it, with special thanks to Bapi Akula, Pierre Andrews, Onur Çelebi, Sergey Edunov, Kenneth Heafield, Philipp Koehn, Alex Mourachko, Safiyyah Saleem, Holger Schwenk, and Guillaume Wenzek. We also thank the AllenNLP team at AI2 for hosting and releasing this data, including Akshita Bhagia (for engineering efforts to host the data and create the Hugging Face dataset) and Jesse Dodge (for organizing the connection).