dataset_id,yaml_metadata,markdown_content
allenai/c4,"{""pretty_name"": ""C4"", ""annotations_creators"": [""no-annotation""], ""language_creators"": [""found""], ""language"": [""af"", ""am"", ""ar"", ""az"", ""be"", ""bg"", ""bn"", ""ca"", ""ceb"", ""co"", ""cs"", ""cy"", ""da"", ""de"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fil"", ""fr"", ""fy"", ""ga"", ""gd"", ""gl"", ""gu"", ""ha"", ""haw"", ""he"", ""hi"", ""hmn"", ""ht"", ""hu"", ""hy"", ""id"", ""ig"", ""is"", ""it"", ""iw"", ""ja"", ""jv"", ""ka"", ""kk"", ""km"", ""kn"", ""ko"", ""ku"", ""ky"", ""la"", ""lb"", ""lo"", ""lt"", ""lv"", ""mg"", ""mi"", ""mk"", ""ml"", ""mn"", ""mr"", ""ms"", ""mt"", ""my"", ""ne"", ""nl"", ""no"", ""ny"", ""pa"", ""pl"", ""ps"", ""pt"", ""ro"", ""ru"", ""sd"", ""si"", ""sk"", ""sl"", ""sm"", ""sn"", ""so"", ""sq"", ""sr"", ""st"", ""su"", ""sv"", ""sw"", ""ta"", ""te"", ""tg"", ""th"", ""tr"", ""uk"", ""und"", ""ur"", ""uz"", ""vi"", ""xh"", ""yi"", ""yo"", ""zh"", ""zu""], ""language_bcp47"": [""bg-Latn"", ""el-Latn"", ""hi-Latn"", ""ja-Latn"", ""ru-Latn"", ""zh-Latn""], ""license"": [""odc-by""], ""multilinguality"": [""multilingual""], ""size_categories"": [""n<1K"", ""1K: ``
### Data Splits
For each configuration subset, the data is split into ""train"", ""validation"" and ""test"" sets, each containing the
following number of examples:
| | Train | Validation | Test |
|:-------------|--------:|-------------:|-------:|
| ace | 100 | 100 | 100 |
| af | 5000 | 1000 | 1000 |
| als | 100 | 100 | 100 |
| am | 100 | 100 | 100 |
| an | 1000 | 1000 | 1000 |
| ang | 100 | 100 | 100 |
| ar | 20000 | 10000 | 10000 |
| arc | 100 | 100 | 100 |
| arz | 100 | 100 | 100 |
| as | 100 | 100 | 100 |
| ast | 1000 | 1000 | 1000 |
| ay | 100 | 100 | 100 |
| az | 10000 | 1000 | 1000 |
| ba | 100 | 100 | 100 |
| bar | 100 | 100 | 100 |
| bat-smg | 100 | 100 | 100 |
| be | 15000 | 1000 | 1000 |
| be-x-old | 5000 | 1000 | 1000 |
| bg | 20000 | 10000 | 10000 |
| bh | 100 | 100 | 100 |
| bn | 10000 | 1000 | 1000 |
| bo | 100 | 100 | 100 |
| br | 1000 | 1000 | 1000 |
| bs | 15000 | 1000 | 1000 |
| ca | 20000 | 10000 | 10000 |
| cbk-zam | 100 | 100 | 100 |
| cdo | 100 | 100 | 100 |
| ce | 100 | 100 | 100 |
| ceb | 100 | 100 | 100 |
| ckb | 1000 | 1000 | 1000 |
| co | 100 | 100 | 100 |
| crh | 100 | 100 | 100 |
| cs | 20000 | 10000 | 10000 |
| csb | 100 | 100 | 100 |
| cv | 100 | 100 | 100 |
| cy | 10000 | 1000 | 1000 |
| da | 20000 | 10000 | 10000 |
| de | 20000 | 10000 | 10000 |
| diq | 100 | 100 | 100 |
| dv | 100 | 100 | 100 |
| el | 20000 | 10000 | 10000 |
| eml | 100 | 100 | 100 |
| en | 20000 | 10000 | 10000 |
| eo | 15000 | 10000 | 10000 |
| es | 20000 | 10000 | 10000 |
| et | 15000 | 10000 | 10000 |
| eu | 10000 | 10000 | 10000 |
| ext | 100 | 100 | 100 |
| fa | 20000 | 10000 | 10000 |
| fi | 20000 | 10000 | 10000 |
| fiu-vro | 100 | 100 | 100 |
| fo | 100 | 100 | 100 |
| fr | 20000 | 10000 | 10000 |
| frr | 100 | 100 | 100 |
| fur | 100 | 100 | 100 |
| fy | 1000 | 1000 | 1000 |
| ga | 1000 | 1000 | 1000 |
| gan | 100 | 100 | 100 |
| gd | 100 | 100 | 100 |
| gl | 15000 | 10000 | 10000 |
| gn | 100 | 100 | 100 |
| gu | 100 | 100 | 100 |
| hak | 100 | 100 | 100 |
| he | 20000 | 10000 | 10000 |
| hi | 5000 | 1000 | 1000 |
| hr | 20000 | 10000 | 10000 |
| hsb | 100 | 100 | 100 |
| hu | 20000 | 10000 | 10000 |
| hy | 15000 | 1000 | 1000 |
| ia | 100 | 100 | 100 |
| id | 20000 | 10000 | 10000 |
| ig | 100 | 100 | 100 |
| ilo | 100 | 100 | 100 |
| io | 100 | 100 | 100 |
| is | 1000 | 1000 | 1000 |
| it | 20000 | 10000 | 10000 |
| ja | 20000 | 10000 | 10000 |
| jbo | 100 | 100 | 100 |
| jv | 100 | 100 | 100 |
| ka | 10000 | 10000 | 10000 |
| kk | 1000 | 1000 | 1000 |
| km | 100 | 100 | 100 |
| kn | 100 | 100 | 100 |
| ko | 20000 | 10000 | 10000 |
| ksh | 100 | 100 | 100 |
| ku | 100 | 100 | 100 |
| ky | 100 | 100 | 100 |
| la | 5000 | 1000 | 1000 |
| lb | 5000 | 1000 | 1000 |
| li | 100 | 100 | 100 |
| lij | 100 | 100 | 100 |
| lmo | 100 | 100 | 100 |
| ln | 100 | 100 | 100 |
| lt | 10000 | 10000 | 10000 |
| lv | 10000 | 10000 | 10000 |
| map-bms | 100 | 100 | 100 |
| mg | 100 | 100 | 100 |
| mhr | 100 | 100 | 100 |
| mi | 100 | 100 | 100 |
| min | 100 | 100 | 100 |
| mk | 10000 | 1000 | 1000 |
| ml | 10000 | 1000 | 1000 |
| mn | 100 | 100 | 100 |
| mr | 5000 | 1000 | 1000 |
| ms | 20000 | 1000 | 1000 |
| mt | 100 | 100 | 100 |
| mwl | 100 | 100 | 100 |
| my | 100 | 100 | 100 |
| mzn | 100 | 100 | 100 |
| nap | 100 | 100 | 100 |
| nds | 100 | 100 | 100 |
| ne | 100 | 100 | 100 |
| nl | 20000 | 10000 | 10000 |
| nn | 20000 | 1000 | 1000 |
| no | 20000 | 10000 | 10000 |
| nov | 100 | 100 | 100 |
| oc | 100 | 100 | 100 |
| or | 100 | 100 | 100 |
| os | 100 | 100 | 100 |
| pa | 100 | 100 | 100 |
| pdc | 100 | 100 | 100 |
| pl | 20000 | 10000 | 10000 |
| pms | 100 | 100 | 100 |
| pnb | 100 | 100 | 100 |
| ps | 100 | 100 | 100 |
| pt | 20000 | 10000 | 10000 |
| qu | 100 | 100 | 100 |
| rm | 100 | 100 | 100 |
| ro | 20000 | 10000 | 10000 |
| ru | 20000 | 10000 | 10000 |
| rw | 100 | 100 | 100 |
| sa | 100 | 100 | 100 |
| sah | 100 | 100 | 100 |
| scn | 100 | 100 | 100 |
| sco | 100 | 100 | 100 |
| sd | 100 | 100 | 100 |
| sh | 20000 | 10000 | 10000 |
| si | 100 | 100 | 100 |
| simple | 20000 | 1000 | 1000 |
| sk | 20000 | 10000 | 10000 |
| sl | 15000 | 10000 | 10000 |
| so | 100 | 100 | 100 |
| sq | 5000 | 1000 | 1000 |
| sr | 20000 | 10000 | 10000 |
| su | 100 | 100 | 100 |
| sv | 20000 | 10000 | 10000 |
| sw | 1000 | 1000 | 1000 |
| szl | 100 | 100 | 100 |
| ta | 15000 | 1000 | 1000 |
| te | 1000 | 1000 | 1000 |
| tg | 100 | 100 | 100 |
| th | 20000 | 10000 | 10000 |
| tk | 100 | 100 | 100 |
| tl | 10000 | 1000 | 1000 |
| tr | 20000 | 10000 | 10000 |
| tt | 1000 | 1000 | 1000 |
| ug | 100 | 100 | 100 |
| uk | 20000 | 10000 | 10000 |
| ur | 20000 | 1000 | 1000 |
| uz | 1000 | 1000 | 1000 |
| vec | 100 | 100 | 100 |
| vep | 100 | 100 | 100 |
| vi | 20000 | 10000 | 10000 |
| vls | 100 | 100 | 100 |
| vo | 100 | 100 | 100 |
| wa | 100 | 100 | 100 |
| war | 100 | 100 | 100 |
| wuu | 100 | 100 | 100 |
| xmf | 100 | 100 | 100 |
| yi | 100 | 100 | 100 |
| yo | 100 | 100 | 100 |
| zea | 100 | 100 | 100 |
| zh | 20000 | 10000 | 10000 |
| zh-classical | 100 | 100 | 100 |
| zh-min-nan | 100 | 100 | 100 |
| zh-yue | 20000 | 10000 | 10000 |
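The sizes above fall into a handful of repeating tiers (100/100/100 for low-resource configurations up to 20000/10000/10000 for high-resource ones). As a quick orientation, here is a minimal plain-Python sketch of a few rows from the table (values copied verbatim from the table; no download needed):

```python
# Split sizes for a few sample configurations, taken directly from the
# table above. Most low-resource configurations use the 100/100/100 tier.
SPLIT_SIZES = {
    'en': {'train': 20000, 'validation': 10000, 'test': 10000},
    'af': {'train': 5000, 'validation': 1000, 'test': 1000},
    'ace': {'train': 100, 'validation': 100, 'test': 100},
}

def total_examples(config: str) -> int:
    # Total number of examples across the three splits of one configuration.
    return sum(SPLIT_SIZES[config].values())
```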
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
The original datasets covering 282 languages are associated with the following article:
```
@inproceedings{pan-etal-2017-cross,
title = ""Cross-lingual Name Tagging and Linking for 282 Languages"",
author = ""Pan, Xiaoman and
Zhang, Boliang and
May, Jonathan and
Nothman, Joel and
Knight, Kevin and
Ji, Heng"",
booktitle = ""Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)"",
month = jul,
year = ""2017"",
address = ""Vancouver, Canada"",
publisher = ""Association for Computational Linguistics"",
url = ""https://www.aclweb.org/anthology/P17-1178"",
doi = ""10.18653/v1/P17-1178"",
pages = ""1946--1958"",
abstract = ""The ambitious goal of this work is to develop a cross-lingual name tagging and linking framework for 282 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able to identify name mentions, assign a coarse-grained or fine-grained type to each mention, and link it to an English Knowledge Base (KB) if it is linkable. We achieve this goal by performing a series of new KB mining methods: generating {``}silver-standard{''} annotations by transferring annotations from English to other languages through cross-lingual links and KB properties, refining annotations through self-training and topic selection, deriving language-specific morphology features from anchor links, and mining word translation pairs from cross-lingual links. Both name tagging and linking results for 282 languages are promising on Wikipedia data and on-Wikipedia data."",
}
```
while the 176 languages supported in this version are associated with the following article
```
@inproceedings{rahimi-etal-2019-massively,
title = ""Massively Multilingual Transfer for {NER}"",
author = ""Rahimi, Afshin and
Li, Yuan and
Cohn, Trevor"",
booktitle = ""Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics"",
month = jul,
year = ""2019"",
address = ""Florence, Italy"",
publisher = ""Association for Computational Linguistics"",
url = ""https://www.aclweb.org/anthology/P19-1015"",
pages = ""151--164"",
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) and [@rabeehk](https://github.com/rabeehk) for adding this dataset."
HAERAE-HUB/KMMLU,"{""configs"": [{""config_name"": ""Accounting"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Accounting-train.csv""}, {""split"": ""dev"", ""path"": ""data/Accounting-dev.csv""}, {""split"": ""test"", ""path"": ""data/Accounting-test.csv""}]}, {""config_name"": ""Agricultural-Sciences"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Agricultural-Sciences-train.csv""}, {""split"": ""dev"", ""path"": ""data/Agricultural-Sciences-dev.csv""}, {""split"": ""test"", ""path"": ""data/Agricultural-Sciences-test.csv""}]}, {""config_name"": ""Aviation-Engineering-and-Maintenance"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Aviation-Engineering-and-Maintenance-train.csv""}, {""split"": ""dev"", ""path"": ""data/Aviation-Engineering-and-Maintenance-dev.csv""}, {""split"": ""test"", ""path"": ""data/Aviation-Engineering-and-Maintenance-test.csv""}]}, {""config_name"": ""Biology"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Biology-train.csv""}, {""split"": ""dev"", ""path"": ""data/Biology-dev.csv""}, {""split"": ""test"", ""path"": ""data/Biology-test.csv""}]}, {""config_name"": ""Chemical-Engineering"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Chemical-Engineering-train.csv""}, {""split"": ""dev"", ""path"": ""data/Chemical-Engineering-dev.csv""}, {""split"": ""test"", ""path"": ""data/Chemical-Engineering-test.csv""}]}, {""config_name"": ""Chemistry"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Chemistry-train.csv""}, {""split"": ""dev"", ""path"": ""data/Chemistry-dev.csv""}, {""split"": ""test"", ""path"": ""data/Chemistry-test.csv""}]}, {""config_name"": ""Civil-Engineering"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Civil-Engineering-train.csv""}, {""split"": ""dev"", ""path"": ""data/Civil-Engineering-dev.csv""}, {""split"": ""test"", ""path"": ""data/Civil-Engineering-test.csv""}]}, {""config_name"": ""Computer-Science"", ""data_files"": [{""split"": 
""train"", ""path"": ""data/Computer-Science-train.csv""}, {""split"": ""dev"", ""path"": ""data/Computer-Science-dev.csv""}, {""split"": ""test"", ""path"": ""data/Computer-Science-test.csv""}]}, {""config_name"": ""Construction"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Construction-train.csv""}, {""split"": ""dev"", ""path"": ""data/Construction-dev.csv""}, {""split"": ""test"", ""path"": ""data/Construction-test.csv""}]}, {""config_name"": ""Criminal-Law"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Criminal-Law-train.csv""}, {""split"": ""dev"", ""path"": ""data/Criminal-Law-dev.csv""}, {""split"": ""test"", ""path"": ""data/Criminal-Law-test.csv""}]}, {""config_name"": ""Ecology"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Ecology-train.csv""}, {""split"": ""dev"", ""path"": ""data/Ecology-dev.csv""}, {""split"": ""test"", ""path"": ""data/Ecology-test.csv""}]}, {""config_name"": ""Economics"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Economics-train.csv""}, {""split"": ""dev"", ""path"": ""data/Economics-dev.csv""}, {""split"": ""test"", ""path"": ""data/Economics-test.csv""}]}, {""config_name"": ""Education"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Education-train.csv""}, {""split"": ""dev"", ""path"": ""data/Education-dev.csv""}, {""split"": ""test"", ""path"": ""data/Education-test.csv""}]}, {""config_name"": ""Electrical-Engineering"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Electrical-Engineering-train.csv""}, {""split"": ""dev"", ""path"": ""data/Electrical-Engineering-dev.csv""}, {""split"": ""test"", ""path"": ""data/Electrical-Engineering-test.csv""}]}, {""config_name"": ""Electronics-Engineering"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Electronics-Engineering-train.csv""}, {""split"": ""dev"", ""path"": ""data/Electronics-Engineering-dev.csv""}, {""split"": ""test"", ""path"": ""data/Electronics-Engineering-test.csv""}]}, 
{""config_name"": ""Energy-Management"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Energy-Management-train.csv""}, {""split"": ""dev"", ""path"": ""data/Energy-Management-dev.csv""}, {""split"": ""test"", ""path"": ""data/Energy-Management-test.csv""}]}, {""config_name"": ""Environmental-Science"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Environmental-Science-train.csv""}, {""split"": ""dev"", ""path"": ""data/Environmental-Science-dev.csv""}, {""split"": ""test"", ""path"": ""data/Environmental-Science-test.csv""}]}, {""config_name"": ""Fashion"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Fashion-train.csv""}, {""split"": ""dev"", ""path"": ""data/Fashion-dev.csv""}, {""split"": ""test"", ""path"": ""data/Fashion-test.csv""}]}, {""config_name"": ""Food-Processing"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Food-Processing-train.csv""}, {""split"": ""dev"", ""path"": ""data/Food-Processing-dev.csv""}, {""split"": ""test"", ""path"": ""data/Food-Processing-test.csv""}]}, {""config_name"": ""Gas-Technology-and-Engineering"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Gas-Technology-and-Engineering-train.csv""}, {""split"": ""dev"", ""path"": ""data/Gas-Technology-and-Engineering-dev.csv""}, {""split"": ""test"", ""path"": ""data/Gas-Technology-and-Engineering-test.csv""}]}, {""config_name"": ""Geomatics"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Geomatics-train.csv""}, {""split"": ""dev"", ""path"": ""data/Geomatics-dev.csv""}, {""split"": ""test"", ""path"": ""data/Geomatics-test.csv""}]}, {""config_name"": ""Health"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Health-train.csv""}, {""split"": ""dev"", ""path"": ""data/Health-dev.csv""}, {""split"": ""test"", ""path"": ""data/Health-test.csv""}]}, {""config_name"": ""Industrial-Engineer"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Industrial-Engineer-train.csv""}, {""split"": ""dev"", ""path"": 
""data/Industrial-Engineer-dev.csv""}, {""split"": ""test"", ""path"": ""data/Industrial-Engineer-test.csv""}]}, {""config_name"": ""Information-Technology"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Information-Technology-train.csv""}, {""split"": ""dev"", ""path"": ""data/Information-Technology-dev.csv""}, {""split"": ""test"", ""path"": ""data/Information-Technology-test.csv""}]}, {""config_name"": ""Interior-Architecture-and-Design"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Interior-Architecture-and-Design-train.csv""}, {""split"": ""dev"", ""path"": ""data/Interior-Architecture-and-Design-dev.csv""}, {""split"": ""test"", ""path"": ""data/Interior-Architecture-and-Design-test.csv""}]}, {""config_name"": ""Law"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Law-train.csv""}, {""split"": ""dev"", ""path"": ""data/Law-dev.csv""}, {""split"": ""test"", ""path"": ""data/Law-test.csv""}]}, {""config_name"": ""Machine-Design-and-Manufacturing"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Machine-Design-and-Manufacturing-train.csv""}, {""split"": ""dev"", ""path"": ""data/Machine-Design-and-Manufacturing-dev.csv""}, {""split"": ""test"", ""path"": ""data/Machine-Design-and-Manufacturing-test.csv""}]}, {""config_name"": ""Management"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Management-train.csv""}, {""split"": ""dev"", ""path"": ""data/Management-dev.csv""}, {""split"": ""test"", ""path"": ""data/Management-test.csv""}]}, {""config_name"": ""Maritime-Engineering"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Maritime-Engineering-train.csv""}, {""split"": ""dev"", ""path"": ""data/Maritime-Engineering-dev.csv""}, {""split"": ""test"", ""path"": ""data/Maritime-Engineering-test.csv""}]}, {""config_name"": ""Marketing"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Marketing-train.csv""}, {""split"": ""dev"", ""path"": ""data/Marketing-dev.csv""}, {""split"": ""test"", 
""path"": ""data/Marketing-test.csv""}]}, {""config_name"": ""Materials-Engineering"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Materials-Engineering-train.csv""}, {""split"": ""dev"", ""path"": ""data/Materials-Engineering-dev.csv""}, {""split"": ""test"", ""path"": ""data/Materials-Engineering-test.csv""}]}, {""config_name"": ""Mechanical-Engineering"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Mechanical-Engineering-train.csv""}, {""split"": ""dev"", ""path"": ""data/Mechanical-Engineering-dev.csv""}, {""split"": ""test"", ""path"": ""data/Mechanical-Engineering-test.csv""}]}, {""config_name"": ""Nondestructive-Testing"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Nondestructive-Testing-train.csv""}, {""split"": ""dev"", ""path"": ""data/Nondestructive-Testing-dev.csv""}, {""split"": ""test"", ""path"": ""data/Nondestructive-Testing-test.csv""}]}, {""config_name"": ""Patent"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Patent-train.csv""}, {""split"": ""dev"", ""path"": ""data/Patent-dev.csv""}, {""split"": ""test"", ""path"": ""data/Patent-test.csv""}]}, {""config_name"": ""Political-Science-and-Sociology"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Political-Science-and-Sociology-train.csv""}, {""split"": ""dev"", ""path"": ""data/Political-Science-and-Sociology-dev.csv""}, {""split"": ""test"", ""path"": ""data/Political-Science-and-Sociology-test.csv""}]}, {""config_name"": ""Psychology"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Psychology-train.csv""}, {""split"": ""dev"", ""path"": ""data/Psychology-dev.csv""}, {""split"": ""test"", ""path"": ""data/Psychology-test.csv""}]}, {""config_name"": ""Public-Safety"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Public-Safety-train.csv""}, {""split"": ""dev"", ""path"": ""data/Public-Safety-dev.csv""}, {""split"": ""test"", ""path"": ""data/Public-Safety-test.csv""}]}, {""config_name"": 
""Railway-and-Automotive-Engineering"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Railway-and-Automotive-Engineering-train.csv""}, {""split"": ""dev"", ""path"": ""data/Railway-and-Automotive-Engineering-dev.csv""}, {""split"": ""test"", ""path"": ""data/Railway-and-Automotive-Engineering-test.csv""}]}, {""config_name"": ""Real-Estate"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Real-Estate-train.csv""}, {""split"": ""dev"", ""path"": ""data/Real-Estate-dev.csv""}, {""split"": ""test"", ""path"": ""data/Real-Estate-test.csv""}]}, {""config_name"": ""Refrigerating-Machinery"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Refrigerating-Machinery-train.csv""}, {""split"": ""dev"", ""path"": ""data/Refrigerating-Machinery-dev.csv""}, {""split"": ""test"", ""path"": ""data/Refrigerating-Machinery-test.csv""}]}, {""config_name"": ""Social-Welfare"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Social-Welfare-train.csv""}, {""split"": ""dev"", ""path"": ""data/Social-Welfare-dev.csv""}, {""split"": ""test"", ""path"": ""data/Social-Welfare-test.csv""}]}, {""config_name"": ""Taxation"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Taxation-train.csv""}, {""split"": ""dev"", ""path"": ""data/Taxation-dev.csv""}, {""split"": ""test"", ""path"": ""data/Taxation-test.csv""}]}, {""config_name"": ""Telecommunications-and-Wireless-Technology"", ""data_files"": [{""split"": ""train"", ""path"": ""data/Telecommunications-and-Wireless-Technology-train.csv""}, {""split"": ""dev"", ""path"": ""data/Telecommunications-and-Wireless-Technology-dev.csv""}, {""split"": ""test"", ""path"": ""data/Telecommunications-and-Wireless-Technology-test.csv""}]}, {""config_name"": ""Korean-History"", ""data_files"": [{""split"": ""train"", ""path"": ""data/korean-history-train.csv""}, {""split"": ""dev"", ""path"": ""data/korean-history-dev.csv""}, {""split"": ""test"", ""path"": ""data/korean-history-test.csv""}]}, {""config_name"": 
""Math"", ""data_files"": [{""split"": ""train"", ""path"": ""data/math-train.csv""}, {""split"": ""dev"", ""path"": ""data/math-dev.csv""}, {""split"": ""test"", ""path"": ""data/math-test.csv""}]}], ""task_categories"": [""multiple-choice""], ""language"": [""ko""], ""tags"": [""mmlu"", ""haerae""], ""size_categories"": [""10K
### Supported Tasks and Leaderboards
The dataset is generally used for Language Modeling.
### Languages
You can find the list of languages here: https://meta.wikimedia.org/wiki/List_of_Wikipedias
## Dataset Structure
### Data Instances
An example looks as follows:
```
{'id': '1',
'url': 'https://simple.wikipedia.org/wiki/April',
'title': 'April',
'text': 'April is the fourth month...'
}
```
### Data Fields
The data fields are the same among all configurations:
- `id` (`str`): ID of the article.
- `url` (`str`): URL of the article.
- `title` (`str`): Title of the article.
- `text` (`str`): Text content of the article.
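As a hedged illustration (built from the example instance above, not an actual download), each record is a flat mapping of these four fields, which makes it straightforward to flatten articles into a plain-text corpus:

```python
# Illustrative only: a record shaped like the example instance above,
# using the four fields documented on this card.
article = {
    'id': '1',
    'url': 'https://simple.wikipedia.org/wiki/April',
    'title': 'April',
    'text': 'April is the fourth month...',
}

def lm_line(record: dict) -> str:
    # One possible way to flatten an article into a single corpus entry
    # (title followed by body text) for language modeling.
    title = record['title']
    text = record['text']
    return f'{title}\n{text}'
```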
### Data Splits
All configurations contain a single `train` split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The dataset is built from the Wikipedia dumps: https://dumps.wikimedia.org
You can find the full list of languages and dates here: https://dumps.wikimedia.org/backup-index.html
The articles have been parsed using the [`mwparserfromhell`](https://mwparserfromhell.readthedocs.io) tool.
When uploading the data files for the 20231101 dump, we noticed that the Wikimedia Dumps website does not contain the dump of this date
for the ""bbc"", ""dga"", or ""zgh"" Wikipedias. We have reported the issue to the Wikimedia Phabricator: https://phabricator.wikimedia.org/T351761
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Copyright licensing information: https://dumps.wikimedia.org/legal.html
All original textual content is licensed under the [GNU Free Documentation License](https://www.gnu.org/licenses/fdl-1.3.html) (GFDL)
and the [Creative Commons Attribution-Share-Alike 3.0 License](https://creativecommons.org/licenses/by-sa/3.0/).
Some text may be available only under the Creative Commons license; see their [Terms of Use](https://foundation.wikimedia.org/wiki/Policy:Terms_of_Use) for details.
Text written by some authors may be released under additional licenses or into the public domain.
### Citation Information
```
@ONLINE{wikidump,
author = ""Wikimedia Foundation"",
title = ""Wikimedia Downloads"",
url = ""https://dumps.wikimedia.org""
}
```"
MBZUAI/Bactrian-X,"{""license"": ""cc-by-nc-4.0"", ""task_categories"": [""text-generation""], ""language"": [""af"", ""ar"", ""az"", ""bn"", ""cs"", ""de"", ""en"", ""es"", ""et"", ""fi"", ""fr"", ""gl"", ""gu"", ""he"", ""hi"", ""hr"", ""id"", ""it"", ""ja"", ""ka"", ""kk"", ""km"", ""ko"", ""lt"", ""lv"", ""mk"", ""ml"", ""mn"", ""mr"", ""my"", ""ne"", ""nl"", ""pl"", ""ps"", ""pt"", ""ro"", ""ru"", ""si"", ""sl"", ""sv"", ""sw"", ""ta"", ""te"", ""th"", ""tl"", ""tr"", ""uk"", ""ur"", ""vi"", ""xh"", ""zh""], ""tags"": [""instruction-finetuning"", ""multilingual""], ""pretty_name"": ""Bactrian-X""}","# Dataset Card for ""Bactrian-X""
## Table of Contents
- [Dataset Description](#a-dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#b-dataset-structure)
- [Data Fields](#data-fields)
- [Data Instances](#data-instances)
- [Data in 52 Languages](#data-in-52-languages)
- [Dataset Creation](#c-dataset-creation)
- [Considerations for Using the Data](#d-considerations-for-using-the-data)
- [Additional Information](#e-additional-information)
## A. Dataset Description
- **Homepage:** https://github.com/mbzuai-nlp/Bactrian-X
- **Repository:** https://huggingface.co/datasets/MBZUAI/Bactrian-X
- **Paper:** to be released soon
### Dataset Summary
The Bactrian-X dataset is a collection of 3.4M instruction-response pairs in 52 languages, obtained by translating 67K English instructions ([alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) + [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data)) into 51 languages using the Google Translate API. The translated instructions are then fed to ChatGPT (`gpt-3.5-turbo`) to obtain natural responses, resulting in 3.4M instruction-response pairs across 52 languages (52 languages x 67K instances = 3.4M instances).
### Languages
We follow the 52 languages of [mBART-50](https://arxiv.org/abs/2008.00401); details can be found [below](#data-in-52-languages).
## B. Dataset Structure
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform. Each of the 67K instructions is unique.
* `input`: optional context or input for the task. For example, when the instruction is ""Summarize the following article"", the input is the article. Around 40% of the examples have an input.
* `output`: the answer to the instruction as generated by `gpt-3.5-turbo`.
* `id`: the unique index for each data piece in a specific language. The same `id` across splits corresponds to the same original English data.
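Since the instructions derive from alpaca-52k, one common (Alpaca-style) way to render these fields into a single training prompt is sketched below. This template is an assumption for illustration; it is not necessarily the exact template the Bactrian-X authors used:

```python
# Assumed Alpaca-style templates, not confirmed by the card; the key
# behavior shown is branching on the optional `input` field.
PROMPT_WITH_INPUT = (
    'Below is an instruction that describes a task, paired with an input '
    'that provides further context. '
    'Write a response that appropriately completes the request.\n\n'
    '### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n'
)
PROMPT_NO_INPUT = (
    'Below is an instruction that describes a task. '
    'Write a response that appropriately completes the request.\n\n'
    '### Instruction:\n{instruction}\n\n### Response:\n'
)

def build_prompt(example: dict) -> str:
    # Use the input-bearing template only when `input` is non-empty
    # (around 40% of examples, per the field description above).
    if example.get('input'):
        return PROMPT_WITH_INPUT.format(**example)
    return PROMPT_NO_INPUT.format(instruction=example['instruction'])
```

The `output` field would then serve as the target completion during fine-tuning.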
### Data Instances
An example in `en` looks as follows.
```json
{
""instruction"": ""What is the main idea of this text?"",
""input"": ""The robotics industry has become one of the fastest growing sectors in today’s economy. Robotics technology is being applied in various industries and applications, such as manufacturing, healthcare, and defense."",
""output"": ""The main idea of the text is that the robotics industry is rapidly growing and its technology is being used in many different industries and applications."",
""id"": ""alpaca-107""
}
```
An example in `ar` looks as follows.
```json
{
""instruction"": ""قم بالوصول إلى إعدادات المتصفح الافتراضية على جهاز الكمبيوتر."",
""input"": """",
""output"": ""لا يمكن لي كمساعد افتراضي القيام بهذه المهمة كونها تحتاج إلى الوصول إلى الجهاز المادي. يرجى الرجوع إلى دليل المستخدم للكمبيوتر الخاص بك أو البحث على الإنترنت لمعرفة كيفية الوصول إلى إعدادات المتصفح الافتراضية على جهاز الكمبيوتر الخاص بك."",
""id"": ""alpaca-41997""
}
```
An example in `id` looks as follows.
```json
{
""instruction"": ""Berapa banyak angka penting yang harus digunakan untuk nilai 0,000187?"",
""input"": """",
""output"": ""Ada tiga angka penting dalam nilai 0,000187."",
""id"": ""alpaca-40382""
}
```
An example in `de` looks as follows.
```json
{
""instruction"": ""Komponieren Sie ein Musikstück mit einem Titel, der Trauer bezeichnet."",
""input"": """",
""output"": """"Schmerzendes Herz""\n\nDas Stück beginnt mit einer langsamen, traurigen Melodie, gespielt von einem Solo-Violoncello. Die Harmonien, die von einem Klavier begleitet werden, sind durcheinander und dissonant, um ein Gefühl von Schmerz und Trauer zu vermitteln.\n\nIm Laufe des Stücks treten andere Instrumente ein, wie Streicher, Flöte und Oboe, um die Melodie zu unterstützen und zu intensivieren. Der Rhythmus bleibt langsam und schwerfällig, um die Schwere des Schmerzes darzustellen.\n\nIn der Mitte des Stücks gibt es einen Höhepunkt, wo die Melodie und Harmonien sich verdichten und ein Gefühl von Überwältigung erzeugen. Der Solo-Violoncello spielt eine traurige und schnelle Phrase, die die Verzweiflung und Isolation des Protagonisten ausdrückt.\n\nDas Stück endet mit einer Wiederholung der ersten Melodie, gespielt von einem Solo-Klavier, während die anderen Instrumente auslaufen. Die Harmonien werden jedoch weicher und sanfter, um einen Hauch von Hoffnung zu vermitteln, dass der Schmerz langsam vergeht."",
""id"": ""alpaca-33619""
}
```
An example in `fr` looks as follows.
```json
{
""instruction"": ""Les Ceanothus ont-ils des racines peu profondes ?"",
""input"": """",
""output"": ""En tant qu'IA, je ne peux pas répondre avec une certitude absolue, mais voici des informations générales sur les racines de Ceanothus : \n\nLes Ceanothus ont tendance à avoir un système racinaire étendu et peu profond. Les racines peuvent s'étaler jusqu'à 4 à 6 fois la taille de la plante et s'étendre à environ 60 à 90 cm de profondeur. Cependant, cela peut varier en fonction du type de sol et de la variété de Ceanothus. Certaines variétés de Ceanothus pourraient avoir des racines plus profondes si les conditions s'y prêtent."",
""id"": ""dolly-10940""
}
```
### Data in 52 Languages
| No | Languages | Code | Train |
| ---|---------------- | ----- | ----- |
| 1 | [Afrikaans](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/af) | af_ZA | 67017 |
| 2 | [Arabic](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ar) | ar_AR | 67017 |
| 3 | [Azerbaijani](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/az) | az_AZ | 67017 |
| 4 | [Bengali](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/bn) | bn_IN | 67017 |
| 5 | [Czech](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/cs) | cs_CZ | 67017 |
| 6 | [German](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/de) | de_DE | 67017 |
| 7 | [English](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/en) | en_XX | 67017 |
| 8 | [Spanish](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/es) | es_XX | 67017 |
| 9 | [Estonian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/et) | et_EE | 67017 |
| 10 | [Persian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/fa) | fa_IR | 67017 |
| 11 | [Finnish](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/fi) | fi_FI | 67017 |
| 12 | [French](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/fr) | fr_XX | 67017 |
| 13 | [Galician](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/gl) | gl_ES | 67017 |
| 14 | [Gujarati](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/gu) | gu_IN | 67017 |
| 15 | [Hebrew](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/he) | he_IL | 67017 |
| 16 | [Hindi](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/hi) | hi_IN | 67017 |
| 17 | [Croatian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/hr) | hr_HR | 67017 |
| 18 | [Indonesian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/id) | id_ID | 67017 |
| 19 | [Italian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/it) | it_IT | 67017 |
| 20 | [Japanese](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ja) | ja_XX | 67017 |
| 21 | [Georgian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ka) | ka_GE | 67017 |
| 22 | [Kazakh](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/kk) | kk_KZ | 67017 |
| 23 | [Khmer](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/km) | km_KH | 67017 |
| 24 | [Korean](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ko) | ko_KR | 67017 |
| 25 | [Lithuanian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/lt) | lt_LT | 67017 |
| 26 | [Latvian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/lv) | lv_LV | 67017 |
| 27 | [Macedonian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/mk) | mk_MK | 67017 |
| 28 | [Malayalam](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ml) | ml_IN | 67017 |
| 29 | [Mongolian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/mn) | mn_MN | 67017 |
| 30 | [Marathi](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/mr) | mr_IN | 67017 |
| 31 | [Burmese](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/my) | my_MM | 67017 |
| 32 | [Nepali](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ne) | ne_NP | 67017 |
| 33 | [Dutch](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/nl) | nl_XX | 67017 |
| 34 | [Polish](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/pl) | pl_PL | 67017 |
| 35 | [Pashto](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ps) | ps_AF | 67017 |
| 36 | [Portuguese](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/pt) | pt_XX | 67017 |
| 37 | [Romanian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ro) | ro_RO | 67017 |
| 38 | [Russian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ru) | ru_RU | 67017 |
| 39 | [Sinhala](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/si) | si_LK | 67017 |
| 40 | [Slovene](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/sl) | sl_SI | 67017 |
| 41 | [Swedish](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/sv) | sv_SE | 67017 |
| 42 | [Swahili](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/sw) | sw_KE | 67017 |
| 43 | [Tamil](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ta) | ta_IN | 67017 |
| 44 | [Telugu](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/te) | te_IN | 67017 |
| 45 | [Thai](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/th) | th_TH | 67017 |
| 46 | [Tagalog](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/tl) | tl_XX | 67017 |
| 47 | [Turkish](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/tr) | tr_TR | 67017 |
| 48 | [Ukrainian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/uk) | uk_UA | 67017 |
| 49 | [Urdu](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ur) | ur_PK | 67017 |
| 50 | [Vietnamese](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/vi) | vi_VN | 67017 |
| 51 | [Xhosa](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/xh) | xh_ZA | 67017 |
| 52 | [Chinese](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/zh) | zh_CN | 67017 |
## C. Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) (52,002 + 15,015 = 67,017 instructions).
2. Instruction Translation: The instructions (and inputs) are translated into 51 languages using the Google Translation API (conducted in April 2023).
3. Output Generation: We generate outputs from `gpt-3.5-turbo` for each language (conducted in April 2023).
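Putting the three steps together, each example pairs a translated instruction with a `gpt-3.5-turbo` output. A minimal sketch of assembling one such record (the field names follow the dataset viewer; the exact `id` format is an assumption):

```python
def make_record(idx, lang_code, instruction, input_text, output):
    # "id" pairs the source instruction row with the language code,
    # e.g. "1.af_ZA" (format assumed for illustration)
    return {
        "instruction": instruction,
        "input": input_text,
        "output": output,
        "id": f"{idx}.{lang_code}",
    }

rec = make_record(1, "af_ZA", "Vertaal hierdie sin.", "", "Vertaling ...")
```

Each of the 52 configurations then holds 67,017 records of this shape.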
## D. Considerations for Using the Data
### Social Impact of Dataset
NLP for everyone: this dataset helps democratize cutting-edge instruction-following models across 52 languages. It also enables the first experiments with multilingual LoRA-based LLaMA models.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Other Known Limitations
The `Bactrian-X` data is generated by a language model (`gpt-3.5-turbo`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## E. Additional Information
### Dataset Curators
[Haonan Li](https://haonan-li.github.io/) and [Fajri Koto](http://www.fajrikoto.com)
### Licensing Information
The dataset is available under the [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode) license.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X: A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@haonan-li](https://github.com/haonan-li), [@fajri91](https://github.com/fajri91) for adding this dataset."
csebuetnlp/xlsum,"{""annotations_creators"": [""found""], ""language_creators"": [""found""], ""language"": [""am"", ""ar"", ""az"", ""bn"", ""my"", ""zh"", ""en"", ""fr"", ""gu"", ""ha"", ""hi"", ""ig"", ""id"", ""ja"", ""rn"", ""ko"", ""ky"", ""mr"", ""ne"", ""om"", ""ps"", ""fa"", ""pcm"", ""pt"", ""pa"", ""ru"", ""gd"", ""sr"", ""si"", ""so"", ""es"", ""sw"", ""ta"", ""te"", ""th"", ""ti"", ""tr"", ""uk"", ""ur"", ""uz"", ""vi"", ""cy"", ""yo""], ""license"": [""cc-by-nc-sa-4.0""], ""multilinguality"": [""multilingual""], ""size_categories"": [""1M \n Q: \n A: \n B: \n C: \n D: \n Answer: ```. We perform prediction by picking the answer within `[A, B, C, D]` that has the highest probability relatively to the others.
- **Few-shot in-context learning (translated examples)** ^
- Same as above, except the samples from the training set are translated to the target language so that the examples and evaluation data are in the same language. The training samples can be human or machine-translated.
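The prompt template above can be sketched as a small formatting helper (the exact prefixes, e.g. whether the passage carries its own label, are assumptions inferred from the card):

```python
def format_example(passage, question, options, answer_letter=None):
    # Template inferred from the card: passage, then Q/A/B/C/D lines,
    # then "Answer:" (left open for the evaluated example, filled for
    # the in-context demonstrations)
    lines = [passage, f"Q: {question}"]
    for letter, option in zip("ABCD", options):
        lines.append(f"{letter}: {option}")
    lines.append(f"Answer: {answer_letter}" if answer_letter else "Answer:")
    return "\n".join(lines)
```

Five filled-in demonstrations followed by one open example form the full five-shot prompt.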
#### With finetuning
- **English finetune & multilingual evaluation**
- The model is finetuned to the task using the English training set, probably with a sequence classification head. Then the model is evaluated in all the target languages individually. For results presented in the paper we used [the HuggingFace library](https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta#transformers.XLMRobertaForMultipleChoice).
- **English finetune & cross-lingual evaluation**
- Same as above, except the model is evaluated in a cross-lingual setting, where for each question, the passage & answers could be provided in a different language. For example, passage could be in language `x`, question in language `y`, and answers in language `z`.
- **Translate-train** ^
- For each target language, the model is individually finetuned on training samples that have been machine-translated from English to that language. Each model is then evaluated in the respective target language.
- **Translate-train-all**
- Similar to above, except here the model is trained on translated samples from all target languages at once. The single finetuned model is then evaluated on all target languages.
- **Translate-train-all & cross-lingual evaluation**
- Same as above, except the single finetuned model is evaluated in a cross-lingual setting, where for each question, the passage & answers could be provided in a different language.
- **Translate-test**
- The model is finetuned using the English training data; the evaluation set is then machine-translated to English and the model is evaluated in English.
- This setting is primarily a reflection of the quality of the machine translation system, but is useful for comparison to multilingual models.
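Across all these settings, prediction reduces to scoring the four candidate letters and taking the most probable one. A minimal sketch (how per-letter log-probabilities are obtained depends on the model API):

```python
import math

def pick_answer(letter_logprobs):
    # argmax over the candidate letters; the softmax is only to report
    # each letter's probability relative to the others
    best = max(letter_logprobs, key=letter_logprobs.get)
    total = sum(math.exp(lp) for lp in letter_logprobs.values())
    dist = {l: math.exp(lp) / total for l, lp in letter_logprobs.items()}
    return best, dist

best, dist = pick_answer({"A": -2.0, "B": -0.5, "C": -3.0, "D": -4.0})
```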
In addition, there are 83 additional languages in FLORES-200 for which questions were not translated for Belebele. Since the passages exist in those target languages, machine-translating the questions & answers may enable decent evaluation of machine reading comprehension in those languages.
## Training Set
As discussed in the paper, we also provide an assembled training set; the samples are available at the [github repo](https://github.com/facebookresearch/belebele).
The Belebele dataset is intended to be used only as a test set, and not for training or validation. Therefore, for models that require additional task-specific training, we instead propose using an assembled training set consisting of samples from pre-existing multiple-choice QA datasets in English. We considered diverse datasets, and determined the most compatible to be [RACE](https://www.cs.cmu.edu/~glai1/data/race/), [SciQ](https://allenai.org/data/sciq), [MultiRC](https://cogcomp.seas.upenn.edu/multirc/), [MCTest](https://mattr1.github.io/mctest/), [MCScript2.0](https://aclanthology.org/S19-1012/), and [ReClor](https://whyu.me/reclor/).
For each of the six datasets, we unpack and restructure the passages and questions from their respective formats. We then filter out less suitable samples (e.g. questions with multiple correct answers). In the end, the dataset comprises 67.5k training samples and 3.7k development samples, more than half of which are from RACE. We provide a script (`assemble_training_set.py`) to reconstruct this dataset for anyone to perform task finetuning.
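The unpack-and-filter step can be sketched as follows; the field names (`correct_flags`, `options`) are illustrative placeholders, not the actual schema used by `assemble_training_set.py`:

```python
def restructure(sample):
    # Unify a source sample into a Belebele-style 4-option format;
    # samples without exactly one correct answer among exactly four
    # options are filtered out, per the criterion described above.
    correct = [i for i, ok in enumerate(sample["correct_flags"]) if ok]
    if len(correct) != 1 or len(sample["options"]) != 4:
        return None  # dropped as a less suitable sample
    return {
        "passage": sample["passage"],
        "question": sample["question"],
        "mc_answers": sample["options"],
        "correct_answer_num": correct[0] + 1,  # 1-based, matching A-D
    }
```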
Since the training set is a joint sample of other datasets, it is governed by a different license. We do not claim any of that work or datasets to be our own. See the Licenses section in the README of https://github.com/facebookresearch/belebele .
## Languages in Belebele
FLORES-200 Code | English Name | Script | Family
---|---|---|---
acm_Arab | Mesopotamian Arabic | Arab | Afro-Asiatic
afr_Latn | Afrikaans | Latn | Germanic
als_Latn | Tosk Albanian | Latn | Paleo-Balkanic
amh_Ethi | Amharic | Ethi | Afro-Asiatic
apc_Arab | North Levantine Arabic | Arab | Afro-Asiatic
arb_Arab | Modern Standard Arabic | Arab | Afro-Asiatic
arb_Latn | Modern Standard Arabic (Romanized) | Latn | Afro-Asiatic
ars_Arab | Najdi Arabic | Arab | Afro-Asiatic
ary_Arab | Moroccan Arabic | Arab | Afro-Asiatic
arz_Arab | Egyptian Arabic | Arab | Afro-Asiatic
asm_Beng | Assamese | Beng | Indo-Aryan
azj_Latn | North Azerbaijani | Latn | Turkic
bam_Latn | Bambara | Latn | Mande
ben_Beng | Bengali | Beng | Indo-Aryan
ben_Latn | Bengali (Romanized) | Latn | Indo-Aryan
bod_Tibt | Standard Tibetan | Tibt | Sino-Tibetan
bul_Cyrl | Bulgarian | Cyrl | Balto-Slavic
cat_Latn | Catalan | Latn | Romance
ceb_Latn | Cebuano | Latn | Austronesian
ces_Latn | Czech | Latn | Balto-Slavic
ckb_Arab | Central Kurdish | Arab | Iranian
dan_Latn | Danish | Latn | Germanic
deu_Latn | German | Latn | Germanic
ell_Grek | Greek | Grek | Hellenic
eng_Latn | English | Latn | Germanic
est_Latn | Estonian | Latn | Uralic
eus_Latn | Basque | Latn | Basque
fin_Latn | Finnish | Latn | Uralic
fra_Latn | French | Latn | Romance
fuv_Latn | Nigerian Fulfulde | Latn | Atlantic-Congo
gaz_Latn | West Central Oromo | Latn | Afro-Asiatic
grn_Latn | Guarani | Latn | Tupian
guj_Gujr | Gujarati | Gujr | Indo-Aryan
hat_Latn | Haitian Creole | Latn | French Creole
hau_Latn | Hausa | Latn | Afro-Asiatic
heb_Hebr | Hebrew | Hebr | Afro-Asiatic
hin_Deva | Hindi | Deva | Indo-Aryan
hin_Latn | Hindi (Romanized) | Latn | Indo-Aryan
hrv_Latn | Croatian | Latn | Balto-Slavic
hun_Latn | Hungarian | Latn | Uralic
hye_Armn | Armenian | Armn | Armenian
ibo_Latn | Igbo | Latn | Atlantic-Congo
ilo_Latn | Ilocano | Latn | Austronesian
ind_Latn | Indonesian | Latn | Austronesian
isl_Latn | Icelandic | Latn | Germanic
ita_Latn | Italian | Latn | Romance
jav_Latn | Javanese | Latn | Austronesian
jpn_Jpan | Japanese | Jpan | Japonic
kac_Latn | Jingpho | Latn | Sino-Tibetan
kan_Knda | Kannada | Knda | Dravidian
kat_Geor | Georgian | Geor | Kartvelian
kaz_Cyrl | Kazakh | Cyrl | Turkic
kea_Latn | Kabuverdianu | Latn | Portuguese Creole
khk_Cyrl | Halh Mongolian | Cyrl | Mongolic
khm_Khmr | Khmer | Khmr | Austroasiatic
kin_Latn | Kinyarwanda | Latn | Atlantic-Congo
kir_Cyrl | Kyrgyz | Cyrl | Turkic
kor_Hang | Korean | Hang | Koreanic
lao_Laoo | Lao | Laoo | Kra-Dai
lin_Latn | Lingala | Latn | Atlantic-Congo
lit_Latn | Lithuanian | Latn | Balto-Slavic
lug_Latn | Ganda | Latn | Atlantic-Congo
luo_Latn | Luo | Latn | Nilo-Saharan
lvs_Latn | Standard Latvian | Latn | Balto-Slavic
mal_Mlym | Malayalam | Mlym | Dravidian
mar_Deva | Marathi | Deva | Indo-Aryan
mkd_Cyrl | Macedonian | Cyrl | Balto-Slavic
mlt_Latn | Maltese | Latn | Afro-Asiatic
mri_Latn | Maori | Latn | Austronesian
mya_Mymr | Burmese | Mymr | Sino-Tibetan
nld_Latn | Dutch | Latn | Germanic
nob_Latn | Norwegian Bokmål | Latn | Germanic
npi_Deva | Nepali | Deva | Indo-Aryan
npi_Latn | Nepali (Romanized) | Latn | Indo-Aryan
nso_Latn | Northern Sotho | Latn | Atlantic-Congo
nya_Latn | Nyanja | Latn | Atlantic-Congo
ory_Orya | Odia | Orya | Indo-Aryan
pan_Guru | Eastern Panjabi | Guru | Indo-Aryan
pbt_Arab | Southern Pashto | Arab | Iranian
pes_Arab | Western Persian | Arab | Iranian
plt_Latn | Plateau Malagasy | Latn | Austronesian
pol_Latn | Polish | Latn | Balto-Slavic
por_Latn | Portuguese | Latn | Romance
ron_Latn | Romanian | Latn | Romance
rus_Cyrl | Russian | Cyrl | Balto-Slavic
shn_Mymr | Shan | Mymr | Kra-Dai
sin_Latn | Sinhala (Romanized) | Latn | Indo-Aryan
sin_Sinh | Sinhala | Sinh | Indo-Aryan
slk_Latn | Slovak | Latn | Balto-Slavic
slv_Latn | Slovenian | Latn | Balto-Slavic
sna_Latn | Shona | Latn | Atlantic-Congo
snd_Arab | Sindhi | Arab | Indo-Aryan
som_Latn | Somali | Latn | Afro-Asiatic
sot_Latn | Southern Sotho | Latn | Atlantic-Congo
spa_Latn | Spanish | Latn | Romance
srp_Cyrl | Serbian | Cyrl | Balto-Slavic
ssw_Latn | Swati | Latn | Atlantic-Congo
sun_Latn | Sundanese | Latn | Austronesian
swe_Latn | Swedish | Latn | Germanic
swh_Latn | Swahili | Latn | Atlantic-Congo
tam_Taml | Tamil | Taml | Dravidian
tel_Telu | Telugu | Telu | Dravidian
tgk_Cyrl | Tajik | Cyrl | Iranian
tgl_Latn | Tagalog | Latn | Austronesian
tha_Thai | Thai | Thai | Kra-Dai
tir_Ethi | Tigrinya | Ethi | Afro-Asiatic
tsn_Latn | Tswana | Latn | Atlantic-Congo
tso_Latn | Tsonga | Latn | Atlantic-Congo
tur_Latn | Turkish | Latn | Turkic
ukr_Cyrl | Ukrainian | Cyrl | Balto-Slavic
urd_Arab | Urdu | Arab | Indo-Aryan
urd_Latn | Urdu (Romanized) | Latn | Indo-Aryan
uzn_Latn | Northern Uzbek | Latn | Turkic
vie_Latn | Vietnamese | Latn | Austroasiatic
war_Latn | Waray | Latn | Austronesian
wol_Latn | Wolof | Latn | Atlantic-Congo
xho_Latn | Xhosa | Latn | Atlantic-Congo
yor_Latn | Yoruba | Latn | Atlantic-Congo
zho_Hans | Chinese (Simplified) | Hans | Sino-Tibetan
zho_Hant | Chinese (Traditional) | Hant | Sino-Tibetan
zsm_Latn | Standard Malay | Latn | Austronesian
zul_Latn | Zulu | Latn | Atlantic-Congo"
mteb/sts17-crosslingual-sts,"{""language"": [""ar"", ""de"", ""en"", ""es"", ""fr"", ""it"", ""nl"", ""ko"", ""tr""], ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""test"", ""path"": ""test/*""}]}, {""config_name"": ""ko-ko"", ""data_files"": [{""split"": ""test"", ""path"": ""test/ko-ko.jsonl.gz""}]}, {""config_name"": ""ar-ar"", ""data_files"": [{""split"": ""test"", ""path"": ""test/ar-ar.jsonl.gz""}]}, {""config_name"": ""en-ar"", ""data_files"": [{""split"": ""test"", ""path"": ""test/en-ar.jsonl.gz""}]}, {""config_name"": ""en-de"", ""data_files"": [{""split"": ""test"", ""path"": ""test/en-de.jsonl.gz""}]}, {""config_name"": ""en-en"", ""data_files"": [{""split"": ""test"", ""path"": ""test/en-en.jsonl.gz""}]}, {""config_name"": ""en-tr"", ""data_files"": [{""split"": ""test"", ""path"": ""test/en-tr.jsonl.gz""}]}, {""config_name"": ""es-en"", ""data_files"": [{""split"": ""test"", ""path"": ""test/es-en.jsonl.gz""}]}, {""config_name"": ""es-es"", ""data_files"": [{""split"": ""test"", ""path"": ""test/es-es.jsonl.gz""}]}, {""config_name"": ""fr-en"", ""data_files"": [{""split"": ""test"", ""path"": ""test/fr-en.jsonl.gz""}]}, {""config_name"": ""it-en"", ""data_files"": [{""split"": ""test"", ""path"": ""test/it-en.jsonl.gz""}]}, {""config_name"": ""nl-en"", ""data_files"": [{""split"": ""test"", ""path"": ""test/nl-en.jsonl.gz""}]}]}",
skt/kobest_v1,"{""pretty_name"": ""KoBEST"", ""annotations_creators"": [""expert-generated""], ""language_creators"": [""expert-generated""], ""language"": [""ko""], ""license"": [""cc-by-sa-4.0""], ""multilinguality"": [""monolingual""], ""size_categories"": [""10K One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource languages, consider only restricted domains, or are low quality because they are constructed using semi-automatic procedures. In this work, we introduce the FLORES evaluation benchmark, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of different topics and domains. These sentences have been translated in 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond.
**Disclaimer**: *The Flores-101 dataset is hosted by Facebook and licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).*
### Supported Tasks and Leaderboards
#### Multilingual Machine Translation
Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html).
### Languages
The dataset contains parallel sentences for 101 languages, as mentioned in the original [Github](https://github.com/facebookresearch/flores/blob/master/README.md) page for the project. Languages are identified with the ISO 639-3 code (e.g. `eng`, `fra`, `rus`) as in the original dataset.
**New:** Use the configuration `all` to access the full set of parallel sentences for all the available languages in a single command.
## Dataset Structure
### Data Instances
A sample from the `dev` split for the Russian language (`rus` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.
```python
{
'id': 1,
'sentence': 'В понедельник ученые из Медицинской школы Стэнфордского университета объявили об изобретении нового диагностического инструмента, который может сортировать клетки по их типу; это маленький чип, который можно напечатать, используя стандартный струйный принтер примерно за 1 цент США.',
'URL': 'https://en.wikinews.org/wiki/Scientists_say_new_medical_diagnostic_chip_can_sort_cells_anywhere_with_an_inkjet',
'domain': 'wikinews',
'topic': 'health',
'has_image': 0,
'has_hyperlink': 0
}
```
The text is provided as in the original dataset, without further preprocessing or tokenization.
### Data Fields
- `id`: Row number for the data entry, starting at 1.
- `sentence`: The full sentence in the specific language.
- `URL`: The URL for the English article from which the sentence was extracted.
- `domain`: The domain of the sentence.
- `topic`: The topic of the sentence.
- `has_image`: Whether the original article contains an image.
- `has_hyperlink`: Whether the sentence contains a hyperlink.
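Because all configurations are aligned, rows sharing an `id` across two language configs are translations of each other. A minimal sketch of pairing them (plain dicts stand in for rows from two loaded configs):

```python
def align_translations(rows_a, rows_b):
    # FLORES rows sharing an `id` across configs are translations
    # of each other, so a dict keyed by `id` is enough to pair them
    by_id = {row["id"]: row["sentence"] for row in rows_b}
    return [
        (row["sentence"], by_id[row["id"]])
        for row in rows_a
        if row["id"] in by_id
    ]
```

The same pairing works for any two of the 101 language configurations, which is what enables many-to-many translation evaluation.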
### Data Splits
| config| `dev`| `devtest`|
|-----------------:|-----:|---------:|
|all configurations| 997| 1012|
### Dataset Creation
Please refer to the original article [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The original authors of FLORES-101 are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
### Licensing Information
Licensed with Creative Commons Attribution Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
Please cite the authors if you use these corpora in your work:
```bibtex
@inproceedings{flores101,
title={The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation},
author={Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco and Fan, Angela},
journal={arXiv preprint arXiv:2106.03193},
year={2021}
}
```"
klue/klue,"{""annotations_creators"": [""expert-generated""], ""language_creators"": [""expert-generated""], ""language"": [""ko""], ""license"": [""cc-by-sa-4.0""], ""multilinguality"": [""monolingual""], ""size_categories"": [""10K <강릉:LC> 방향 <문막휴게소:LC>에서 <만종분기점:LC>까지 <5㎞:QT> 구간에는 승용차 전용 임시 갓길차로제를 운영하기로 했다.'}
```
#### re
An example of 'train' looks as follows.
```
{'guid': 'klue-re-v1_train_00000',
'label': 0,
'object_entity': {'word': '조지 해리슨',
'start_idx': 13,
'end_idx': 18,
'type': 'PER'},
'sentence': '〈Something〉는 조지 해리슨이 쓰고 비틀즈가 1969년 앨범 《Abbey Road》에 담은 노래다.',
'source': 'wikipedia',
'subject_entity': {'word': '비틀즈',
'start_idx': 24,
'end_idx': 26,
'type': 'ORG'}}
```
#### dp
An example of 'train' looks as follows.
```
{'deprel': ['NP', 'NP_OBJ', 'VP', 'NP', 'NP_SBJ', 'NP', 'NP_MOD', 'NP_CNJ', 'NP_CNJ', 'NP', 'NP', 'NP_OBJ', 'AP', 'VP'],
'head': [2, 3, 14, 5, 14, 7, 10, 10, 10, 11, 12, 14, 14, 0],
'index': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
'lemma': ['해당', '그림 을', '보 면', '디즈니', '공주 들 이', '브리트니', '스피어스 의', '앨범 이나', '뮤직 비디오 ,', '화보', '속', '모습 을', '똑같이', '재연 하 였 다 .'],
'pos': ['NNG', 'NNG+JKO', 'VV+EC', 'NNP', 'NNG+XSN+JKS', 'NNP', 'NNP+JKG', 'NNG+JC', 'NNG+NNG+SP', 'NNG', 'NNG', 'NNG+JKO', 'MAG', 'NNG+XSA+EP+EF+SF'],
'sentence': '해당 그림을 보면 디즈니 공주들이 브리트니 스피어스의 앨범이나 뮤직비디오, 화보 속 모습을 똑같이 재연했다.',
'word_form': ['해당', '그림을', '보면', '디즈니', '공주들이', '브리트니', '스피어스의', '앨범이나', '뮤직비디오,', '화보', '속', '모습을', '똑같이', '재연했다.']}
```
#### mrc
An example of 'train' looks as follows.
```
{'answers': {'answer_start': [478, 478], 'text': ['한 달가량', '한 달']},
'context': '올여름 장마가 17일 제주도에서 시작됐다. 서울 등 중부지방은 예년보다 사나흘 정도 늦은 이달 말께 장마가 시작될 전망이다.17일 기상청에 따르면 제주도 남쪽 먼바다에 있는 장마전선의 영향으로 이날 제주도 산간 및 내륙지역에 호우주의보가 내려지면서 곳곳에 100㎜에 육박하는 많은 비가 내렸다. 제주의 장마는 평년보다 2~3일, 지난해보다는 하루 일찍 시작됐다. 장마는 고온다습한 북태평양 기단과 한랭 습윤한 오호츠크해 기단이 만나 형성되는 장마전선에서 내리는 비를 뜻한다.장마전선은 18일 제주도 먼 남쪽 해상으로 내려갔다가 20일께 다시 북상해 전남 남해안까지 영향을 줄 것으로 보인다. 이에 따라 20~21일 남부지방에도 예년보다 사흘 정도 장마가 일찍 찾아올 전망이다. 그러나 장마전선을 밀어올리는 북태평양 고기압 세력이 약해 서울 등 중부지방은 평년보다 사나흘가량 늦은 이달 말부터 장마가 시작될 것이라는 게 기상청의 설명이다. 장마전선은 이후 한 달가량 한반도 중남부를 오르내리며 곳곳에 비를 뿌릴 전망이다. 최근 30년간 평균치에 따르면 중부지방의 장마 시작일은 6월24~25일이었으며 장마기간은 32일, 강수일수는 17.2일이었다.기상청은 올해 장마기간의 평균 강수량이 350~400㎜로 평년과 비슷하거나 적을 것으로 내다봤다. 브라질 월드컵 한국과 러시아의 경기가 열리는 18일 오전 서울은 대체로 구름이 많이 끼지만 비는 오지 않을 것으로 예상돼 거리 응원에는 지장이 없을 전망이다.',
'guid': 'klue-mrc-v1_train_12759',
'is_impossible': False,
'news_category': '종합',
'question': '북태평양 기단과 오호츠크해 기단이 만나 국내에 머무르는 기간은?',
'question_type': 1,
'source': 'hankyung',
'title': '제주도 장마 시작 … 중부는 이달 말부터'}
```
#### wos
An example of 'train' looks as follows.
```
{'dialogue': [{'role': 'user',
'text': '쇼핑을 하려는데 서울 서쪽에 있을까요?',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽']},
{'role': 'sys',
'text': '서울 서쪽에 쇼핑이 가능한 곳이라면 노량진 수산물 도매시장이 있습니다.',
'state': []},
{'role': 'user',
'text': '오 네 거기 주소 좀 알려주세요.',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
{'role': 'sys', 'text': '노량진 수산물 도매시장의 주소는 서울 동작구 93806입니다.', 'state': []},
{'role': 'user',
'text': '알려주시는김에 연락처랑 평점도 좀 알려주세요.',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
{'role': 'sys', 'text': '그럼. 연락처는 6182006591이고 평점은 4점입니다.', 'state': []},
{'role': 'user',
'text': '와 감사합니다.',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
{'role': 'sys', 'text': '감사합니다.', 'state': []}],
'domains': ['관광'],
'guid': 'wos-v1_train_00001'}
```
### Data Fields
#### ynat
+ `guid`: a `string` feature
+ `title`: a `string` feature
+ `label`: a classification label, with possible values `IT과학`(0), `경제`(1), `사회`(2), `생활문화`(3), `세계`(4), `스포츠`(5), `정치`(6)
+ `url`: a `string` feature
+ `date`: a `string` feature
#### sts
+ `guid`: a `string` feature
+ `source`: a `string` feature
+ `sentence1`: a `string` feature
+ `sentence2`: a `string` feature
+ `labels`: a dictionary feature containing
+ `label`: a `float64` feature
+ `real-label`: a `float64` feature
+ `binary-label`: a classification label, with possible values `negative`(0), `positive`(1)
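The `binary-label` is derived from the real-valued similarity score. A sketch, assuming the conventional threshold of 3.0 (the exact threshold is not stated here):

```python
def binarize(real_label, threshold=3.0):
    # threshold of 3.0 is an assumption from common STS practice;
    # scores at or above it count as "positive" (1)
    return 1 if real_label >= threshold else 0
```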
#### nli
+ `guid`: a `string` feature
+ `source`: a `string` feature
+ `premise`: a `string` feature
+ `hypothesis`: a `string` feature
+ `label`: a classification label, with possible values `entailment`(0), `neutral`(1), `contradiction`(2)
#### ner
+ `sentence`: a `string` feature
+ `tokens`: a list of a `string` feature (tokenization is at character level)
+ `ner_tags`: a list of classification labels, with possible values including `B-DT`(0), `I-DT`(1),
`B-LC`(2), `I-LC`(3), `B-OG`(4), `I-OG`(5), `B-PS`(6), `I-PS`(7), `B-QT`(8), `I-QT`(9), `B-TI`(10),
`I-TI`(11), `O`(12)
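Since tokenization is at character level, entity surface forms can be recovered by merging `B-`/`I-` runs back into spans. A minimal sketch:

```python
def bio_to_spans(tokens, ner_tags):
    # merge character-level B-/I- tags back into (type, text) spans
    spans, buf, etype = [], [], None
    for tok, tag in zip(tokens, ner_tags):
        if tag.startswith("B-"):
            if buf:
                spans.append((etype, "".join(buf)))
            buf, etype = [tok], tag[2:]
        elif tag.startswith("I-") and buf:
            buf.append(tok)
        else:  # "O" (or a stray I- tag) closes any open span
            if buf:
                spans.append((etype, "".join(buf)))
            buf, etype = [], None
    if buf:
        spans.append((etype, "".join(buf)))
    return spans
```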
#### re
+ `guid`: a `string` feature
+ `sentence`: a `string` feature
+ `subject_entity`: a dictionary feature containing
+ `word`: a `string` feature
+ `start_idx`: a `int32` feature
+ `end_idx`: a `int32` feature
+ `type`: a `string` feature
+ `object_entity`: a dictionary feature containing
+ `word`: a `string` feature
+ `start_idx`: a `int32` feature
+ `end_idx`: a `int32` feature
+ `type`: a `string` feature
+ `label`: a classification label, with possible values including `no_relation`(0), `org:dissolved`(1),
`org:founded`(2), `org:place_of_headquarters`(3), `org:alternate_names`(4), `org:member_of`(5),
`org:members`(6), `org:political/religious_affiliation`(7), `org:product`(8), `org:founded_by`(9),`org:top_members/employees`(10),
`org:number_of_employees/members`(11), `per:date_of_birth`(12), `per:date_of_death`(13), `per:place_of_birth`(14),
`per:place_of_death`(15), `per:place_of_residence`(16), `per:origin`(17), `per:employee_of`(18),
`per:schools_attended`(19), `per:alternate_names`(20), `per:parents`(21), `per:children`(22),
`per:siblings`(23), `per:spouse`(24), `per:other_family`(25), `per:colleagues`(26), `per:product`(27),
`per:religion`(28), `per:title`(29)
+ `source`: a `string` feature
#### dp
+ `sentence`: a `string` feature
+ `index`: a list of `int32` feature
+ `word_form`: a list of `string` feature
+ `lemma`: a list of `string` feature
+ `pos`: a list of `string` feature
+ `head`: a list of `int32` feature
+ `deprel`: a list of `string` feature
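The parallel `index`/`head`/`deprel` lists encode one dependency edge per word, with `head == 0` marking the root. A sketch of unpacking them:

```python
def dependency_edges(index, head, deprel):
    # one (head, dependent, relation) triple per word; head is a 1-based
    # index into the same sentence, with 0 reserved for the root
    return [(h, i, rel) for i, h, rel in zip(index, head, deprel)]

def sentence_root(index, head):
    # the word whose head is 0 is the root of the sentence
    return next(i for i, h in zip(index, head) if h == 0)
```

In the example above, word 14 (`재연했다.`) has head 0 and is therefore the root.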
#### mrc
+ `title`: a `string` feature
+ `context`: a `string` feature
+ `news_category`: a `string` feature
+ `source`: a `string` feature
+ `guid`: a `string` feature
+ `is_impossible`: a `bool` feature
+ `question_type`: a `int32` feature
+ `question`: a `string` feature
+ `answers`: a dictionary feature containing
+ `answer_start`: a list of `int32` feature
+ `text`: a list of `string` feature
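Each `answer_start` is a character offset into `context`, so the annotated answer text can be recovered by slicing. A minimal sketch:

```python
def answer_spans(context, answers):
    # slice each annotated span out of the context; with correct offsets,
    # the slices should equal the annotated `text` entries
    return [
        context[start:start + len(text)]
        for start, text in zip(answers["answer_start"], answers["text"])
    ]
```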
#### wos
+ `guid`: a `string` feature
+ `domains`: a list of `string` feature
+ `dialogue`: a list of dictionary feature containing
+ `role`: a `string` feature
+ `text`: a `string` feature
+ `state`: a list of `string` feature
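Each dialogue-state entry is a `domain-slot-value` string joined by hyphens, as in `관광-종류-쇼핑`. A sketch of parsing one (splitting only on the first two hyphens, since values may themselves contain hyphens or spaces):

```python
def parse_state(state):
    # state strings are "domain-slot-value"; maxsplit=2 keeps any
    # further hyphens inside the value intact
    domain, slot, value = state.split("-", 2)
    return domain, slot, value
```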
### Data Splits
#### ynat
You can see more details [here](https://klue-benchmark.com/tasks/66/data/description).
+ train: 45,678
+ validation: 9,107
#### sts
You can see more details [here](https://klue-benchmark.com/tasks/67/data/description).
+ train: 11,668
+ validation: 519
#### nli
You can see more details [here](https://klue-benchmark.com/tasks/68/data/description).
+ train: 24,998
+ validation: 3,000
#### ner
You can see more details [here](https://klue-benchmark.com/tasks/69/overview/description).
+ train: 21,008
+ validation: 5,000
#### re
You can see more details [here](https://klue-benchmark.com/tasks/70/overview/description).
+ train: 32,470
+ validation: 7,765
#### dp
You can see more details [here](https://klue-benchmark.com/tasks/71/data/description).
+ train: 10,000
+ validation: 2,000
#### mrc
You can see more details [here](https://klue-benchmark.com/tasks/72/overview/description).
+ train: 17,554
+ validation: 5,841
#### wos
You can see more details [here](https://klue-benchmark.com/tasks/73/overview/description).
+ train: 8,000
+ validation: 1,000
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@jungwhank](https://github.com/jungwhank), [@bzantium](https://github.com/bzantium) for adding this dataset."
mozilla-foundation/common_voice_17_0,"{""pretty_name"": ""Common Voice Corpus 17.0"", ""annotations_creators"": [""crowdsourced""], ""language_creators"": [""crowdsourced""], ""language"": [""ab"", ""af"", ""am"", ""ar"", ""as"", ""ast"", ""az"", ""ba"", ""bas"", ""be"", ""bg"", ""bn"", ""br"", ""ca"", ""ckb"", ""cnh"", ""cs"", ""cv"", ""cy"", ""da"", ""de"", ""dv"", ""dyu"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fr"", ""fy"", ""ga"", ""gl"", ""gn"", ""ha"", ""he"", ""hi"", ""hsb"", ""ht"", ""hu"", ""hy"", ""ia"", ""id"", ""ig"", ""is"", ""it"", ""ja"", ""ka"", ""kab"", ""kk"", ""kmr"", ""ko"", ""ky"", ""lg"", ""lij"", ""lo"", ""lt"", ""ltg"", ""lv"", ""mdf"", ""mhr"", ""mk"", ""ml"", ""mn"", ""mr"", ""mrj"", ""mt"", ""myv"", ""nan"", ""ne"", ""nhi"", ""nl"", ""nn"", ""nso"", ""oc"", ""or"", ""os"", ""pa"", ""pl"", ""ps"", ""pt"", ""quy"", ""rm"", ""ro"", ""ru"", ""rw"", ""sah"", ""sat"", ""sc"", ""sk"", ""skr"", ""sl"", ""sq"", ""sr"", ""sv"", ""sw"", ""ta"", ""te"", ""th"", ""ti"", ""tig"", ""tk"", ""tok"", ""tr"", ""tt"", ""tw"", ""ug"", ""uk"", ""ur"", ""uz"", ""vi"", ""vot"", ""yi"", ""yo"", ""yue"", ""zgh"", ""zh"", ""zu"", ""zza""], ""language_bcp47"": [""zh-CN"", ""zh-HK"", ""zh-TW"", ""sv-SE"", ""rm-sursilv"", ""rm-vallader"", ""pa-IN"", ""nn-NO"", ""ne-NP"", ""nan-tw"", ""hy-AM"", ""ga-IE"", ""fy-NL""], ""license"": [""cc0-1.0""], ""multilinguality"": [""multilingual""], ""source_datasets"": [""extended|common_voice""], ""paperswithcode_id"": ""common-voice"", ""extra_gated_prompt"": ""By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset.""}","# Dataset Card for Common Voice Corpus 17.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:vaibhav@huggingface.co)
### Dataset Summary
Each entry in the Common Voice dataset consists of a unique MP3 file and a corresponding text file.
Many of the 31175 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 20408 validated hours in 124 languages, but more voices and languages are always being added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
You can [donate](https://commonvoice.mozilla.org/?form=common-voice) to this non-profit, donation-funded project.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Haitian, Hakha Chin, Hausa, Hebrew, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latgalian, Latvian, Ligurian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Northern Sotho, Norwegian Nynorsk, Occitan, Odia, Ossetian, Pashto, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Upper Sorbian, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamazight, Tamil, Tatar, Telugu, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Western Sierra Puebla Nahuatl, Yiddish, Yoruba, Zaza, Zulu
```
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., ""hi"" for Hindi):
```python
from datasets import load_dataset
cv_17 = load_dataset(""mozilla-foundation/common_voice_17_0"", ""hi"", split=""train"")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
cv_17 = load_dataset(""mozilla-foundation/common_voice_17_0"", ""hi"", split=""train"", streaming=True)
print(next(iter(cv_17)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_17 = load_dataset(""mozilla-foundation/common_voice_17_0"", ""hi"", split=""train"")
batch_sampler = BatchSampler(RandomSampler(cv_17), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_17, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_17 = load_dataset(""mozilla-foundation/common_voice_17_0"", ""hi"", split=""train"", streaming=True)
dataloader = DataLoader(cv_17, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 17 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0][""audio""]` the audio file is automatically decoded and resampled to `dataset.features[""audio""].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `""audio""` column, *i.e.* `dataset[0][""audio""]` should **always** be preferred over `dataset[""audio""][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
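The indexing advice for the `audio` column can be illustrated with a small stand-in class (plain Python, no real audio decoding; `LazyAudioColumn` is a hypothetical helper written for this sketch, not part of `datasets`):

```python
class LazyAudioColumn:
    '''Stand-in for datasets' lazily decoded audio column:
    each item access triggers one (simulated) decode.'''

    def __init__(self, paths):
        self.paths = paths
        self.decodes = 0  # counts simulated decode operations

    def __getitem__(self, i):
        self.decodes += 1
        return {'path': self.paths[i], 'array': [0.0], 'sampling_rate': 48000}

    def __len__(self):
        return len(self.paths)


col = LazyAudioColumn(['a.mp3', 'b.mp3', 'c.mp3'])

# dataset[0]["audio"]-style access: decodes exactly one file
_ = col[0]
one_decode = col.decodes

# dataset["audio"][0]-style access: forces a decode of every file first
_ = [col[i] for i in range(len(col))]
```

This is why `dataset[0][""audio""]` should always be preferred over `dataset[""audio""][0]`: the former decodes a single sample, the latter materialises the whole column.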
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data has been reviewed and received enough upvotes to be considered of high quality.
The invalidated data has been reviewed and received downvotes indicating that it is of low quality.
The reported data has been reported by users, for various reasons.
The other data has not yet been reviewed.
The dev, test and train splits are all drawn from data that has been reviewed and deemed of high quality.
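As a rough illustration of how up- and down-votes drive validation, one could filter clips with a heuristic like the following (the `min_net_votes` threshold is an assumption chosen for illustration; Common Voice's actual validation rule is applied upstream, before the splits are published):

```python
def is_validated(up_votes: int, down_votes: int, min_net_votes: int = 2) -> bool:
    # A clip counts as validated once reviewers' net approval reaches the
    # threshold; this mirrors, but does not reproduce, Common Voice's
    # server-side validation logic.
    return up_votes - down_votes >= min_net_votes


clips = [
    {'sentence': 'tere', 'up_votes': 2, 'down_votes': 0},
    {'sentence': 'halb', 'up_votes': 1, 'down_votes': 2},
]
validated = [c for c in clips if is_validated(c['up_votes'], c['down_votes'])]
```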
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset(""mozilla-foundation/common_voice_17"", ""en"", use_auth_token=True)
def prepare_dataset(batch):
    """"""Function to preprocess the dataset with the .map method""""""
    transcription = batch[""sentence""]
    if transcription.startswith('""') and transcription.endswith('""'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]
    if transcription and transcription[-1] not in [""."", ""?"", ""!""]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "".""
    batch[""sentence""] = transcription
    return batch
ds = ds.map(prepare_dataset, desc=""preprocess dataset"")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```"
nthngdy/oscar-small,"{""annotations_creators"": [""no-annotation""], ""language_creators"": [""found""], ""language"": [""af"", ""am"", ""ar"", ""arz"", ""as"", ""az"", ""azb"", ""ba"", ""be"", ""bg"", ""bn"", ""bo"", ""br"", ""ca"", ""ce"", ""ceb"", ""ckb"", ""cs"", ""cv"", ""cy"", ""da"", ""de"", ""dv"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fr"", ""fy"", ""ga"", ""gl"", ""gu"", ""he"", ""hi"", ""hr"", ""hu"", ""hy"", ""id"", ""is"", ""it"", ""ja"", ""ka"", ""kk"", ""km"", ""kn"", ""ko"", ""ku"", ""ky"", ""la"", ""lb"", ""lo"", ""lt"", ""lv"", ""mg"", ""mhr"", ""mk"", ""ml"", ""mn"", ""mr"", ""ms"", ""mt"", ""my"", ""nds"", ""ne"", ""nl"", ""nn"", ""no"", ""or"", ""os"", ""pa"", ""pl"", ""pnb"", ""ps"", ""pt"", ""ro"", ""ru"", ""sa"", ""sah"", ""sd"", ""sh"", ""si"", ""sk"", ""sl"", ""sq"", ""sr"", ""sv"", ""sw"", ""ta"", ""te"", ""tg"", ""th"", ""tk"", ""tl"", ""tr"", ""tt"", ""ug"", ""uk"", ""ur"", ""uz"", ""vi"", ""yi"", ""zh""], ""license"": [""cc0-1.0""], ""multilinguality"": [""multilingual""], ""source_datasets"": [""oscar""], ""task_categories"": [""text-generation""], ""task_ids"": [""language-modeling""], ""paperswithcode_id"": ""oscar"", ""pretty_name"": ""OSCAR""}","## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts.
Legally speaking, using this dataset is equivalent to using a processed version of OSCAR. I take no credit for gathering the original data and therefore refer entirely to the original dataset card below.
# Dataset Card for ""oscar""
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended for pretraining language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed using a new pipeline derived from [fastText's](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but completely rewrites and parallelises the pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, with the number of parallel operations at any given time bounded by the number of available threads rather than the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus the pipeline does not have to wait for a whole WET file to be downloaded, decompressed and classified before starting on the next one: a new file starts downloading and processing as soon as the scheduler is able to allocate a new process.
Filtering and cleaning at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded without being classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
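The line-level filter described above can be sketched as follows (a minimal Python illustration of the stated rules; the actual goclassy implementation is written in Go):

```python
def keep_line(raw: bytes) -> bool:
    # Discard lines containing invalid UTF-8 and lines shorter than
    # 100 UTF-8 characters, as in the OSCAR pre-classification filter.
    try:
        text = raw.decode('utf-8')
    except UnicodeDecodeError:
        return False
    return len(text) >= 100
```

Only lines passing this filter would then be handed to the fastText language classifier.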
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Since OSCAR is constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available for lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet, and this can be reflected in models trained on it. Care is advised, especially concerning biases in the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license (""no rights reserved"") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = ""A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages"",
author = ""Ortiz Su{'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit"",
booktitle = ""Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics"",
month = jul,
year = ""2020"",
address = ""Online"",
publisher = ""Association for Computational Linguistics"",
url = ""https://www.aclweb.org/anthology/2020.acl-main.156"",
pages = ""1703--1714"",
abstract = ""We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures."",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\""u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\""u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset."
Davlan/sib200,"{""annotations_creators"": [""found""], ""language_creators"": [""expert-generated""], ""language"": [""ace"", ""acm"", ""acq"", ""aeb"", ""af"", ""ajp"", ""ak"", ""als"", ""am"", ""apc"", ""ar"", ""ars"", ""ary"", ""arz"", ""as"", ""ast"", ""awa"", ""ayr"", ""azb"", ""azj"", ""ba"", ""bm"", ""ban"", ""be"", ""bem"", ""bn"", ""bho"", ""bjn"", ""bo"", ""bs"", ""bug"", ""bg"", ""ca"", ""ceb"", ""cs"", ""cjk"", ""ckb"", ""crh"", ""cy"", ""da"", ""de"", ""dik"", ""dyu"", ""dz"", ""el"", ""en"", ""eo"", ""et"", ""eu"", ""ee"", ""fo"", ""fj"", ""fi"", ""fon"", ""fr"", ""fur"", ""fuv"", ""gaz"", ""gd"", ""ga"", ""gl"", ""gn"", ""gu"", ""ht"", ""ha"", ""he"", ""hi"", ""hne"", ""hr"", ""hu"", ""hy"", ""ig"", ""ilo"", ""id"", ""is"", ""it"", ""jv"", ""ja"", ""kab"", ""kac"", ""kam"", ""kn"", ""ks"", ""ka"", ""kk"", ""kbp"", ""kea"", ""khk"", ""km"", ""ki"", ""rw"", ""ky"", ""kmb"", ""kmr"", ""knc"", ""kg"", ""ko"", ""lo"", ""lij"", ""li"", ""ln"", ""lt"", ""lmo"", ""ltg"", ""lb"", ""lua"", ""lg"", ""luo"", ""lus"", ""lvs"", ""mag"", ""mai"", ""ml"", ""mar"", ""min"", ""mk"", ""mt"", ""mni"", ""mos"", ""mi"", ""my"", ""nl"", ""nn"", ""nb"", ""npi"", ""nqo"", ""nso"", ""nus"", ""ny"", ""oc"", ""ory"", ""pag"", ""pa"", ""pap"", ""pbt"", ""pes"", ""plt"", ""pl"", ""pt"", ""prs"", ""quy"", ""ro"", ""rn"", ""ru"", ""sg"", ""sa"", ""sat"", ""scn"", ""shn"", ""si"", ""sk"", ""sl"", ""sm"", ""sn"", ""sd"", ""so"", ""st"", ""es"", ""sc"", ""sr"", ""ss"", ""su"", ""sv"", ""swh"", ""szl"", ""ta"", ""taq"", ""tt"", ""te"", ""tg"", ""tl"", ""th"", ""ti"", ""tpi"", ""tn"", ""ts"", ""tk"", ""tum"", ""tr"", ""tw"", ""tzm"", ""ug"", ""uk"", ""umb"", ""ur"", ""uzn"", ""vec"", ""vi"", ""war"", ""wo"", ""xh"", ""ydd"", ""yo"", ""yue"", ""zh"", ""zsm"", ""zu""], ""license"": [""cc-by-sa-4.0""], ""multilinguality"": [""multilingual""], ""pretty_name"": ""sib200"", ""language_details"": ""ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab, aka_Latn, amh_Ethi, 
apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng, ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn,bel_Cyrl, bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn, bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn, dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn, est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn, fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr, hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn, ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn, kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn, kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn, kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn, lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn, mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn, mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn, nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya, pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn, ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr, sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn, spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn, szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi, taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn, twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn, vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans, zho_Hant, zul_Latn"", ""size_categories"": [""1K
## Dataset Structure
### Data Instances
```
>>> from datasets import load_dataset
>>> data = load_dataset('aiana94/polynews-parallel', 'eng_Latn-ron_Latn')
# specify the language-pair configuration name, e.g. 'eng_Latn-ron_Latn'
# an example data point:
{
""src"": ""They continue to support the view that this decision will have a lasting negative impact on the rule of law in the country. "",
""tgt"": ""Ei continuă să creadă că această decizie va avea efecte negative pe termen lung asupra statului de drept în țară. "",
""provenance"": ""globalvoices""
}
```
### Data Fields
- src (string): source news text
- tgt (string): target news text
- provenance (string) : source dataset for the news example
### Data Splits
For all languages, there is only the `train` split.
## Dataset Creation
### Curation Rationale
Multiple multilingual, human-translated, datasets containing news texts have been released in recent years.
However, these datasets are stored in different formats and various websites, and many contain numerous near duplicates.
With PolyNewsParallel, we aim to provide an easily-accessible, unified and deduplicated parallel dataset that combines these disparate data sources.
It can be used for machine translation or text retrieval in both high-resource and low-resource languages.
### Source Data
The source data consists of five multilingual news datasets.
- [GlobalVoices](https://opus.nlpl.eu/GlobalVoices/corpus/version/GlobalVoices) (v2018q4)
- [WMT-News](https://opus.nlpl.eu/WMT-News/corpus/version/WMT-News) (v2019)
- [MAFAND](https://huggingface.co/datasets/masakhane/mafand) (`train` split)
#### Data Collection and Processing
We processed the data using a **working script** which covers the entire processing pipeline. It can be found [here](https://github.com/andreeaiana/nase/blob/main/scripts/construct_polynews.sh).
The data processing pipeline consists of:
1. Downloading the WMT-News and GlobalVoices News from OPUS.
2. Loading MAFAND datasets from Hugging Face Hub (only the `train` splits).
4. Concatenating, per language, all news texts from the source datasets.
5. Data cleaning (e.g., removal of exact duplicates, short texts, texts in other scripts)
6. [MinHash near-deduplication](https://github.com/bigcode-project/bigcode-dataset/blob/main/near_deduplication/minhash_deduplication.py) per language.
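The cleaning and exact-deduplication steps can be sketched as follows (a simplified Python illustration; the `min_chars` threshold is an assumption for this sketch, and the actual pipeline, including MinHash near-deduplication, lives in the linked script):

```python
def clean_texts(texts, min_chars=20):
    # Drop very short texts and exact duplicates, keeping first occurrences.
    # min_chars is illustrative, not the value used by the authors.
    seen = set()
    kept = []
    for text in texts:
        text = text.strip()
        if len(text) < min_chars or text in seen:
            continue
        seen.add(text)
        kept.append(text)
    return kept


deduped = clean_texts([
    'Breaking: markets rally on new trade agreement details',
    'Breaking: markets rally on new trade agreement details',
    'Short title',
])
```

Near-duplicates (texts that differ by a few characters) survive this pass, which is why a MinHash stage follows per language.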
### Annotations
We augment the original samples with the `provenance` annotation, which specifies the original data source from which a particular example stems.
#### Personal and Sensitive Information
The data is sourced from newspaper sources and contains mentions of public figures and individuals.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Users should keep in mind that the dataset contains short news texts (e.g., mostly titles), which might limit the applicability of the developed systems to other domains.
## Additional Information
### Licensing Information
The dataset is released under the [CC BY-NC 4.0 (Attribution-NonCommercial) International license](https://creativecommons.org/licenses/by-nc/4.0/).
### Citation Information
**BibTeX:**
```bibtex
@misc{iana2024news,
title={News Without Borders: Domain Adaptation of Multilingual Sentence Embeddings for Cross-lingual News Recommendation},
author={Andreea Iana and Fabian David Schmidt and Goran Glavaš and Heiko Paulheim},
year={2024},
eprint={2406.12634},
archivePrefix={arXiv},
url={https://arxiv.org/abs/2406.12634}
}
```"
OpenAssistant/oasst1,"{""license"": ""apache-2.0"", ""dataset_info"": {""features"": [{""name"": ""message_id"", ""dtype"": ""string""}, {""name"": ""parent_id"", ""dtype"": ""string""}, {""name"": ""user_id"", ""dtype"": ""string""}, {""name"": ""created_date"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""role"", ""dtype"": ""string""}, {""name"": ""lang"", ""dtype"": ""string""}, {""name"": ""review_count"", ""dtype"": ""int32""}, {""name"": ""review_result"", ""dtype"": ""bool""}, {""name"": ""deleted"", ""dtype"": ""bool""}, {""name"": ""rank"", ""dtype"": ""int32""}, {""name"": ""synthetic"", ""dtype"": ""bool""}, {""name"": ""model_name"", ""dtype"": ""string""}, {""name"": ""detoxify"", ""struct"": [{""name"": ""toxicity"", ""dtype"": ""float64""}, {""name"": ""severe_toxicity"", ""dtype"": ""float64""}, {""name"": ""obscene"", ""dtype"": ""float64""}, {""name"": ""identity_attack"", ""dtype"": ""float64""}, {""name"": ""insult"", ""dtype"": ""float64""}, {""name"": ""threat"", ""dtype"": ""float64""}, {""name"": ""sexual_explicit"", ""dtype"": ""float64""}]}, {""name"": ""message_tree_id"", ""dtype"": ""string""}, {""name"": ""tree_state"", ""dtype"": ""string""}, {""name"": ""emojis"", ""sequence"": [{""name"": ""name"", ""dtype"": ""string""}, {""name"": ""count"", ""dtype"": ""int32""}]}, {""name"": ""labels"", ""sequence"": [{""name"": ""name"", ""dtype"": ""string""}, {""name"": ""value"", ""dtype"": ""float64""}, {""name"": ""count"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 100367999, ""num_examples"": 84437}, {""name"": ""validation"", ""num_bytes"": 5243405, ""num_examples"": 4401}], ""download_size"": 41596430, ""dataset_size"": 105611404}, ""language"": [""en"", ""es"", ""ru"", ""de"", ""pl"", ""th"", ""vi"", ""sv"", ""bn"", ""da"", ""he"", ""it"", ""fa"", ""sk"", ""id"", ""nb"", ""el"", ""nl"", ""hu"", ""eu"", ""zh"", ""eo"", ""ja"", ""ca"", ""cs"", ""bg"", ""fi"", ""pt"", 
""tr"", ""ro"", ""ar"", ""uk"", ""gl"", ""fr"", ""ko""], ""tags"": [""human-feedback""], ""size_categories"": [""100K
Languages with under 1000 messages
- Vietnamese: 952
- Basque: 947
- Polish: 886
- Hungarian: 811
- Arabic: 666
- Dutch: 628
- Swedish: 512
- Turkish: 454
- Finnish: 386
- Czech: 372
- Danish: 358
- Galician: 339
- Hebrew: 255
- Romanian: 200
- Norwegian Bokmål: 133
- Indonesian: 115
- Bulgarian: 95
- Bengali: 82
- Persian: 72
- Greek: 66
- Esperanto: 59
- Slovak: 19
## Contact
- Discord: [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai)"
uonlp/CulturaX,"{""configs"": [{""config_name"": ""af"", ""data_files"": ""af/*.parquet""}, {""config_name"": ""als"", ""data_files"": ""als/*.parquet""}, {""config_name"": ""am"", ""data_files"": ""am/*.parquet""}, {""config_name"": ""an"", ""data_files"": ""an/*.parquet""}, {""config_name"": ""ar"", ""data_files"": ""ar/*.parquet""}, {""config_name"": ""arz"", ""data_files"": ""arz/*.parquet""}, {""config_name"": ""as"", ""data_files"": ""as/*.parquet""}, {""config_name"": ""ast"", ""data_files"": ""ast/*.parquet""}, {""config_name"": ""av"", ""data_files"": ""av/*.parquet""}, {""config_name"": ""az"", ""data_files"": ""az/*.parquet""}, {""config_name"": ""azb"", ""data_files"": ""azb/*.parquet""}, {""config_name"": ""ba"", ""data_files"": ""ba/*.parquet""}, {""config_name"": ""bar"", ""data_files"": ""bar/*.parquet""}, {""config_name"": ""bcl"", ""data_files"": ""bcl/*.parquet""}, {""config_name"": ""be"", ""data_files"": ""be/*.parquet""}, {""config_name"": ""bg"", ""data_files"": ""bg/*.parquet""}, {""config_name"": ""bh"", ""data_files"": ""bh/*.parquet""}, {""config_name"": ""bn"", ""data_files"": ""bn/*.parquet""}, {""config_name"": ""bo"", ""data_files"": ""bo/*.parquet""}, {""config_name"": ""bpy"", ""data_files"": ""bpy/*.parquet""}, {""config_name"": ""br"", ""data_files"": ""br/*.parquet""}, {""config_name"": ""bs"", ""data_files"": ""bs/*.parquet""}, {""config_name"": ""bxr"", ""data_files"": ""bxr/*.parquet""}, {""config_name"": ""ca"", ""data_files"": ""ca/*.parquet""}, {""config_name"": ""cbk"", ""data_files"": ""cbk/*.parquet""}, {""config_name"": ""ce"", ""data_files"": ""ce/*.parquet""}, {""config_name"": ""ceb"", ""data_files"": ""ceb/*.parquet""}, {""config_name"": ""ckb"", ""data_files"": ""ckb/*.parquet""}, {""config_name"": ""cs"", ""data_files"": ""cs/*.parquet""}, {""config_name"": ""cv"", ""data_files"": ""cv/*.parquet""}, {""config_name"": ""cy"", ""data_files"": ""cy/*.parquet""}, {""config_name"": ""da"", ""data_files"": 
""da/*.parquet""}, {""config_name"": ""de"", ""data_files"": ""de/*.parquet""}, {""config_name"": ""dsb"", ""data_files"": ""dsb/*.parquet""}, {""config_name"": ""dv"", ""data_files"": ""dv/*.parquet""}, {""config_name"": ""el"", ""data_files"": ""el/*.parquet""}, {""config_name"": ""eml"", ""data_files"": ""eml/*.parquet""}, {""config_name"": ""en"", ""data_files"": ""en/*.parquet""}, {""config_name"": ""eo"", ""data_files"": ""eo/*.parquet""}, {""config_name"": ""es"", ""data_files"": ""es/*.parquet""}, {""config_name"": ""et"", ""data_files"": ""et/*.parquet""}, {""config_name"": ""eu"", ""data_files"": ""eu/*.parquet""}, {""config_name"": ""fa"", ""data_files"": ""fa/*.parquet""}, {""config_name"": ""fi"", ""data_files"": ""fi/*.parquet""}, {""config_name"": ""fr"", ""data_files"": ""fr/*.parquet""}, {""config_name"": ""frr"", ""data_files"": ""frr/*.parquet""}, {""config_name"": ""fy"", ""data_files"": ""fy/*.parquet""}, {""config_name"": ""ga"", ""data_files"": ""ga/*.parquet""}, {""config_name"": ""gd"", ""data_files"": ""gd/*.parquet""}, {""config_name"": ""gl"", ""data_files"": ""gl/*.parquet""}, {""config_name"": ""gn"", ""data_files"": ""gn/*.parquet""}, {""config_name"": ""gom"", ""data_files"": ""gom/*.parquet""}, {""config_name"": ""gu"", ""data_files"": ""gu/*.parquet""}, {""config_name"": ""he"", ""data_files"": ""he/*.parquet""}, {""config_name"": ""hi"", ""data_files"": ""hi/*.parquet""}, {""config_name"": ""hr"", ""data_files"": ""hr/*.parquet""}, {""config_name"": ""hsb"", ""data_files"": ""hsb/*.parquet""}, {""config_name"": ""ht"", ""data_files"": ""ht/*.parquet""}, {""config_name"": ""hu"", ""data_files"": ""hu/*.parquet""}, {""config_name"": ""hy"", ""data_files"": ""hy/*.parquet""}, {""config_name"": ""ia"", ""data_files"": ""ia/*.parquet""}, {""config_name"": ""id"", ""data_files"": ""id/*.parquet""}, {""config_name"": ""ie"", ""data_files"": ""ie/*.parquet""}, {""config_name"": ""ilo"", ""data_files"": ""ilo/*.parquet""}, 
{""config_name"": ""io"", ""data_files"": ""io/*.parquet""}, {""config_name"": ""is"", ""data_files"": ""is/*.parquet""}, {""config_name"": ""it"", ""data_files"": ""it/*.parquet""}, {""config_name"": ""ja"", ""data_files"": ""ja/*.parquet""}, {""config_name"": ""jbo"", ""data_files"": ""jbo/*.parquet""}, {""config_name"": ""jv"", ""data_files"": ""jv/*.parquet""}, {""config_name"": ""ka"", ""data_files"": ""ka/*.parquet""}, {""config_name"": ""kk"", ""data_files"": ""kk/*.parquet""}, {""config_name"": ""km"", ""data_files"": ""km/*.parquet""}, {""config_name"": ""kn"", ""data_files"": ""kn/*.parquet""}, {""config_name"": ""ko"", ""data_files"": ""ko/*.parquet""}, {""config_name"": ""krc"", ""data_files"": ""krc/*.parquet""}, {""config_name"": ""ku"", ""data_files"": ""ku/*.parquet""}, {""config_name"": ""kv"", ""data_files"": ""kv/*.parquet""}, {""config_name"": ""kw"", ""data_files"": ""kw/*.parquet""}, {""config_name"": ""ky"", ""data_files"": ""ky/*.parquet""}, {""config_name"": ""la"", ""data_files"": ""la/*.parquet""}, {""config_name"": ""lb"", ""data_files"": ""lb/*.parquet""}, {""config_name"": ""lez"", ""data_files"": ""lez/*.parquet""}, {""config_name"": ""li"", ""data_files"": ""li/*.parquet""}, {""config_name"": ""lmo"", ""data_files"": ""lmo/*.parquet""}, {""config_name"": ""lo"", ""data_files"": ""lo/*.parquet""}, {""config_name"": ""lrc"", ""data_files"": ""lrc/*.parquet""}, {""config_name"": ""lt"", ""data_files"": ""lt/*.parquet""}, {""config_name"": ""lv"", ""data_files"": ""lv/*.parquet""}, {""config_name"": ""mai"", ""data_files"": ""mai/*.parquet""}, {""config_name"": ""mg"", ""data_files"": ""mg/*.parquet""}, {""config_name"": ""mhr"", ""data_files"": ""mhr/*.parquet""}, {""config_name"": ""min"", ""data_files"": ""min/*.parquet""}, {""config_name"": ""mk"", ""data_files"": ""mk/*.parquet""}, {""config_name"": ""ml"", ""data_files"": ""ml/*.parquet""}, {""config_name"": ""mn"", ""data_files"": ""mn/*.parquet""}, {""config_name"": ""mr"", 
""data_files"": ""mr/*.parquet""}, {""config_name"": ""mrj"", ""data_files"": ""mrj/*.parquet""}, {""config_name"": ""ms"", ""data_files"": ""ms/*.parquet""}, {""config_name"": ""mt"", ""data_files"": ""mt/*.parquet""}, {""config_name"": ""mwl"", ""data_files"": ""mwl/*.parquet""}, {""config_name"": ""my"", ""data_files"": ""my/*.parquet""}, {""config_name"": ""myv"", ""data_files"": ""myv/*.parquet""}, {""config_name"": ""mzn"", ""data_files"": ""mzn/*.parquet""}, {""config_name"": ""nah"", ""data_files"": ""nah/*.parquet""}, {""config_name"": ""nap"", ""data_files"": ""nap/*.parquet""}, {""config_name"": ""nds"", ""data_files"": ""nds/*.parquet""}, {""config_name"": ""ne"", ""data_files"": ""ne/*.parquet""}, {""config_name"": ""new"", ""data_files"": ""new/*.parquet""}, {""config_name"": ""nl"", ""data_files"": ""nl/*.parquet""}, {""config_name"": ""nn"", ""data_files"": ""nn/*.parquet""}, {""config_name"": ""no"", ""data_files"": ""no/*.parquet""}, {""config_name"": ""oc"", ""data_files"": ""oc/*.parquet""}, {""config_name"": ""or"", ""data_files"": ""or/*.parquet""}, {""config_name"": ""os"", ""data_files"": ""os/*.parquet""}, {""config_name"": ""pa"", ""data_files"": ""pa/*.parquet""}, {""config_name"": ""pam"", ""data_files"": ""pam/*.parquet""}, {""config_name"": ""pl"", ""data_files"": ""pl/*.parquet""}, {""config_name"": ""pms"", ""data_files"": ""pms/*.parquet""}, {""config_name"": ""pnb"", ""data_files"": ""pnb/*.parquet""}, {""config_name"": ""ps"", ""data_files"": ""ps/*.parquet""}, {""config_name"": ""pt"", ""data_files"": ""pt/*.parquet""}, {""config_name"": ""qu"", ""data_files"": ""qu/*.parquet""}, {""config_name"": ""rm"", ""data_files"": ""rm/*.parquet""}, {""config_name"": ""ro"", ""data_files"": ""ro/*.parquet""}, {""config_name"": ""ru"", ""data_files"": ""ru/*.parquet""}, {""config_name"": ""rue"", ""data_files"": ""rue/*.parquet""}, {""config_name"": ""sa"", ""data_files"": ""sa/*.parquet""}, {""config_name"": ""sah"", ""data_files"": 
""sah/*.parquet""}, {""config_name"": ""scn"", ""data_files"": ""scn/*.parquet""}, {""config_name"": ""sd"", ""data_files"": ""sd/*.parquet""}, {""config_name"": ""sh"", ""data_files"": ""sh/*.parquet""}, {""config_name"": ""si"", ""data_files"": ""si/*.parquet""}, {""config_name"": ""sk"", ""data_files"": ""sk/*.parquet""}, {""config_name"": ""sl"", ""data_files"": ""sl/*.parquet""}, {""config_name"": ""so"", ""data_files"": ""so/*.parquet""}, {""config_name"": ""sq"", ""data_files"": ""sq/*.parquet""}, {""config_name"": ""sr"", ""data_files"": ""sr/*.parquet""}, {""config_name"": ""su"", ""data_files"": ""su/*.parquet""}, {""config_name"": ""sv"", ""data_files"": ""sv/*.parquet""}, {""config_name"": ""sw"", ""data_files"": ""sw/*.parquet""}, {""config_name"": ""ta"", ""data_files"": ""ta/*.parquet""}, {""config_name"": ""te"", ""data_files"": ""te/*.parquet""}, {""config_name"": ""tg"", ""data_files"": ""tg/*.parquet""}, {""config_name"": ""th"", ""data_files"": ""th/*.parquet""}, {""config_name"": ""tk"", ""data_files"": ""tk/*.parquet""}, {""config_name"": ""tl"", ""data_files"": ""tl/*.parquet""}, {""config_name"": ""tr"", ""data_files"": ""tr/*.parquet""}, {""config_name"": ""tt"", ""data_files"": ""tt/*.parquet""}, {""config_name"": ""tyv"", ""data_files"": ""tyv/*.parquet""}, {""config_name"": ""ug"", ""data_files"": ""ug/*.parquet""}, {""config_name"": ""uk"", ""data_files"": ""uk/*.parquet""}, {""config_name"": ""ur"", ""data_files"": ""ur/*.parquet""}, {""config_name"": ""uz"", ""data_files"": ""uz/*.parquet""}, {""config_name"": ""vec"", ""data_files"": ""vec/*.parquet""}, {""config_name"": ""vi"", ""data_files"": ""vi/*.parquet""}, {""config_name"": ""vls"", ""data_files"": ""vls/*.parquet""}, {""config_name"": ""vo"", ""data_files"": ""vo/*.parquet""}, {""config_name"": ""wa"", ""data_files"": ""wa/*.parquet""}, {""config_name"": ""war"", ""data_files"": ""war/*.parquet""}, {""config_name"": ""wuu"", ""data_files"": ""wuu/*.parquet""}, 
{""config_name"": ""xal"", ""data_files"": ""xal/*.parquet""}, {""config_name"": ""xmf"", ""data_files"": ""xmf/*.parquet""}, {""config_name"": ""yi"", ""data_files"": ""yi/*.parquet""}, {""config_name"": ""yo"", ""data_files"": ""yo/*.parquet""}, {""config_name"": ""yue"", ""data_files"": ""yue/*.parquet""}, {""config_name"": ""zh"", ""data_files"": ""zh/*.parquet""}], ""pretty_name"": ""CulturaX"", ""annotations_creators"": [""no-annotation""], ""language_creators"": [""found""], ""language"": [""af"", ""als"", ""am"", ""an"", ""ar"", ""arz"", ""as"", ""ast"", ""av"", ""az"", ""azb"", ""ba"", ""bar"", ""bcl"", ""be"", ""bg"", ""bh"", ""bn"", ""bo"", ""bpy"", ""br"", ""bs"", ""bxr"", ""ca"", ""cbk"", ""ce"", ""ceb"", ""ckb"", ""cs"", ""cv"", ""cy"", ""da"", ""de"", ""dsb"", ""dv"", ""el"", ""eml"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fr"", ""frr"", ""fy"", ""ga"", ""gd"", ""gl"", ""gn"", ""gom"", ""gu"", ""he"", ""hi"", ""hr"", ""hsb"", ""ht"", ""hu"", ""hy"", ""ia"", ""id"", ""ie"", ""ilo"", ""io"", ""is"", ""it"", ""ja"", ""jbo"", ""jv"", ""ka"", ""kk"", ""km"", ""kn"", ""ko"", ""krc"", ""ku"", ""kv"", ""kw"", ""ky"", ""la"", ""lb"", ""lez"", ""li"", ""lmo"", ""lo"", ""lrc"", ""lt"", ""lv"", ""mai"", ""mg"", ""mhr"", ""min"", ""mk"", ""ml"", ""mn"", ""mr"", ""mrj"", ""ms"", ""mt"", ""mwl"", ""my"", ""myv"", ""mzn"", ""nah"", ""nap"", ""nds"", ""ne"", ""new"", ""nl"", ""nn"", ""no"", ""oc"", ""or"", ""os"", ""pa"", ""pam"", ""pl"", ""pms"", ""pnb"", ""ps"", ""pt"", ""qu"", ""rm"", ""ro"", ""ru"", ""rue"", ""sa"", ""sah"", ""scn"", ""sd"", ""sh"", ""si"", ""sk"", ""sl"", ""so"", ""sq"", ""sr"", ""su"", ""sv"", ""sw"", ""ta"", ""te"", ""tg"", ""th"", ""tk"", ""tl"", ""tr"", ""tt"", ""tyv"", ""ug"", ""uk"", ""ur"", ""uz"", ""vec"", ""vi"", ""vls"", ""vo"", ""wa"", ""war"", ""wuu"", ""xal"", ""xmf"", ""yi"", ""yo"", ""yue"", ""zh""], ""multilinguality"": [""multilingual""], ""size_categories"": [""n<1K"", ""1K
# CulturaX
Cleaned, Enormous, and Public: The Multilingual Fuel to Democratize Large Language Models for 167 Languages
## Dataset Description
- **Repository:** [https://github.com/nlp-uoregon/CulturaX](https://github.com/nlp-uoregon/CulturaX)
- **Paper:** [CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages](https://arxiv.org/abs/2309.09400)
## Dataset Summary
We present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for large language model (LLM) development. Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language identification, URL-based filtering, metric-based cleaning, document refinement, and data deduplication. We employ MinHash at document level to achieve fuzzy deduplication for the datasets in different languages. Our data cleaning framework includes diverse criteria and threshold selections, guided by extensive data samples, ensuring comprehensive noise filtering in various aspects. CulturaX is fully released to the public in HuggingFace to facilitate research and advancements in multilingual LLMs.
Our dataset combines the most recent iteration of mC4 (version 3.1.0) [1] with all accessible OSCAR corpora up to the present year, including 20.19, 21.09, 22.01, and 23.01 [2]. After deep cleaning and deduplication, CulturaX involves 16TB of data in the Parquet format (expanding to 27TB when unpacked). More than half of our dataset is dedicated to non-English languages to significantly boost the data size and enhance the feasibility of training models in multilingual scenarios.
To obtain perplexity scores for data cleaning, we train a SentencePiece tokenizer and 5-gram Kneser-Ney language models as provided in the KenLM library [3] using the 20230501 dumps of Wikipedia. Our KenLM models are also released in HuggingFace: https://huggingface.co/uonlp/kenlm.
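The perplexity-based cleaning step can be sketched as a simple threshold filter. In the real pipeline the score comes from a KenLM 5-gram model (`kenlm.Model(...).score`); here `dummy_score` is a hypothetical stand-in, and the cutoff value is illustrative.

```python
import math

def perplexity(log10_score, num_tokens):
    # KenLM reports total log10 probability; perplexity = 10^(-logprob / N).
    return 10 ** (-log10_score / num_tokens)

def keep_document(doc_tokens, score_fn, max_ppl=1000.0):
    # Keep documents whose language-model perplexity is below a cutoff.
    logp = score_fn(doc_tokens)
    return perplexity(logp, len(doc_tokens)) <= max_ppl

# Stand-in scorer: pretend each token has probability 0.01 (log10 p = -2).
def dummy_score(tokens):
    return -2.0 * len(tokens)

print(keep_document(['a', 'b', 'c'], dummy_score))  # ppl = 100 <= 1000, so True
```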
Details for the dataset can be found in our technical paper: [https://arxiv.org/abs/2309.09400](https://arxiv.org/abs/2309.09400)
You can download the dataset using Hugging Face datasets:
*You may need to follow these instructions to set up authentication before downloading the dataset: [https://huggingface.co/docs/huggingface_hub/quick-start#login](https://huggingface.co/docs/huggingface_hub/quick-start#login)*
```python
from datasets import load_dataset
ds = load_dataset(""uonlp/CulturaX"",
""en"",
use_auth_token=True)
```
### Languages
The supported languages and statistics for our dataset can be found below:
*(Note that the language codes `als` and `eml` refer to `gsw` and `x-eml` in the OSCAR-2301 dataset.)*
| | Code | Language | # Documents | # Tokens | # Tokens (%) |
|----:|:-------|:-------------------------|:----------------|:--------------------|:------|
| 0 | en | English | 3,241,065,682 | 2,846,970,578,793 | 45.13 |
| 1 | ru | Russian | 799,310,908 | 737,201,800,363 | 11.69 |
| 2 | es | Spanish | 450,937,645 | 373,845,662,394 | 5.93 |
| 3 | de | German | 420,017,484 | 357,030,348,021 | 5.66 |
| 4 | fr | French | 363,754,348 | 319,332,674,695 | 5.06 |
| 5 | zh | Chinese | 218,624,604 | 227,055,380,882 | 3.60 |
| 6 | it | Italian | 211,309,922 | 165,446,410,843 | 2.62 |
| 7 | pt | Portuguese | 190,289,658 | 136,941,763,923 | 2.17 |
| 8 | pl | Polish | 142,167,217 | 117,269,087,143 | 1.86 |
| 9 | ja | Japanese | 111,188,475 | 107,873,841,351 | 1.71 |
| 10 | nl | Dutch | 117,392,666 | 80,032,209,900 | 1.27 |
| 11 | ar | Arabic | 74,027,952 | 69,354,335,076 | 1.10 |
| 12 | tr | Turkish | 94,207,460 | 64,292,787,164 | 1.02 |
| 13 | cs | Czech | 65,350,564 | 56,910,486,745 | 0.90 |
| 14 | vi | Vietnamese | 57,606,341 | 55,380,123,774 | 0.88 |
| 15 | fa | Persian | 59,531,144 | 45,947,657,495 | 0.73 |
| 16 | hu | Hungarian | 44,132,152 | 43,417,981,714 | 0.69 |
| 17 | el | Greek | 51,430,226 | 43,147,590,757 | 0.68 |
| 18 | ro | Romanian | 40,325,424 | 39,647,954,768 | 0.63 |
| 19 | sv | Swedish | 49,709,189 | 38,486,181,494 | 0.61 |
| 20 | uk | Ukrainian | 44,740,545 | 38,226,128,686 | 0.61 |
| 21 | fi | Finnish | 30,467,667 | 28,925,009,180 | 0.46 |
| 22 | ko | Korean | 20,557,310 | 24,765,448,392 | 0.39 |
| 23 | da | Danish | 25,429,808 | 22,921,651,314 | 0.36 |
| 24 | bg | Bulgarian | 24,131,819 | 22,917,954,776 | 0.36 |
| 25 | no | Norwegian | 18,907,310 | 18,426,628,868 | 0.29 |
| 26 | hi | Hindi | 19,665,355 | 16,791,362,871 | 0.27 |
| 27 | sk | Slovak | 18,582,517 | 16,442,669,076 | 0.26 |
| 28 | th | Thai | 20,960,550 | 15,717,374,014 | 0.25 |
| 29 | lt | Lithuanian | 13,339,785 | 14,247,110,836 | 0.23 |
| 30 | ca | Catalan | 15,531,777 | 12,530,288,006 | 0.20 |
| 31 | id | Indonesian | 23,251,368 | 12,062,966,061 | 0.19 |
| 32 | bn | Bangla | 12,436,596 | 9,572,929,804 | 0.15 |
| 33 | et | Estonian | 8,004,753 | 8,805,656,165 | 0.14 |
| 34 | sl | Slovenian | 7,335,378 | 8,007,587,522 | 0.13 |
| 35 | lv | Latvian | 7,136,587 | 7,845,180,319 | 0.12 |
| 36 | he | Hebrew | 4,653,979 | 4,937,152,096 | 0.08 |
| 37 | sr | Serbian | 4,053,166 | 4,619,482,725 | 0.07 |
| 38 | ta | Tamil | 4,728,460 | 4,378,078,610 | 0.07 |
| 39 | sq | Albanian | 5,205,579 | 3,648,893,215 | 0.06 |
| 40 | az | Azerbaijani | 5,084,505 | 3,513,351,967 | 0.06 |
| 41 | kk | Kazakh | 2,733,982 | 2,802,485,195 | 0.04 |
| 42 | ur | Urdu | 2,757,279 | 2,703,052,627 | 0.04 |
| 43 | ka | Georgian | 3,120,321 | 2,617,625,564 | 0.04 |
| 44 | hy | Armenian | 2,964,488 | 2,395,179,284 | 0.04 |
| 45 | is | Icelandic | 2,373,560 | 2,350,592,857 | 0.04 |
| 46 | ml | Malayalam | 2,693,052 | 2,100,556,809 | 0.03 |
| 47 | ne | Nepali | 3,124,040 | 2,061,601,961 | 0.03 |
| 48 | mk | Macedonian | 2,762,807 | 2,003,302,006 | 0.03 |
| 49 | mr | Marathi | 2,266,588 | 1,955,227,796 | 0.03 |
| 50 | mn | Mongolian | 1,928,828 | 1,850,667,656 | 0.03 |
| 51 | be | Belarusian | 1,643,486 | 1,791,473,041 | 0.03 |
| 52 | te | Telugu | 1,822,865 | 1,566,972,146 | 0.02 |
| 53 | gl | Galician | 1,785,963 | 1,382,539,693 | 0.02 |
| 54 | eu | Basque | 1,598,822 | 1,262,066,759 | 0.02 |
| 55 | kn | Kannada | 1,352,142 | 1,242,285,201 | 0.02 |
| 56 | gu | Gujarati | 1,162,878 | 1,131,730,537 | 0.02 |
| 57 | af | Afrikaans | 826,519 | 1,119,009,767 | 0.02 |
| 58 | my | Burmese | 865,575 | 882,606,546 | 0.01 |
| 59 | si | Sinhala | 753,655 | 880,289,097 | 0.01 |
| 60 | eo | Esperanto | 460,088 | 803,948,528 | 0.01 |
| 61 | km | Khmer | 1,013,181 | 746,664,132 | 0.01 |
| 62 | pa | Punjabi | 646,987 | 727,546,145 | 0.01 |
| 63 | cy | Welsh | 549,955 | 576,743,162 | 0.01 |
| 64 | ky | Kyrgyz | 570,922 | 501,442,620 | 0.01 |
| 65 | ga | Irish | 304,251 | 376,947,935 | 0.01 |
| 66 | ps | Pashto | 376,914 | 363,007,770 | 0.01 |
| 67 | am | Amharic | 243,349 | 358,206,762 | 0.01 |
| 68 | ku | Kurdish | 295,314 | 302,990,910 | 0.00 |
| 69 | tl | Filipino | 348,453 | 242,086,456 | 0.00 |
| 70 | yi | Yiddish | 141,156 | 217,584,643 | 0.00 |
| 71 | lo | Lao | 217,842 | 168,256,876 | 0.00 |
| 72 | fy | Western Frisian | 223,268 | 167,193,111 | 0.00 |
| 73 | sd | Sindhi | 109,162 | 147,487,058 | 0.00 |
| 74 | mg | Malagasy | 115,910 | 142,685,412 | 0.00 |
| 75 | or | Odia | 153,461 | 100,323,213 | 0.00 |
| 76 | as | Assamese | 52,627 | 83,787,896 | 0.00 |
| 77 | ug | Uyghur | 47,035 | 77,677,306 | 0.00 |
| 78 | uz | Uzbek | 87,219 | 75,250,787 | 0.00 |
| 79 | la | Latin | 48,968 | 44,176,580 | 0.00 |
| 80 | hr | Croatian | 460,690 | 40,796,811 | 0.00 |
| 81 | sw | Swahili | 66,506 | 30,708,309 | 0.00 |
| 82 | ms | Malay | 238,151 | 19,375,976 | 0.00 |
| 83 | br | Breton | 43,765 | 13,987,037 | 0.00 |
| 84 | sa | Sanskrit | 16,290 | 13,561,367 | 0.00 |
| 85 | gd | Scottish Gaelic | 8,408 | 4,796,485 | 0.00 |
| 86 | su | Sundanese | 1,554 | 1,308,460 | 0.00 |
| 87 | jv | Javanese | 2,058 | 625,429 | 0.00 |
| 88 | tg | Tajik | 483,835 | - | - |
| 89 | ceb | Cebuano | 263,890 | - | - |
| 90 | tt | Tatar | 218,102 | - | - |
| 91 | ckb | Central Kurdish | 172,035 | - | - |
| 92 | lb | Luxembourgish | 165,891 | - | - |
| 93 | mt | Maltese | 151,320 | - | - |
| 94 | nn | Norwegian Nynorsk | 126,083 | - | - |
| 95 | qu | Quechua | 1,202 | 72,101 | 0.00 |
| 96 | ba | Bashkir | 71,957 | - | - |
| 97 | arz | Egyptian Arabic | 71,625 | - | - |
| 98 | dv | Divehi | 66,702 | - | - |
| 99 | bo | Tibetan | 54,185 | - | - |
| 100 | sh | Serbian (Latin) | 45,619 | - | - |
| 101 | yo | Yoruba | 192 | 42,943 | 0.00 |
| 102 | bs | Bosnian | 1,237 | 39,768 | 0.00 |
| 103 | azb | South Azerbaijani | 29,833 | - | - |
| 104 | ht | Haitian Creole | 12 | 26,183 | 0.00 |
| 105 | war | Waray | 23,687 | - | - |
| 106 | cv | Chuvash | 22,570 | - | - |
| 107 | sah | Sakha | 22,141 | - | - |
| 108 | li | Limburgish | 206 | 18,532 | 0.00 |
| 109 | ce | Chechen | 17,322 | - | - |
| 110 | pnb | Western Panjabi | 15,625 | - | - |
| 111 | nds | Low German | 15,139 | - | - |
| 112 | tk | Turkmen | 14,393 | - | - |
| 113 | gn | Guarani | 103 | 12,708 | 0.00 |
| 114 | oc | Occitan | 10,556 | - | - |
| 115 | xmf | Mingrelian | 9,706 | - | - |
| 116 | ast | Asturian | 9,002 | - | - |
| 117 | os | Ossetic | 8,596 | - | - |
| 118 | mhr | Eastern Mari | 7,883 | - | - |
| 119 | pms | Piedmontese | 7,566 | - | - |
| 120 | als[*] | Swiss German | 6,936 | - | - |
| 121 | vo | Volapük | 6,621 | - | - |
| 122 | so | Somali | 39 | 6,053 | 0.00 |
| 123 | bpy | Bishnupriya | 5,087 | - | - |
| 124 | new | Newari | 4,344 | - | - |
| 125 | hsb | Upper Sorbian | 4,244 | - | - |
| 126 | lmo | Lombard | 3,530 | - | - |
| 127 | an | Aragonese | 2,746 | - | - |
| 128 | ilo | Iloko | 2,328 | - | - |
| 129 | mzn | Mazanderani | 1,914 | - | - |
| 130 | lez | Lezghian | 1,806 | - | - |
| 131 | rm | Romansh | 30 | 1,769 | 0.00 |
| 132 | krc | Karachay-Balkar | 1,745 | - | - |
| 133 | min | Minangkabau | 1,429 | - | - |
| 134 | kv | Komi | 1,396 | - | - |
| 135 | wa | Walloon | 1,383 | - | - |
| 136 | jbo | Lojban | 1,349 | - | - |
| 137 | io | Ido | 1,144 | - | - |
| 138 | mrj | Western Mari | 1,056 | - | - |
| 139 | gom | Goan Konkani | 721 | - | - |
| 140 | ia | Interlingua | 613 | - | - |
| 141 | av | Avaric | 438 | - | - |
| 142 | bh | Bihari languages | 265 | - | - |
| 143 | wuu | Wu Chinese | 222 | - | - |
| 144 | nah | Nahuatl languages | 131 | - | - |
| 145 | vec | Venetian | 113 | - | - |
| 146 | bxr | Russia Buriat | 100 | - | - |
| 147 | kw | Cornish | 94 | - | - |
| 148 | mai | Maithili | 93 | - | - |
| 149 | eml[*] | Emiliano-Romagnol | 91 | - | - |
| 150 | dsb | Lower Sorbian | 59 | - | - |
| 151 | xal | Kalmyk | 51 | - | - |
| 152 | lrc | Northern Luri | 43 | - | - |
| 153 | nap | Neapolitan | 31 | - | - |
| 154 | tyv | Tuvinian | 23 | - | - |
| 155 | scn | Sicilian | 21 | - | - |
| 156 | frr | Northern Frisian | 11 | - | - |
| 157 | mwl | Mirandese | 9 | - | - |
| 158 | myv | Erzya | 4 | - | - |
| 159 | ie | Interlingue | 4 | - | - |
| 160 | pam | Pampanga | 4 | - | - |
| 161 | bar | Bavarian | 3 | - | - |
| 162 | yue | Yue Chinese | 3 | - | - |
| 163 | cbk | Chavacano | 2 | - | - |
| 164 | bcl | Central Bikol | 1 | - | - |
| 165 | vls | West Flemish | 1 | - | - |
| 166 | rue | Rusyn | 1 | - | - |
### Dataset Structure
```json
{
""text"": ...,
""timestamp"": ...,
""url"": ...,
""source"": ""mc4"" | ""OSCAR-xxxx"",
}
```
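As a quick illustration of working with this schema, records can be tallied by their `source` field to see how much of a split comes from mC4 versus OSCAR. The records below are hypothetical placeholders following the structure above.

```python
from collections import Counter

# Hypothetical records following the schema above.
records = [
    {'text': '...', 'timestamp': '2021-01-01',
     'url': 'https://example.org/a', 'source': 'mc4'},
    {'text': '...', 'timestamp': '2022-06-15',
     'url': 'https://example.org/b', 'source': 'OSCAR-2301'},
    {'text': '...', 'timestamp': '2020-03-10',
     'url': 'https://example.org/c', 'source': 'mc4'},
]

# Count how many examples stem from each original corpus.
by_source = Counter(r['source'] for r in records)
print(dict(by_source))  # {'mc4': 2, 'OSCAR-2301': 1}
```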
## Considerations for Using the Data
As CulturaX is the cleaned version of the mC4 and OSCAR datasets, both of which were extracted from CommonCrawl, the data might still contain personal and sensitive information.
This must be considered before using the dataset for any purpose, such as training deep learning models.
## License Information
The license terms for CulturaX strictly follow those of `mC4` and `OSCAR`. Please refer to both licenses below when using this dataset.
- [mC4 license](https://huggingface.co/datasets/allenai/c4#license)
- [OSCAR license](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information)
## Acknowledgements
We would like to extend our sincere thanks to Google Cloud for providing the TPU resources that made this project possible. Their support has been invaluable in enabling our team to run evaluations on our dataset efficiently.
## Citation
To cite CulturaX, please use:
```
@inproceedings{nguyen-etal-2024-culturax,
title = ""{C}ultura{X}: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages"",
author = ""Nguyen, Thuat and
Nguyen, Chien Van and
Lai, Viet Dac and
Man, Hieu and
Ngo, Nghia Trung and
Dernoncourt, Franck and
Rossi, Ryan A. and
Nguyen, Thien Huu"",
editor = ""Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen"",
booktitle = ""Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)"",
month = may,
year = ""2024"",
address = ""Torino, Italia"",
publisher = ""ELRA and ICCL"",
url = ""https://aclanthology.org/2024.lrec-main.377"",
pages = ""4226--4237"",
abstract = ""Extensive training datasets represent one of the important factors for the impressive learning capabilities of large language models (LLMs). However, these training datasets for current LLMs, especially the recent state-of-the-art models, are often not fully disclosed. Creating training data for high-performing LLMs involves extensive cleaning and deduplication to ensure the necessary level of quality. The lack of transparency for training data has thus hampered research on attributing and addressing hallucination and bias issues in LLMs, hindering replication efforts and further advancements in the community. These challenges become even more pronounced in multilingual learning scenarios, where the available multilingual text datasets are often inadequately collected and cleaned. Consequently, there is a lack of open-source and readily usable dataset to effectively train LLMs in multiple languages. To overcome this issue, we present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for LLM development. Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language identification, URL-based filtering, metric-based cleaning, document refinement, and data deduplication. CulturaX is released in Hugging Face facilitate research and advancements in multilingual LLMs: https://huggingface.co/datasets/uonlp/CulturaX."",
}
```
## Reference
[1] Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In NAACL 2021. https://huggingface.co/datasets/mc4
[2] Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures. In Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7), 2019. https://oscar-project.org/
[3] Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation."
mozilla-foundation/common_voice_16_1,"{""pretty_name"": ""Common Voice Corpus 16.1"", ""annotations_creators"": [""crowdsourced""], ""language_creators"": [""crowdsourced""], ""language"": [""ab"", ""af"", ""am"", ""ar"", ""as"", ""ast"", ""az"", ""ba"", ""bas"", ""be"", ""bg"", ""bn"", ""br"", ""ca"", ""ckb"", ""cnh"", ""cs"", ""cv"", ""cy"", ""da"", ""de"", ""dv"", ""dyu"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fr"", ""fy"", ""ga"", ""gl"", ""gn"", ""ha"", ""he"", ""hi"", ""hsb"", ""hu"", ""hy"", ""ia"", ""id"", ""ig"", ""is"", ""it"", ""ja"", ""ka"", ""kab"", ""kk"", ""kmr"", ""ko"", ""ky"", ""lg"", ""lij"", ""lo"", ""lt"", ""ltg"", ""lv"", ""mdf"", ""mhr"", ""mk"", ""ml"", ""mn"", ""mr"", ""mrj"", ""mt"", ""myv"", ""nan"", ""ne"", ""nhi"", ""nl"", ""nn"", ""oc"", ""or"", ""os"", ""pa"", ""pl"", ""ps"", ""pt"", ""quy"", ""rm"", ""ro"", ""ru"", ""rw"", ""sah"", ""sat"", ""sc"", ""sk"", ""skr"", ""sl"", ""sq"", ""sr"", ""sv"", ""sw"", ""ta"", ""te"", ""th"", ""ti"", ""tig"", ""tk"", ""tok"", ""tr"", ""tt"", ""tw"", ""ug"", ""uk"", ""ur"", ""uz"", ""vi"", ""vot"", ""yi"", ""yo"", ""yue"", ""zgh"", ""zh""], ""language_bcp47"": [""zh-CN"", ""zh-HK"", ""zh-TW"", ""sv-SE"", ""rm-sursilv"", ""rm-vallader"", ""pa-IN"", ""nn-NO"", ""ne-NP"", ""nan-tw"", ""hy-AM"", ""ga-IE"", ""fy-NL""], ""license"": [""cc0-1.0""], ""multilinguality"": [""multilingual""], ""paperswithcode_id"": ""common-voice"", ""extra_gated_prompt"": ""By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset.""}","# Dataset Card for Common Voice Corpus 16
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:vaibhav@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of unique MP3 recordings, each paired with a corresponding text file.
Many of the 30328 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 19673 validated hours in 120 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Languages
```
Abkhaz, Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hebrew, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latgalian, Latvian, Ligurian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Ossetian, Pashto, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamazight, Tamil, Tatar, Telugu, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Western Sierra Puebla Nahuatl, Yiddish, Yoruba
```
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., ""hi"" for Hindi):
```python
from datasets import load_dataset
cv_16 = load_dataset(""mozilla-foundation/common_voice_16_1"", ""hi"", split=""train"")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
cv_16 = load_dataset(""mozilla-foundation/common_voice_16_1"", ""hi"", split=""train"", streaming=True)
print(next(iter(cv_16)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_16 = load_dataset(""mozilla-foundation/common_voice_16_1"", ""hi"", split=""train"")
batch_sampler = BatchSampler(RandomSampler(cv_16), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_16, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_16 = load_dataset(""mozilla-foundation/common_voice_16_1"", ""hi"", split=""train"", streaming=True)
dataloader = DataLoader(cv_16, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 16 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0][""audio""]` the audio file is automatically decoded and resampled to `dataset.features[""audio""].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `""audio""` column, *i.e.* `dataset[0][""audio""]` should **always** be preferred over `dataset[""audio""][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data has been reviewed and received enough upvotes to be considered of high quality.
The invalidated data has been reviewed and received downvotes indicating that it is of low quality.
The reported data has been reported by users, for various reasons.
The other data has not yet been reviewed.
The dev, test and train portions all consist of data that has been reviewed and deemed of high quality.
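The train/dev/test portions are typically kept speaker-disjoint on `client_id`, so no voice appears in more than one split. A minimal sketch of such a partition (a hypothetical helper using hash bucketing — not the official Common Voice tooling):

```python
import hashlib

def assign_split(client_id: str) -> str:
    # Hash the speaker id so every clip from one speaker lands in the same
    # split, deterministically (roughly 80/10/10). Hypothetical helper.
    bucket = int(hashlib.sha256(client_id.encode()).hexdigest(), 16) % 10
    if bucket < 8:
        return 'train'
    if bucket == 8:
        return 'dev'
    return 'test'

clips = [
    {'client_id': 'speaker_a', 'path': 'clip1.mp3'},
    {'client_id': 'speaker_a', 'path': 'clip2.mp3'},
    {'client_id': 'speaker_b', 'path': 'clip3.mp3'},
]
splits = {c['path']: assign_split(c['client_id']) for c in clips}
# clips from the same speaker always share a split
assert splits['clip1.mp3'] == splits['clip2.mp3']
```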
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_ These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer from the audio alone whether a sentence is a quotation. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset(""mozilla-foundation/common_voice_16_1"", ""en"", use_auth_token=True)
def prepare_dataset(batch):
    """"""Function to preprocess the dataset with the .map method""""""
    transcription = batch[""sentence""]
    if transcription.startswith('""') and transcription.endswith('""'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]
    if transcription[-1] not in [""."", ""?"", ""!""]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "".""
    batch[""sentence""] = transcription
    return batch
ds = ds.map(prepare_dataset, desc=""preprocess dataset"")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```"
KorQuAD/squad_kor_v1,"{""annotations_creators"": [""crowdsourced""], ""language_creators"": [""found""], ""language"": [""ko""], ""license"": [""cc-by-nd-4.0""], ""multilinguality"": [""monolingual""], ""size_categories"": [""10K
and identify the date.
### 2. [Optional] Get a refreshed list of languages
This is optional because it is not very likely that a new language will have
suddenly appeared since the last version _and_ have a significant dataset.
Navigate to and copy the
languages column from the ""Detailed list"" table (near the end of the page).
Copy that content in the form of a Python list into `lang_def.py` (at the top
of the repo) under a new date.
### 3. [Optional] Create Media and Category aliases
In order to properly extract links to images and media in all languages, we
must refresh the two corresponding files. To do so, from the root of the repo,
run
```sh
python -m prep.create_aliases
```
This will create or update these two files at the root of the repo:
- `media_aliases.py`
- `category_aliases.py`
These files are used in the final step.
### 4. Build and prepare the datasets into sharded parquet files
Running this script downloads the Wikipedia dumps for each language in
`lang_def.py` and shards each language dataset into the appropriate number of
shards (max size ~250MB).
```sh
python -m prep.build --date 20230601
```
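The shard sizing described above (max ~250MB per shard) amounts to a ceiling division over the serialized dataset size; a sketch of the idea (hypothetical helper, not the repo's actual code):

```python
import math

MAX_SHARD_BYTES = 250 * 2**20  # ~250MB per parquet shard, as described above

def num_shards(dataset_bytes: int) -> int:
    # At least one shard; otherwise ceiling-divide by the max shard size.
    return max(1, math.ceil(dataset_bytes / MAX_SHARD_BYTES))

print(num_shards(1_000_000))   # a tiny language fits in a single shard
print(num_shards(10 * 2**30))  # a 10 GiB language -> 41 shards
```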
There are other options:
```text
$ python -m prep.build --help
usage: Wikipedia Builder [-h] [--date DATE] [--language [LANG ...]] [--cache-dir DIR] [--mirror MIRROR]
Prepares the Wikipedia dataset for each language
optional arguments:
-h, --help show this help message and exit
--date DATE Wikipedia dump date (e.g. 20230601)
--language [LANG ...] Language code (e.g. en). If missing, all languages are processed
--cache-dir DIR Cache directory for 🤗 Datasets
--mirror MIRROR Mirror URL
```
For instance, for faster downloads of the dumps, use the mirror option:
```sh
python -m prep.build \
--date 20230601 \
--language bs \
--mirror https://mirror.accum.se/mirror/wikimedia.org/dumps/
```
It will download the dumps at around 60MB/s instead of the capped speed
(~4MB/s) from . The script will skip existing
directories, allowing you to run the script in several passes.
Notes:
- These instructions build upon the build process of the
[Wikipedia](https://huggingface.co/datasets/wikipedia) 🤗 Dataset. HF did a
fantastic job, I just pushed it a bit further.
- Be aware that not all mirrors contain all dumps. For instance mirror.accum.se
does not contain dumps for languages such as be-x-old or cbk-zam. My own
solution is to run a first pass using the aforementioned mirror, and a second
pass with the official `https://dumps.wikimedia.org` site (omitting the
`--mirror` parameter)."
amphion/Emilia-Dataset,"{""license"": ""cc-by-nc-4.0"", ""task_categories"": [""text-to-speech"", ""automatic-speech-recognition""], ""language"": [""zh"", ""en"", ""ja"", ""fr"", ""de"", ""ko""], ""pretty_name"": ""Emilia"", ""size_categories"": [""10M
This is the official repository 👑 for the **Emilia** dataset and the source code for the **Emilia-Pipe** speech data preprocessing pipeline.
## News 🔥
- **2024/08/28**: Welcome to join Amphion's [Discord channel](https://discord.com/invite/ZxxREr3Y) to stay connected and engage with our community!
- **2024/08/27**: *The Emilia dataset is now publicly available!* Discover the most extensive and diverse speech generation dataset with 101k hours of in-the-wild speech data now at [HuggingFace](https://huggingface.co/datasets/amphion/Emilia-Dataset) or [OpenDataLab](https://opendatalab.com/Amphion/Emilia)! 👑👑👑
- **2024/07/08**: Our preprint [paper](https://arxiv.org/abs/2407.05361) is now available! 🔥🔥🔥
- **2024/07/03**: We welcome everyone to check our [homepage](https://emilia-dataset.github.io/Emilia-Demo-Page/) for our brief introduction for Emilia dataset and our demos!
- **2024/07/01**: We released Emilia and Emilia-Pipe! We welcome everyone to explore them on our [GitHub](https://github.com/open-mmlab/Amphion/tree/main/preprocessors/Emilia)! 🎉🎉🎉
## Emilia Overview ⭐️
The **Emilia** dataset is a comprehensive, multilingual dataset with the following features:
- containing over *101k* hours of speech data;
- covering six different languages: *English (En), Chinese (Zh), German (De), French (Fr), Japanese (Ja), and Korean (Ko)*;
- containing diverse speech data with *various speaking styles* from diverse video platforms and podcasts on the Internet, covering various content genres such as talk shows, interviews, debates, sports commentary, and audiobooks.
The table below provides the duration statistics for each language in the dataset.
| Language | Duration (hours) |
|:-----------:|:----------------:|
| English | 46,828 |
| Chinese | 49,922 |
| German | 1,590 |
| French | 1,381 |
| Japanese | 1,715 |
| Korean | 217 |
The **Emilia-Pipe** is the first open-source preprocessing pipeline designed to transform raw, in-the-wild speech data into high-quality training data with annotations for speech generation. This pipeline can process one hour of raw audio into model-ready data in just a few minutes, requiring only the raw speech data.
Detailed descriptions for the Emilia and Emilia-Pipe can be found in our [paper](https://arxiv.org/abs/2407.05361).
## Emilia Dataset Usage 📖
Emilia is publicly available at [HuggingFace](https://huggingface.co/datasets/amphion/Emilia-Dataset).
If you are in mainland China or have connectivity issues with HuggingFace, you can also download Emilia from [OpenDataLab](https://opendatalab.com/Amphion/Emilia).
- To download from HuggingFace:
1. Gain access to the dataset and get the HF access token from: [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
2. Install dependencies and login HF:
- Install Python
- Run `pip install librosa soundfile datasets huggingface_hub[cli]`
- Login by `huggingface-cli login` and paste the HF access token. Check [here](https://huggingface.co/docs/huggingface_hub/guides/cli#huggingface-cli-login) for details.
3. Use the following code to load Emilia:
```py
from datasets import load_dataset
dataset = load_dataset(""amphion/Emilia-Dataset"", streaming=True)
print(dataset)
print(next(iter(dataset['train'])))
```
- To download from OpenDataLab (i.e., OpenXLab), please follow the guidance [here](https://speechteam.feishu.cn/wiki/PC8Ew5igviqBiJkElMJcJxNonJc) to gain access.
**ENJOY USING EMILIA!!!** 🔥
### Use cases
If you want to load a subset of Emilia, e.g., only language `DE`, you can use the following code:
```py
from datasets import load_dataset
path = ""DE/*.tar""
dataset = load_dataset(""amphion/Emilia-Dataset"", data_files={""de"": path}, split=""de"", streaming=True)
print(dataset)  # should show only 90 n_shards instead of 2360
print(next(iter(dataset)))
```
If you want to download all files to your local before using Emilia, remove the `streaming=True` argument:
```py
from datasets import load_dataset
dataset = load_dataset(""amphion/Emilia-Dataset"") # prepare 2.4TB space to store Emilia
print(dataset)
```
### Re-build or Processing your own data
If you wish to re-build Emilia from scratch, you may download the raw audio files from the [provided URL list](https://huggingface.co/datasets/amphion/Emilia) and use our open-source [Emilia-Pipe](https://github.com/open-mmlab/Amphion/tree/main/preprocessors/Emilia) preprocessing pipeline to preprocess the raw data. Additionally, users can easily use Emilia-Pipe to preprocess their own raw speech data for custom needs. By open-sourcing the Emilia-Pipe code, we aim to enable the speech community to collaborate on large-scale speech generation research.
### Notes
*Please note that Emilia does not own the copyright to the audio files; the copyright remains with the original owners of the videos or audio. Users are permitted to use this dataset only for non-commercial purposes under the CC BY-NC-4.0 license.*
## Emilia Dataset Structure ⛪️
### Structure on HuggingFace
On HuggingFace, Emilia is now formatted as [WebDataset](https://github.com/webdataset/webdataset).
Each audio file is tarred with a corresponding JSON file (sharing the same prefix filename) across 2360 tar files.
By utilizing WebDataset, you can easily stream audio data, which is orders of magnitude faster than reading separate data files one by one.
Read the *Emilia Dataset Usage 📖* part for a detailed usage guide.
Learn more about WebDataset [here](https://huggingface.co/docs/hub/datasets-webdataset).
*PS: If you want to download the `OpenDataLab` format from HuggingFace, you can specify the `revision` argument to `fc71e07e8572f5f3be1dbd02ed3172a4d298f152`, [which](https://huggingface.co/datasets/amphion/Emilia-Dataset/tree/fc71e07e8572f5f3be1dbd02ed3172a4d298f152) is the old format.*
### Structure on OpenDataLab
On OpenDataLab, Emilia is formatted using the following structure.
Structure example:
```
|-- openemilia_all.tar.gz (all .JSONL files are gzipped with directory structure in this file)
|-- EN (114 batches)
| |-- EN_B00000.jsonl
| |-- EN_B00000 (= EN_B00000.tar.gz)
| | |-- EN_B00000_S00000
| | | `-- mp3
| | | |-- EN_B00000_S00000_W000000.mp3
| | | `-- EN_B00000_S00000_W000001.mp3
| | |-- ...
| |-- ...
| |-- EN_B00113.jsonl
| `-- EN_B00113
|-- ZH (92 batches)
|-- DE (9 batches)
|-- FR (10 batches)
|-- JA (7 batches)
|-- KO (4 batches)
```
JSONL files example:
```
{""id"": ""EN_B00000_S00000_W000000"", ""wav"": ""EN_B00000/EN_B00000_S00000/mp3/EN_B00000_S00000_W000000.mp3"", ""text"": "" You can help my mother and you- No. You didn't leave a bad situation back home to get caught up in another one here. What happened to you, Los Angeles?"", ""duration"": 6.264, ""speaker"": ""EN_B00000_S00000"", ""language"": ""en"", ""dnsmos"": 3.2927}
{""id"": ""EN_B00000_S00000_W000001"", ""wav"": ""EN_B00000/EN_B00000_S00000/mp3/EN_B00000_S00000_W000001.mp3"", ""text"": "" Honda's gone, 20 squads done. X is gonna split us up and put us on different squads. The team's come and go, but 20 squad, can't believe it's ending."", ""duration"": 8.031, ""speaker"": ""EN_B00000_S00000"", ""language"": ""en"", ""dnsmos"": 3.0442}
```
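Each line of these JSONL manifests is a standalone JSON object, so they can be consumed with the standard `json` module — for example to filter utterances by DNSMOS score before training (a sketch; the field names follow the example records above, and the threshold is arbitrary):

```python
import json

# Two manifest lines, trimmed from the examples above (text omitted for brevity)
manifest = '\n'.join([
    '{"id": "EN_B00000_S00000_W000000", "duration": 6.264, "language": "en", "dnsmos": 3.2927}',
    '{"id": "EN_B00000_S00000_W000001", "duration": 8.031, "language": "en", "dnsmos": 3.0442}',
])

records = [json.loads(line) for line in manifest.splitlines()]
# keep only utterances above an (arbitrary) quality threshold
kept = [r['id'] for r in records if r['dnsmos'] >= 3.1]
print(kept)  # only the first record passes
```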
## Reference 📖
If you use the Emilia dataset or the Emilia-Pipe pipeline, please cite the following papers:
```bibtex
@inproceedings{emilia,
author={He, Haorui and Shang, Zengqiang and Wang, Chaoren and Li, Xuyuan and Gu, Yicheng and Hua, Hua and Liu, Liwei and Yang, Chen and Li, Jiaqi and Shi, Peiyang and Wang, Yuancheng and Chen, Kai and Zhang, Pengyuan and Wu, Zhizheng},
title={Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech Generation},
booktitle={Proc.~of SLT},
year={2024}
}
```
```bibtex
@inproceedings{amphion,
author={Zhang, Xueyao and Xue, Liumeng and Gu, Yicheng and Wang, Yuancheng and Li, Jiaqi and He, Haorui and Wang, Chaoren and Song, Ting and Chen, Xi and Fang, Zihao and Chen, Haopeng and Zhang, Junan and Tang, Tze Ying and Zou, Lexiao and Wang, Mingxuan and Han, Jun and Chen, Kai and Li, Haizhou and Wu, Zhizheng},
title={Amphion: An Open-Source Audio, Music and Speech Generation Toolkit},
booktitle={Proc.~of SLT},
year={2024}
}
```"
miracl/miracl-corpus,"{""annotations_creators"": [""expert-generated""], ""language"": [""ar"", ""bn"", ""en"", ""es"", ""fa"", ""fi"", ""fr"", ""hi"", ""id"", ""ja"", ""ko"", ""ru"", ""sw"", ""te"", ""th"", ""zh""], ""multilinguality"": [""multilingual""], ""pretty_name"": ""MIRACL-corpus"", ""size_categories"": [], ""source_datasets"": [], ""tags"": [], ""task_categories"": [""text-retrieval""], ""license"": [""apache-2.0""], ""task_ids"": [""document-retrieval""]}","# Dataset Card for MIRACL Corpus
## Dataset Description
* **Homepage:** http://miracl.ai
* **Repository:** https://github.com/project-miracl/miracl
* **Paper:** https://arxiv.org/abs/2210.09984
MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
This dataset contains the collection data of the 16 ""known languages"". The remaining 2 ""surprise languages"" will not be released until later.
The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a ""document"" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Dataset Structure
Each retrieval unit contains three fields: `docid`, `title`, and `text`. Consider an example from the English corpus:
```
{
""docid"": ""39#0"",
""title"": ""Albedo"",
""text"": ""Albedo (meaning 'whiteness') is the measure of the diffuse reflection of solar radiation out of the total solar radiation received by an astronomical body (e.g. a planet like Earth). It is dimensionless and measured on a scale from 0 (corresponding to a black body that absorbs all incident radiation) to 1 (corresponding to a body that reflects all incident radiation).""
}
```
The `docid` has the schema `X#Y`, where all passages with the same `X` come from the same Wikipedia article and `Y` denotes the passage within that article, numbered sequentially. The `text` field contains the text of the passage, and the `title` field contains the title of the article the passage comes from.
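Since all passages of one article share the `X` prefix, this schema makes it easy to regroup a corpus back into articles; a small plain-Python sketch (the passage texts here are made-up placeholders):

```python
from collections import defaultdict

def parse_docid(docid: str):
    # 'X#Y' -> (article id, passage index within that article)
    article_id, passage_idx = docid.split('#')
    return article_id, int(passage_idx)

# toy passages keyed by docid
passages = {'39#1': 'second passage', '39#0': 'first passage', '290#0': 'other article'}
by_article = defaultdict(list)
for docid, text in passages.items():
    article_id, idx = parse_docid(docid)
    by_article[article_id].append((idx, text))
print(sorted(by_article['39']))  # passages 0 and 1 of article 39, in order
```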
The collection can be loaded using:
```python
import datasets

lang = 'ar'  # or any of the 16 languages
miracl_corpus = datasets.load_dataset('miracl/miracl-corpus', lang)['train']
for doc in miracl_corpus:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
```
## Dataset Statistics and Links
The following table contains the number of passages and Wikipedia articles in the collection for each language, along with links to the datasets and raw Wikipedia dumps.
| Language | # of Passages | # of Articles | Links | Raw Wiki Dump |
|:----------------|--------------:|--------------:|:------|:------|
| Arabic (ar) | 2,061,414 | 656,982 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-ar) | [🌏](https://archive.org/download/arwiki-20190201/arwiki-20190201-pages-articles-multistream.xml.bz2)
| Bengali (bn) | 297,265 | 63,762 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-bn) | [🌏](https://archive.org/download/bnwiki-20190201/bnwiki-20190201-pages-articles-multistream.xml.bz2)
| English (en) | 32,893,221 | 5,758,285 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-en) | [🌏](https://archive.org/download/enwiki-20190201/enwiki-20190201-pages-articles-multistream.xml.bz2)
| Spanish (es) | 10,373,953 | 1,669,181 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-es) | [🌏](https://archive.org/download/eswiki-20220301/eswiki-20220301-pages-articles-multistream.xml.bz2)
| Persian (fa) | 2,207,172 | 857,827 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-fa) | [🌏](https://archive.org/download/fawiki-20220301/fawiki-20220301-pages-articles-multistream.xml.bz2)
| Finnish (fi) | 1,883,509 | 447,815 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-fi) | [🌏](https://archive.org/download/fiwiki-20190201/fiwiki-20190201-pages-articles-multistream.xml.bz2)
| French (fr) | 14,636,953 | 2,325,608 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-fr) | [🌏](https://archive.org/download/frwiki-20220301/frwiki-20220301-pages-articles-multistream.xml.bz2)
| Hindi (hi) | 506,264 | 148,107 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-hi) | [🌏](https://archive.org/download/hiwiki-20220301/hiwiki-20220301-pages-articles-multistream.xml.bz2)
| Indonesian (id) | 1,446,315 | 446,330 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-id) | [🌏](https://archive.org/download/idwiki-20190201/idwiki-20190201-pages-articles-multistream.xml.bz2)
| Japanese (ja) | 6,953,614 | 1,133,444 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-ja) | [🌏](https://archive.org/download/jawiki-20190201/jawiki-20190201-pages-articles-multistream.xml.bz2)
| Korean (ko) | 1,486,752 | 437,373 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-ko) | [🌏](https://archive.org/download/kowiki-20190201/kowiki-20190201-pages-articles-multistream.xml.bz2)
| Russian (ru) | 9,543,918 | 1,476,045 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-ru) | [🌏](https://archive.org/download/ruwiki-20190201/ruwiki-20190201-pages-articles-multistream.xml.bz2)
| Swahili (sw) | 131,924 | 47,793 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-sw) | [🌏](https://archive.org/download/swwiki-20190201/swwiki-20190201-pages-articles-multistream.xml.bz2)
| Telugu (te) | 518,079 | 66,353 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-te) | [🌏](https://archive.org/download/tewiki-20190201/tewiki-20190201-pages-articles-multistream.xml.bz2)
| Thai (th) | 542,166 | 128,179 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-th) | [🌏](https://archive.org/download/thwiki-20190101/thwiki-20190101-pages-articles-multistream.xml.bz2)
| Chinese (zh) | 4,934,368 | 1,246,389 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-zh) | [🌏](https://archive.org/download/zhwiki-20220301/zhwiki-20220301-pages-articles-multistream.xml.bz2)"
HPLT/HPLT2.0_cleaned,"{""configs"": [{""config_name"": ""ace_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""ace_Arab*/train-*""}]}, {""config_name"": ""ace_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ace_Latn*/train-*""}]}, {""config_name"": ""afr_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""afr_Latn*/train-*""}]}, {""config_name"": ""als_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""als_Latn*/train-*""}]}, {""config_name"": ""amh_Ethi"", ""data_files"": [{""split"": ""train"", ""path"": ""amh_Ethi*/train-*""}]}, {""config_name"": ""ara_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""ara_Arab*/train-*""}]}, {""config_name"": ""asm_Beng"", ""data_files"": [{""split"": ""train"", ""path"": ""asm_Beng*/train-*""}]}, {""config_name"": ""ast_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ast_Latn*/train-*""}]}, {""config_name"": ""awa_Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""awa_Deva*/train-*""}]}, {""config_name"": ""ayr_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ayr_Latn*/train-*""}]}, {""config_name"": ""azb_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""azb_Arab*/train-*""}]}, {""config_name"": ""azj_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""azj_Latn*/train-*""}]}, {""config_name"": ""bak_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""bak_Cyrl*/train-*""}]}, {""config_name"": ""ban_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ban_Latn*/train-*""}]}, {""config_name"": ""bel_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""bel_Cyrl*/train-*""}]}, {""config_name"": ""bem_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""bem_Latn*/train-*""}]}, {""config_name"": ""ben_Beng"", ""data_files"": [{""split"": ""train"", ""path"": ""ben_Beng*/train-*""}]}, {""config_name"": ""bho_Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""bho_Deva*/train-*""}]}, {""config_name"": 
""bjn_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""bjn_Arab*/train-*""}]}, {""config_name"": ""bjn_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""bjn_Latn*/train-*""}]}, {""config_name"": ""bod_Tibt"", ""data_files"": [{""split"": ""train"", ""path"": ""bod_Tibt*/train-*""}]}, {""config_name"": ""bos_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""bos_Latn*/train-*""}]}, {""config_name"": ""bug_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""bug_Latn*/train-*""}]}, {""config_name"": ""bul_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""bul_Cyrl*/train-*""}]}, {""config_name"": ""cat_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""cat_Latn*/train-*""}]}, {""config_name"": ""ceb_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ceb_Latn*/train-*""}]}, {""config_name"": ""ces_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ces_Latn*/train-*""}]}, {""config_name"": ""cjk_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""cjk_Latn*/train-*""}]}, {""config_name"": ""ckb_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""ckb_Arab*/train-*""}]}, {""config_name"": ""crh_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""crh_Latn*/train-*""}]}, {""config_name"": ""cym_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""cym_Latn*/train-*""}]}, {""config_name"": ""dan_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""dan_Latn*/train-*""}]}, {""config_name"": ""deu_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""deu_Latn*/train-*""}]}, {""config_name"": ""dik_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""dik_Latn*/train-*""}]}, {""config_name"": ""dyu_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""dyu_Latn*/train-*""}]}, {""config_name"": ""dzo_Tibt"", ""data_files"": [{""split"": ""train"", ""path"": ""dzo_Tibt*/train-*""}]}, {""config_name"": ""ell_Grek"", ""data_files"": [{""split"": ""train"", 
""path"": ""ell_Grek*/train-*""}]}, {""config_name"": ""eng_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""eng_Latn*/train-*""}]}, {""config_name"": ""epo_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""epo_Latn*/train-*""}]}, {""config_name"": ""est_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""est_Latn*/train-*""}]}, {""config_name"": ""eus_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""eus_Latn*/train-*""}]}, {""config_name"": ""ewe_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ewe_Latn*/train-*""}]}, {""config_name"": ""fao_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""fao_Latn*/train-*""}]}, {""config_name"": ""fij_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""fij_Latn*/train-*""}]}, {""config_name"": ""fin_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""fin_Latn*/train-*""}]}, {""config_name"": ""fon_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""fon_Latn*/train-*""}]}, {""config_name"": ""fra_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""fra_Latn*/train-*""}]}, {""config_name"": ""fur_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""fur_Latn*/train-*""}]}, {""config_name"": ""fuv_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""fuv_Latn*/train-*""}]}, {""config_name"": ""gaz_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""gaz_Latn*/train-*""}]}, {""config_name"": ""gla_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""gla_Latn*/train-*""}]}, {""config_name"": ""gle_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""gle_Latn*/train-*""}]}, {""config_name"": ""glg_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""glg_Latn*/train-*""}]}, {""config_name"": ""grn_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""grn_Latn*/train-*""}]}, {""config_name"": ""guj_Gujr"", ""data_files"": [{""split"": ""train"", ""path"": ""guj_Gujr*/train-*""}]}, {""config_name"": 
""hat_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""hat_Latn*/train-*""}]}, {""config_name"": ""hau_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""hau_Latn*/train-*""}]}, {""config_name"": ""heb_Hebr"", ""data_files"": [{""split"": ""train"", ""path"": ""heb_Hebr*/train-*""}]}, {""config_name"": ""hin_Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""hin_Deva*/train-*""}]}, {""config_name"": ""hne_Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""hne_Deva*/train-*""}]}, {""config_name"": ""hrv_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""hrv_Latn*/train-*""}]}, {""config_name"": ""hun_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""hun_Latn*/train-*""}]}, {""config_name"": ""hye_Armn"", ""data_files"": [{""split"": ""train"", ""path"": ""hye_Armn*/train-*""}]}, {""config_name"": ""ibo_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ibo_Latn*/train-*""}]}, {""config_name"": ""ilo_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ilo_Latn*/train-*""}]}, {""config_name"": ""ind_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ind_Latn*/train-*""}]}, {""config_name"": ""isl_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""isl_Latn*/train-*""}]}, {""config_name"": ""ita_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ita_Latn*/train-*""}]}, {""config_name"": ""jav_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""jav_Latn*/train-*""}]}, {""config_name"": ""jpn_Jpan"", ""data_files"": [{""split"": ""train"", ""path"": ""jpn_Jpan*/train-*""}]}, {""config_name"": ""kab_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""kab_Latn*/train-*""}]}, {""config_name"": ""kac_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""kac_Latn*/train-*""}]}, {""config_name"": ""kam_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""kam_Latn*/train-*""}]}, {""config_name"": ""kan_Knda"", ""data_files"": [{""split"": ""train"", 
""path"": ""kan_Knda*/train-*""}]}, {""config_name"": ""kas_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""kas_Arab*/train-*""}]}, {""config_name"": ""kas_Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""kas_Deva*/train-*""}]}, {""config_name"": ""kat_Geor"", ""data_files"": [{""split"": ""train"", ""path"": ""kat_Geor*/train-*""}]}, {""config_name"": ""kaz_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""kaz_Cyrl*/train-*""}]}, {""config_name"": ""kbp_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""kbp_Latn*/train-*""}]}, {""config_name"": ""kea_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""kea_Latn*/train-*""}]}, {""config_name"": ""khk_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""khk_Cyrl*/train-*""}]}, {""config_name"": ""khm_Khmr"", ""data_files"": [{""split"": ""train"", ""path"": ""khm_Khmr*/train-*""}]}, {""config_name"": ""kik_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""kik_Latn*/train-*""}]}, {""config_name"": ""kin_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""kin_Latn*/train-*""}]}, {""config_name"": ""kir_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""kir_Cyrl*/train-*""}]}, {""config_name"": ""kmb_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""kmb_Latn*/train-*""}]}, {""config_name"": ""kmr_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""kmr_Latn*/train-*""}]}, {""config_name"": ""knc_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""knc_Arab*/train-*""}]}, {""config_name"": ""kon_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""kon_Latn*/train-*""}]}, {""config_name"": ""kor_Hang"", ""data_files"": [{""split"": ""train"", ""path"": ""kor_Hang*/train-*""}]}, {""config_name"": ""lao_Laoo"", ""data_files"": [{""split"": ""train"", ""path"": ""lao_Laoo*/train-*""}]}, {""config_name"": ""lij_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""lij_Latn*/train-*""}]}, {""config_name"": 
""lim_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""lim_Latn*/train-*""}]}, {""config_name"": ""lin_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""lin_Latn*/train-*""}]}, {""config_name"": ""lit_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""lit_Latn*/train-*""}]}, {""config_name"": ""lmo_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""lmo_Latn*/train-*""}]}, {""config_name"": ""ltg_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ltg_Latn*/train-*""}]}, {""config_name"": ""ltz_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ltz_Latn*/train-*""}]}, {""config_name"": ""lua_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""lua_Latn*/train-*""}]}, {""config_name"": ""lug_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""lug_Latn*/train-*""}]}, {""config_name"": ""luo_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""luo_Latn*/train-*""}]}, {""config_name"": ""lus_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""lus_Latn*/train-*""}]}, {""config_name"": ""lvs_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""lvs_Latn*/train-*""}]}, {""config_name"": ""mag_Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""mag_Deva*/train-*""}]}, {""config_name"": ""mai_Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""mai_Deva*/train-*""}]}, {""config_name"": ""mal_Mlym"", ""data_files"": [{""split"": ""train"", ""path"": ""mal_Mlym*/train-*""}]}, {""config_name"": ""mar_Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""mar_Deva*/train-*""}]}, {""config_name"": ""min_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""min_Latn*/train-*""}]}, {""config_name"": ""mkd_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""mkd_Cyrl*/train-*""}]}, {""config_name"": ""mlt_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""mlt_Latn*/train-*""}]}, {""config_name"": ""mni_Beng"", ""data_files"": [{""split"": ""train"", 
""path"": ""mni_Beng*/train-*""}]}, {""config_name"": ""mos_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""mos_Latn*/train-*""}]}, {""config_name"": ""mri_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""mri_Latn*/train-*""}]}, {""config_name"": ""mya_Mymr"", ""data_files"": [{""split"": ""train"", ""path"": ""mya_Mymr*/train-*""}]}, {""config_name"": ""nld_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""nld_Latn*/train-*""}]}, {""config_name"": ""nno_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""nno_Latn*/train-*""}]}, {""config_name"": ""nob_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""nob_Latn*/train-*""}]}, {""config_name"": ""npi_Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""npi_Deva*/train-*""}]}, {""config_name"": ""nso_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""nso_Latn*/train-*""}]}, {""config_name"": ""nus_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""nus_Latn*/train-*""}]}, {""config_name"": ""nya_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""nya_Latn*/train-*""}]}, {""config_name"": ""oci_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""oci_Latn*/train-*""}]}, {""config_name"": ""ory_Orya"", ""data_files"": [{""split"": ""train"", ""path"": ""ory_Orya*/train-*""}]}, {""config_name"": ""pan_Guru"", ""data_files"": [{""split"": ""train"", ""path"": ""pan_Guru*/train-*""}]}, {""config_name"": ""pap_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""pap_Latn*/train-*""}]}, {""config_name"": ""pbt_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""pbt_Arab*/train-*""}]}, {""config_name"": ""pes_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""pes_Arab*/train-*""}]}, {""config_name"": ""plt_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""plt_Latn*/train-*""}]}, {""config_name"": ""pol_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""pol_Latn*/train-*""}]}, {""config_name"": 
""por_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""por_Latn*/train-*""}]}, {""config_name"": ""prs_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""prs_Arab*/train-*""}]}, {""config_name"": ""quy_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""quy_Latn*/train-*""}]}, {""config_name"": ""ron_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ron_Latn*/train-*""}]}, {""config_name"": ""run_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""run_Latn*/train-*""}]}, {""config_name"": ""rus_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""rus_Cyrl*/train-*""}]}, {""config_name"": ""san_Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""san_Deva*/train-*""}]}, {""config_name"": ""sat_Olck"", ""data_files"": [{""split"": ""train"", ""path"": ""sat_Olck*/train-*""}]}, {""config_name"": ""scn_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""scn_Latn*/train-*""}]}, {""config_name"": ""shn_Mymr"", ""data_files"": [{""split"": ""train"", ""path"": ""shn_Mymr*/train-*""}]}, {""config_name"": ""sin_Sinh"", ""data_files"": [{""split"": ""train"", ""path"": ""sin_Sinh*/train-*""}]}, {""config_name"": ""slk_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""slk_Latn*/train-*""}]}, {""config_name"": ""slv_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""slv_Latn*/train-*""}]}, {""config_name"": ""smo_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""smo_Latn*/train-*""}]}, {""config_name"": ""sna_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""sna_Latn*/train-*""}]}, {""config_name"": ""snd_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""snd_Arab*/train-*""}]}, {""config_name"": ""som_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""som_Latn*/train-*""}]}, {""config_name"": ""sot_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""sot_Latn*/train-*""}]}, {""config_name"": ""spa_Latn"", ""data_files"": [{""split"": ""train"", 
""path"": ""spa_Latn*/train-*""}]}, {""config_name"": ""srd_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""srd_Latn*/train-*""}]}, {""config_name"": ""srp_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""srp_Cyrl*/train-*""}]}, {""config_name"": ""ssw_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""ssw_Latn*/train-*""}]}, {""config_name"": ""sun_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""sun_Latn*/train-*""}]}, {""config_name"": ""swe_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""swe_Latn*/train-*""}]}, {""config_name"": ""swh_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""swh_Latn*/train-*""}]}, {""config_name"": ""szl_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""szl_Latn*/train-*""}]}, {""config_name"": ""tam_Taml"", ""data_files"": [{""split"": ""train"", ""path"": ""tam_Taml*/train-*""}]}, {""config_name"": ""taq_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""taq_Latn*/train-*""}]}, {""config_name"": ""tat_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""tat_Cyrl*/train-*""}]}, {""config_name"": ""tel_Telu"", ""data_files"": [{""split"": ""train"", ""path"": ""tel_Telu*/train-*""}]}, {""config_name"": ""tgk_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""tgk_Cyrl*/train-*""}]}, {""config_name"": ""tgl_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""tgl_Latn*/train-*""}]}, {""config_name"": ""tha_Thai"", ""data_files"": [{""split"": ""train"", ""path"": ""tha_Thai*/train-*""}]}, {""config_name"": ""tir_Ethi"", ""data_files"": [{""split"": ""train"", ""path"": ""tir_Ethi*/train-*""}]}, {""config_name"": ""tpi_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""tpi_Latn*/train-*""}]}, {""config_name"": ""tsn_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""tsn_Latn*/train-*""}]}, {""config_name"": ""tso_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""tso_Latn*/train-*""}]}, {""config_name"": 
""tuk_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""tuk_Latn*/train-*""}]}, {""config_name"": ""tum_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""tum_Latn*/train-*""}]}, {""config_name"": ""tur_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""tur_Latn*/train-*""}]}, {""config_name"": ""twi_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""twi_Latn*/train-*""}]}, {""config_name"": ""uig_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""uig_Arab*/train-*""}]}, {""config_name"": ""ukr_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""ukr_Cyrl*/train-*""}]}, {""config_name"": ""umb_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""umb_Latn*/train-*""}]}, {""config_name"": ""urd_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""urd_Arab*/train-*""}]}, {""config_name"": ""uzn_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""uzn_Latn*/train-*""}]}, {""config_name"": ""vec_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""vec_Latn*/train-*""}]}, {""config_name"": ""vie_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""vie_Latn*/train-*""}]}, {""config_name"": ""war_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""war_Latn*/train-*""}]}, {""config_name"": ""wol_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""wol_Latn*/train-*""}]}, {""config_name"": ""xho_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""xho_Latn*/train-*""}]}, {""config_name"": ""ydd_Hebr"", ""data_files"": [{""split"": ""train"", ""path"": ""ydd_Hebr*/train-*""}]}, {""config_name"": ""yor_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""yor_Latn*/train-*""}]}, {""config_name"": ""yue_Hant"", ""data_files"": [{""split"": ""train"", ""path"": ""yue_Hant*/train-*""}]}, {""config_name"": ""zho_Hans"", ""data_files"": [{""split"": ""train"", ""path"": ""zho_Hans*/train-*""}]}, {""config_name"": ""zho_Hant"", ""data_files"": [{""split"": ""train"", 
""path"": ""zho_Hant*/train-*""}]}, {""config_name"": ""zsm_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""zsm_Latn*/train-*""}]}, {""config_name"": ""zul_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""zul_Latn*/train-*""}]}, {""config_name"": ""pag_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""pag_Latn*/train-*""}]}, {""config_name"": ""sag_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""sag_Latn*/train-*""}]}, {""config_name"": ""bam_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""bam_Latn*/train-*""}]}, {""config_name"": ""knc_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""knc_Latn*/train-*""}]}], ""license"": ""cc0-1.0"", ""size_categories"": [""n>1T""], ""multilinguality"": [""multilingual""], ""task_categories"": [""fill-mask"", ""text-generation""], ""task_ids"": [""language-modeling""], ""language"": [""ace"", ""af"", ""als"", ""am"", ""ar"", ""as"", ""ast"", ""awa"", ""ayr"", ""azb"", ""azj"", ""ba"", ""bm"", ""ban"", ""be"", ""bem"", ""bn"", ""bho"", ""bjn"", ""bo"", ""bs"", ""bug"", ""bg"", ""ca"", ""ceb"", ""cs"", ""cjk"", ""ckb"", ""crh"", ""cy"", ""da"", ""de"", ""dik"", ""dyu"", ""dz"", ""el"", ""en"", ""eo"", ""et"", ""eu"", ""ee"", ""fo"", ""fj"", ""fi"", ""fon"", ""fr"", ""fur"", ""fuv"", ""gaz"", ""gd"", ""ga"", ""gl"", ""gn"", ""gu"", ""ht"", ""ha"", ""he"", ""hi"", ""hne"", ""hr"", ""hu"", ""hy"", ""ig"", ""ilo"", ""id"", ""is"", ""it"", ""jv"", ""ja"", ""kab"", ""kac"", ""kam"", ""kn"", ""ks"", ""ka"", ""kk"", ""kbp"", ""kea"", ""khk"", ""km"", ""ki"", ""rw"", ""ky"", ""kmb"", ""kmr"", ""knc"", ""kg"", ""ko"", ""lo"", ""lij"", ""li"", ""ln"", ""lt"", ""lmo"", ""ltg"", ""lb"", ""lua"", ""lg"", ""luo"", ""lus"", ""lvs"", ""mag"", ""mai"", ""ml"", ""mr"", ""min"", ""mk"", ""mt"", ""mni"", ""mos"", ""mi"", ""my"", ""nl"", ""nn"", ""nb"", ""npi"", ""nso"", ""nus"", ""ny"", ""oc"", ""ory"", ""pag"", ""pa"", ""pap"", ""pbt"", ""pes"", ""plt"", ""pl"", ""pt"", ""prs"", 
""quy"", ""ro"", ""rn"", ""ru"", ""sg"", ""sa"", ""sat"", ""scn"", ""shn"", ""si"", ""sk"", ""sl"", ""sm"", ""sn"", ""sd"", ""so"", ""st"", ""es"", ""sc"", ""sr"", ""ss"", ""su"", ""sv"", ""swh"", ""szl"", ""ta"", ""taq"", ""tt"", ""te"", ""tg"", ""tl"", ""th"", ""ti"", ""tpi"", ""tn"", ""ts"", ""tk"", ""tum"", ""tr"", ""tw"", ""ug"", ""uk"", ""umb"", ""ur"", ""uzn"", ""vec"", ""vi"", ""war"", ""wo"", ""xh"", ""ydd"", ""yo"", ""yue"", ""zh"", ""zsm"", ""zu""]}","This is a large-scale collection of web-crawled documents in 191 world languages, produced by the [HPLT project](https://hplt-project.org/).
The source of the data is mostly [Internet Archive](https://archive.org/) with some additions from [Common Crawl](https://commoncrawl.org/).
For a detailed description of the dataset, please refer to https://hplt-project.org/datasets/v2.0
**The Cleaned variant of HPLT Datasets v2.0**
This is the ```cleaned``` variant of the HPLT Datasets v2.0, converted semi-automatically to the Parquet format during upload here.
The original JSONL files (which take ~4x less disk space than this HF version) and the larger non-cleaned variant can be found at https://hplt-project.org/datasets/v2.0.
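Each language subset is a separate configuration whose name follows the `<ISO 639-3 code>_<ISO 15924 script>` pattern, and whose Parquet shards live under a matching directory glob. A minimal sketch of that naming scheme (the helper names below are illustrative, not part of any official API):

```python
def config_name(lang: str, script: str) -> str:
    """Build an HPLT v2 config name from an ISO 639-3 language code
    and an ISO 15924 script code, e.g. ("ace", "Latn") -> "ace_Latn"."""
    return f"{lang}_{script}"

def train_glob(config: str) -> str:
    """Parquet shard pattern used by this repo's data_files entries."""
    return f"{config}*/train-*"

print(config_name("ace", "Latn"))  # ace_Latn
print(train_glob("ace_Latn"))      # ace_Latn*/train-*

# Hypothetical usage (requires the `datasets` library and network access):
#   from datasets import load_dataset
#   ds = load_dataset("HPLT/HPLT2.0_cleaned", "ace_Latn",
#                     split="train", streaming=True)
```

Streaming mode is advisable for the larger subsets, since some configurations are far too big to download in full.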
**Dataset Performance**
***External Evaluation***
The HuggingFace team has [compared the utility of various multilingual corpora for training large language models in their FineWeb2 initiative](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2).
They found that the HPLT v2 datasets come right after their own FineWeb 2 and are on par with the CulturaX dataset, as shown in the figure produced by HuggingFace.
This is a massive improvement over the HPLT v1 datasets, as the same figure shows.
In fact, it's even better: looking at the language-specific results, it becomes clear that on
Arabic, Hindi, Russian, Thai and Turkish (5 out of the 9 languages HuggingFace evaluated on), [HPLT v2 is on par with or better than FineWeb 2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2#comparison-with-other-datasets).
The average score is lower mostly because of Chinese, so we have some work ahead for that language!
Note that the source of the FineWeb 2 (and CulturaX) data is exclusively Common Crawl, while the HPLT datasets are to a large extent composed of Internet Archive crawls.
Thus, **FineWeb 2 and HPLTv2 are complementary to each other and should be used together**.
***Internal Evaluation***
We also conducted FineWeb-style evaluations within the HPLT project, for now limited to English.
These confirmed HuggingFace's findings: the HPLT v2 datasets are of much better quality than the HPLT v1.2 data released almost a year ago.
We replicated the FineWeb evaluation setting, training large language models with the same architecture and pretraining configuration
(1.82B parameters, a Llama architecture with a sequence length of 2048 tokens, the GPT-2 tokenizer, and a global batch size of ~2 million tokens), with the only difference between the models being the training data.
We randomly sampled approximately 100B tokens from different versions of HPLT as well as from FineWeb and trained a separate model on each of these samples.
Each model was trained with the GPT-NeoX framework on 8 nodes on the LUMI cluster, where each node has 4 MI250X GPUs.
For evaluation, we used HuggingFace LightEval in a zero-shot setting with the tasks ARC (Easy and Challenge), HellaSwag, PIQA, and OpenBookQA.
The figure shows the macro average of the acc_norm values for these evaluations.
***Languages***
The ```cleaned``` version of HPLT Datasets v2.0 consists of subsets corresponding to 191 language codes.
Below we provide a list of language codes. For each language code the amount of text is shown as measured in:
- segments: the number of sequences of characters (possibly empty) separated by the newline symbol,
- wcwords: the number of words as defined by the Unix ```wc``` utility, i.e. the number of non-whitespace character runs preceded by whitespace or by the beginning of the document,
- chars: the number of characters,
- docs: the number of documents; each document corresponds to an individual web page from the source web crawls.
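As an illustrative sketch (not the project's actual counting code), the first three metrics can be reproduced with plain Python, where `str.split()` matches the word definition used by `wc -w`:

```python
def hplt_counts(text: str) -> dict:
    """Count one document the way the table's columns are defined (sketch)."""
    return {
        # segments: newline-separated character sequences, possibly empty
        "segments": len(text.split("\n")),
        # wcwords: maximal runs of non-whitespace characters, as in `wc -w`
        "wcwords": len(text.split()),
        # chars: total number of characters in the document
        "chars": len(text),
    }

doc = "Hello world\n\nthis is a test"
print(hplt_counts(doc))  # {'segments': 3, 'wcwords': 6, 'chars': 27}
```

Note that the empty line in `doc` still counts as a segment, matching the "possibly empty" clause in the definition above.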
| | lang | segments | wcwords | chars | docs | Language Name | ISO 639-3 code | ISO 639-3 macro code | ISO 639-1 direct code | ISO 639-1 through macro |
|-----|----------|----------|----------|----------|----------|-------------------------------|---------------|---------------------|----------------------|------------------------|
| 0 | *TOTAL* | 3.00e+11 | 5.56e+12 | 3.74e+13 | 1.06e+10 | | | | | |
| 1 | ace_Arab | 1.17e+02 | 8.36e+03 | 4.97e+04 | 1.60e+01 | Achinese | ace | | | |
| 2 | ace_Latn | 2.06e+05 | 8.20e+06 | 5.08e+07 | 1.29e+04 | Achinese | ace | | | |
| 3 | afr_Latn | 3.77e+07 | 1.00e+09 | 5.95e+09 | 1.46e+06 | Afrikaans | afr | | af | af |
| 4 | als_Latn | 9.51e+07 | 2.71e+09 | 1.61e+10 | 5.38e+06 | Tosk Albanian | als | sqi | | sq |
| 5 | amh_Ethi | 7.01e+06 | 1.96e+08 | 1.03e+09 | 2.96e+05 | Amharic | amh | | am | am |
| 6 | ara_Arab | 2.20e+09 | 4.81e+10 | 2.80e+11 | 8.27e+07 | Arabic | ara | | ar | ar |
| 7 | asm_Beng | 2.68e+06 | 7.34e+07 | 4.76e+08 | 1.76e+05 | Assamese | asm | | as | as |
| 8 | ast_Latn | 7.43e+06 | 1.95e+08 | 1.24e+09 | 2.73e+05 | Asturian | ast | | | |
| 9 | awa_Deva | 1.32e+05 | 6.05e+06 | 2.88e+07 | 7.28e+03 | Awadhi | awa | | | |
| 10 | ayr_Latn | 1.88e+05 | 3.07e+06 | 2.51e+07 | 9.22e+03 | Central Aymara | ayr | aym | | ay |
| 11 | azb_Arab | 2.39e+06 | 3.96e+07 | 2.60e+08 | 6.61e+04 | South Azerbaijani | azb | aze | | az |
| 12 | azj_Latn | 1.27e+08 | 2.57e+09 | 1.96e+10 | 6.48e+06 | North Azerbaijani | azj | aze | | az |
| 13 | bak_Cyrl | 3.14e+06 | 7.53e+07 | 5.58e+08 | 1.71e+05 | Bashkir | bak | | ba | ba |
| 14 | bam_Latn | 9.17e+04 | 3.98e+06 | 2.07e+07 | 5.72e+03 | Bambara | bam | | bm | bm |
| 15 | ban_Latn | 6.01e+05 | 1.13e+07 | 7.72e+07 | 1.07e+04 | Balinese | ban | | | |
| 16 | bel_Cyrl | 4.88e+07 | 1.21e+09 | 8.54e+09 | 2.32e+06 | Belarusian | bel | | be | be |
| 17 | bem_Latn | 1.34e+05 | 4.52e+06 | 3.23e+07 | 6.14e+03 | Bemba (Zambia) | bem | | | |
| 18 | ben_Beng | 1.76e+08 | 4.64e+09 | 3.02e+10 | 1.10e+07 | Bengali | ben | | bn | bn |
| 19 | bho_Deva | 4.58e+05 | 1.35e+07 | 6.86e+07 | 2.86e+04 | Bhojpuri | bho | | | |
| 20 | bjn_Arab | 1.95e+04 | 5.48e+05 | 3.32e+06 | 1.11e+03 | Banjar | bjn | msa | | ms |
| 21 | bjn_Latn | 3.66e+05 | 8.05e+06 | 5.60e+07 | 1.88e+04 | Banjar | bjn | msa | | ms |
| 22 | bod_Tibt | 4.65e+05 | 5.78e+06 | 2.68e+08 | 2.74e+04 | Tibetan | bod | | bo | bo |
| 23 | bos_Latn | 2.68e+08 | 7.26e+09 | 4.61e+10 | 1.46e+07 | Bosnian | bos | hbs | bs | bs |
| 24 | bug_Latn | 3.86e+04 | 2.70e+06 | 1.93e+07 | 2.02e+03 | Buginese | bug | | | |
| 25 | bul_Cyrl | 6.81e+08 | 1.53e+10 | 9.69e+10 | 2.81e+07 | Bulgarian | bul | | bg | bg |
| 26 | cat_Latn | 3.83e+08 | 1.00e+10 | 6.02e+10 | 1.86e+07 | Catalan | cat | | ca | ca |
| 27 | ceb_Latn | 2.86e+06 | 8.59e+07 | 5.16e+08 | 1.39e+05 | Cebuano | ceb | | | |
| 28 | ces_Latn | 1.93e+09 | 4.21e+10 | 2.74e+11 | 7.53e+07 | Czech | ces | | cs | cs |
| 29 | cjk_Latn | 3.67e+04 | 9.65e+05 | 7.43e+06 | 1.20e+03 | Chokwe | cjk | | | |
| 30 | ckb_Arab | 5.23e+06 | 1.43e+08 | 9.13e+08 | 2.74e+05 | Central Kurdish | ckb | kur | | ku |
| 31 | crh_Latn | 1.38e+06 | 3.68e+07 | 2.81e+08 | 1.23e+05 | Crimean Tatar | crh | | | |
| 32 | cym_Latn | 1.56e+07 | 4.09e+08 | 2.40e+09 | 7.58e+05 | Welsh | cym | | cy | cy |
| 33 | dan_Latn | 8.73e+08 | 2.12e+10 | 1.33e+11 | 3.38e+07 | Danish | dan | | da | da |
| 34 | deu_Latn | 1.11e+10 | 2.52e+11 | 1.78e+12 | 4.82e+08 | German | deu | | de | de |
| 35 | dik_Latn | 3.46e+04 | 2.30e+06 | 1.15e+07 | 2.32e+03 | Southwestern Dinka | dik | din | | |
| 36 | dyu_Latn | 2.46e+04 | 1.19e+06 | 5.55e+06 | 1.39e+03 | Dyula | dyu | | | |
| 37 | dzo_Tibt | 4.00e+04 | 4.22e+05 | 7.38e+06 | 1.63e+03 | Dzongkha | dzo | | dz | dz |
| 38 | ell_Grek | 1.85e+09 | 4.27e+10 | 2.84e+11 | 7.03e+07 | Modern Greek (1453-) | ell | | el | el |
| 39 | eng_Latn | 1.16e+11 | 2.86e+12 | 1.71e+13 | 4.39e+09 | English | eng | | en | en |
| 40 | epo_Latn | 2.04e+07 | 4.72e+08 | 2.98e+09 | 8.19e+05 | Esperanto | epo | | eo | eo |
| 41 | est_Latn | 2.64e+08 | 4.74e+09 | 3.60e+10 | 8.45e+06 | Estonian | est | | et | et |
| 42 | eus_Latn | 3.76e+07 | 7.77e+08 | 6.05e+09 | 1.97e+06 | Basque | eus | | eu | eu |
| 43 | ewe_Latn | 1.43e+05 | 4.31e+06 | 2.13e+07 | 3.77e+03 | Ewe | ewe | | ee | ee |
| 44 | fao_Latn | 4.53e+06 | 9.34e+07 | 5.82e+08 | 2.40e+05 | Faroese | fao | | fo | fo |
| 45 | fij_Latn | 1.79e+05 | 7.26e+06 | 3.77e+07 | 8.91e+03 | Fijian | fij | | fj | fj |
| 46 | fin_Latn | 9.77e+08 | 1.84e+10 | 1.56e+11 | 3.48e+07 | Finnish | fin | | fi | fi |
| 47 | fon_Latn | 1.48e+04 | 1.23e+06 | 5.34e+06 | 1.23e+03 | Fon | fon | | | |
| 48 | fra_Latn | 1.06e+10 | 2.37e+11 | 1.46e+12 | 4.02e+08 | French | fra | | fr | fr |
| 49 | fur_Latn | 7.30e+05 | 2.08e+07 | 1.15e+08 | 3.67e+04 | Friulian | fur | | | |
| 50 | fuv_Latn | 1.34e+05 | 5.14e+06 | 2.99e+07 | 7.76e+03 | Nigerian Fulfulde | fuv | ful | | ff |
| 51 | gaz_Latn | 9.74e+05 | 2.89e+07 | 2.19e+08 | 4.91e+04 | West Central Oromo | gaz | orm | | om |
| 52 | gla_Latn | 3.31e+06 | 8.07e+07 | 4.84e+08 | 1.37e+05 | Scottish Gaelic | gla | | gd | gd |
| 53 | gle_Latn | 1.10e+07 | 2.96e+08 | 1.75e+09 | 4.91e+05 | Irish | gle | | ga | ga |
| 54 | glg_Latn | 6.12e+07 | 1.64e+09 | 1.01e+10 | 3.02e+06 | Galician | glg | | gl | gl |
| 55 | grn_Latn | 1.71e+06 | 3.07e+07 | 2.19e+08 | 7.34e+04 | Guarani | grn | | gn | gn |
| 56 | guj_Gujr | 2.06e+07 | 5.77e+08 | 3.39e+09 | 1.13e+06 | Gujarati | guj | | gu | gu |
| 57 | hat_Latn | 4.64e+06 | 1.22e+08 | 6.39e+08 | 2.13e+05 | Haitian | hat | | ht | ht |
| 58 | hau_Latn | 5.69e+06 | 1.53e+08 | 8.54e+08 | 3.16e+05 | Hausa | hau | | ha | ha |
| 59 | heb_Hebr | 4.67e+08 | 9.97e+09 | 5.68e+10 | 1.71e+07 | Hebrew | heb | | he | he |
| 60 | hin_Deva | 2.67e+08 | 8.64e+09 | 4.40e+10 | 1.36e+07 | Hindi | hin | | hi | hi |
| 61 | hne_Deva | 5.50e+04 | 2.20e+06 | 1.06e+07 | 2.81e+03 | Chhattisgarhi | hne | | | |
| 62 | hrv_Latn | 2.97e+08 | 7.31e+09 | 4.80e+10 | 1.23e+07 | Croatian | hrv | hbs | hr | hr |
| 63 | hun_Latn | 1.42e+09 | 3.05e+10 | 2.25e+11 | 5.19e+07 | Hungarian | hun | | hu | hu |
| 64 | hye_Armn | 6.52e+07 | 1.40e+09 | 1.07e+10 | 3.60e+06 | Armenian | hye | | hy | hy |
| 65 | ibo_Latn | 1.41e+06 | 3.83e+07 | 2.05e+08 | 5.63e+04 | Igbo | ibo | | ig | ig |
| 66 | ilo_Latn | 1.12e+06 | 2.48e+07 | 1.57e+08 | 4.88e+04 | Iloko | ilo | | | |
| 67 | ind_Latn | 2.39e+09 | 5.46e+10 | 3.84e+11 | 9.81e+07 | Indonesian | ind | msa | id | id |
| 68 | isl_Latn | 6.96e+07 | 1.54e+09 | 9.59e+09 | 2.84e+06 | Icelandic | isl | | is | is |
| 69 | ita_Latn | 5.13e+09 | 1.27e+11 | 8.21e+11 | 2.22e+08 | Italian | ita | | it | it |
| 70 | jav_Latn | 6.43e+06 | 1.38e+08 | 9.38e+08 | 1.96e+05 | Javanese | jav | | jv | jv |
| 71 | jpn_Jpan | 2.33e+10 | 4.24e+10 | 9.01e+11 | 4.18e+08 | Japanese | jpn | | ja | ja |
| 72 | kab_Latn | 3.45e+05 | 9.22e+06 | 5.42e+07 | 1.51e+04 | Kabyle | kab | | | |
| 73 | kac_Latn | 1.59e+05 | 5.96e+06 | 2.84e+07 | 7.59e+03 | Kachin | kac | | | |
| 74 | kam_Latn | 1.43e+04 | 6.74e+05 | 4.64e+06 | 1.18e+03 | Kamba (Kenya) | kam | | | |
| 75 | kan_Knda | 2.49e+07 | 5.33e+08 | 4.30e+09 | 1.34e+06 | Kannada | kan | | kn | kn |
| 76 | kas_Arab | 2.71e+04 | 6.78e+05 | 3.47e+06 | 9.49e+02 | Kashmiri | kas | | ks | ks |
| 77 | kas_Deva | 1.36e+03 | 3.19e+04 | 1.85e+05 | 1.06e+02 | Kashmiri | kas | | ks | ks |
| 78 | kat_Geor | 6.37e+07 | 1.24e+09 | 1.02e+10 | 3.34e+06 | Georgian | kat | | ka | ka |
| 79 | kaz_Cyrl | 8.10e+07 | 1.41e+09 | 1.11e+10 | 2.64e+06 | Kazakh | kaz | | kk | kk |
| 80 | kbp_Latn | 4.68e+04 | 4.26e+06 | 2.09e+07 | 7.08e+03 | Kabiyè | kbp | | | |
| 81 | kea_Latn | 4.39e+04 | 1.14e+06 | 6.14e+06 | 1.96e+03 | Kabuverdianu | kea | | | |
| 82 | khk_Cyrl | 5.35e+07 | 1.34e+09 | 9.33e+09 | 2.12e+06 | Halh Mongolian | khk | mon | | mn |
| 83 | khm_Khmr | 9.86e+06 | 1.14e+08 | 2.12e+09 | 7.01e+05 | Khmer | khm | | km | km |
| 84 | kik_Latn | 5.19e+04 | 1.43e+06 | 9.29e+06 | 4.00e+03 | Kikuyu | kik | | ki | ki |
| 85 | kin_Latn | 1.92e+06 | 5.07e+07 | 3.67e+08 | 9.27e+04 | Kinyarwanda | kin | | rw | rw |
| 86 | kir_Cyrl | 1.00e+07 | 2.47e+08 | 1.92e+09 | 6.76e+05 | Kirghiz | kir | | ky | ky |
| 87 | kmb_Latn | 1.18e+04 | 3.83e+05 | 2.07e+06 | 5.31e+02 | Kimbundu | kmb | | | |
| 88 | kmr_Latn | 7.15e+06 | 1.96e+08 | 1.12e+09 | 3.64e+05 | Northern Kurdish | kmr | kur | | ku |
| 89 | knc_Arab | 1.08e+04 | 2.62e+05 | 1.30e+06 | 2.45e+02 | Central Kanuri | knc | kau | | kr |
| 90 | knc_Latn | 1.05e+04 | 2.41e+06 | 1.20e+07 | 2.47e+03 | Central Kanuri | knc | kau | | kr |
| 91 | kon_Latn | 4.75e+04 | 1.94e+06 | 1.13e+07 | 2.54e+03 | Kongo | kon | | kg | kg |
| 92 | kor_Hang | 1.36e+09 | 1.97e+10 | 8.92e+10 | 3.89e+07 | Korean | kor | | ko | ko |
| 93 | lao_Laoo | 3.20e+05 | 5.18e+06 | 8.47e+07 | 2.95e+04 | Lao | lao | | lo | lo |
| 94 | lij_Latn | 1.58e+05 | 5.59e+06 | 3.15e+07 | 8.37e+03 | Ligurian | lij | | | |
| 95 | lim_Latn | 7.14e+06 | 1.81e+08 | 1.12e+09 | 3.68e+05 | Limburgan | lim | | li | li |
| 96 | lin_Latn | 2.00e+05 | 5.56e+06 | 3.29e+07 | 7.59e+03 | Lingala | lin | | ln | ln |
| 97 | lit_Latn | 3.22e+08 | 6.68e+09 | 5.04e+10 | 1.33e+07 | Lithuanian | lit | | lt | lt |
| 98 | lmo_Latn | 2.12e+06 | 5.96e+07 | 3.45e+08 | 1.46e+05 | Lombard | lmo | | | |
| 99 | ltg_Latn | 1.51e+05 | 3.79e+06 | 2.69e+07 | 9.21e+03 | Latgalian | ltg | lav | | lv |
| 100 | ltz_Latn | 5.06e+06 | 1.07e+08 | 7.10e+08 | 2.47e+05 | Luxembourgish | ltz | | lb | lb |
| 101 | lua_Latn | 3.87e+04 | 1.37e+06 | 9.00e+06 | 1.08e+03 | Luba-Lulua | lua | | | |
| 102 | lug_Latn | 4.08e+05 | 9.18e+06 | 6.80e+07 | 2.13e+04 | Ganda | lug | | lg | lg |
| 103 | luo_Latn | 8.41e+04 | 3.73e+06 | 2.03e+07 | 4.15e+03 | Luo (Kenya and Tanzania) | luo | | | |
| 104 | lus_Latn | 3.43e+06 | 1.25e+08 | 6.52e+08 | 1.60e+05 | Lushai | lus | | | |
| 105 | lvs_Latn | 1.74e+08 | 3.46e+09 | 2.52e+10 | 6.77e+06 | Standard Latvian | lvs | lav | | lv |
| 106 | mag_Deva | 1.93e+04 | 8.91e+05 | 4.28e+06 | 3.28e+02 | Magahi | mag | | | |
| 107 | mai_Deva | 6.46e+05 | 1.78e+07 | 9.67e+07 | 2.50e+04 | Maithili | mai | | | |
| 108 | mal_Mlym | 4.80e+07 | 9.74e+08 | 9.49e+09 | 3.10e+06 | Malayalam | mal | | ml | ml |
| 109 | mar_Deva | 3.63e+07 | 9.81e+08 | 6.62e+09 | 2.08e+06 | Marathi | mar | | mr | mr |
| 110 | min_Latn | 6.01e+05 | 1.10e+07 | 7.48e+07 | 2.50e+04 | Minangkabau | min | msa | | ms |
| 111 | mkd_Cyrl | 5.70e+07 | 1.48e+09 | 9.44e+09 | 3.57e+06 | Macedonian | mkd | | mk | mk |
| 112 | mlt_Latn | 8.68e+06 | 1.96e+08 | 1.44e+09 | 3.67e+05 | Maltese | mlt | | mt | mt |
| 113 | mni_Beng | 6.58e+04 | 1.63e+06 | 1.18e+07 | 2.93e+03 | Manipuri | mni | | | |
| 114 | mos_Latn | 1.91e+04 | 8.08e+05 | 3.86e+06 | 9.31e+02 | Mossi | mos | | | |
| 115 | mri_Latn | 2.80e+06 | 8.68e+07 | 4.24e+08 | 1.08e+05 | Maori | mri | | mi | mi |
| 116 | mya_Mymr | 3.05e+07 | 4.53e+08 | 5.82e+09 | 1.37e+06 | Burmese | mya | | my | my |
| 117 | nld_Latn | 3.08e+09 | 7.14e+10 | 4.51e+11 | 1.39e+08 | Dutch | nld | | nl | nl |
| 118 | nno_Latn | 3.46e+07 | 8.60e+08 | 5.40e+09 | 1.42e+06 | Norwegian Nynorsk | nno | nor | nn | nn |
| 119 | nob_Latn | 6.76e+08 | 2.15e+10 | 1.33e+11 | 2.70e+07 | Norwegian Bokmål | nob | nor | nb | nb |
| 120 | npi_Deva | 3.71e+07 | 1.13e+09 | 7.26e+09 | 2.78e+06 | Nepali (individual language) | npi | nep | | ne |
| 121 | nso_Latn | 1.43e+05 | 5.32e+06 | 2.75e+07 | 6.07e+03 | Pedi | nso | | | |
| 122 | nus_Latn | 8.51e+03 | 3.93e+05 | 1.88e+06 | 2.72e+02 | Nuer | nus | | | |
| 123 | nya_Latn | 1.34e+06 | 2.71e+07 | 2.03e+08 | 5.31e+04 | Nyanja | nya | | ny | ny |
| 124 | oci_Latn | 4.20e+06 | 1.03e+08 | 6.35e+08 | 1.90e+05 | Occitan (post 1500) | oci | | oc | oc |
| 125 | ory_Orya | 3.60e+06 | 1.20e+08 | 7.82e+08 | 4.13e+05 | Odia | ory | ori | | or |
| 126 | pag_Latn | 8.58e+04 | 5.66e+06 | 3.35e+07 | 6.90e+03 | Pangasinan | pag | | | |
| 127 | pan_Guru | 1.17e+07 | 3.72e+08 | 1.90e+09 | 5.85e+05 | Panjabi | pan | | pa | pa |
| 128 | pap_Latn | 1.39e+06 | 4.67e+07 | 2.54e+08 | 8.98e+04 | Papiamento | pap | | | |
| 129 | pbt_Arab | 8.46e+06 | 2.79e+08 | 1.30e+09 | 4.66e+05 | Southern Pashto | pbt | pus | | ps |
| 130 | pes_Arab | 3.96e+09 | 8.86e+10 | 4.55e+11 | 9.05e+07 | Iranian Persian | pes | fas | | fa |
| 131 | plt_Latn | 4.74e+06 | 1.17e+08 | 8.10e+08 | 2.08e+05 | Plateau Malagasy | plt | mlg | | mg |
| 132 | pol_Latn | 4.46e+09 | 8.95e+10 | 6.32e+11 | 1.75e+08 | Polish | pol | | pl | pl |
| 133 | por_Latn | 6.12e+09 | 1.46e+11 | 8.96e+11 | 2.38e+08 | Portuguese | por | | pt | pt |
| 134 | prs_Arab | 6.90e+07 | 1.84e+09 | 9.57e+09 | 2.84e+06 | Dari | prs | fas | | fa |
| 135 | quy_Latn | 4.94e+05 | 1.73e+07 | 1.43e+08 | 3.69e+04 | Ayacucho Quechua | quy | que | | qu |
| 136 | ron_Latn | 1.70e+09 | 4.00e+10 | 2.51e+11 | 6.59e+07 | Romanian | ron | | ro | ro |
| 137 | run_Latn | 1.75e+06 | 4.44e+07 | 3.16e+08 | 1.37e+05 | Rundi | run | | rn | rn |
| 138 | rus_Cyrl | 2.63e+10 | 5.41e+11 | 3.91e+12 | 8.85e+08 | Russian | rus | | ru | ru |
| 139 | sag_Latn | 5.19e+04 | 3.61e+06 | 1.67e+07 | 3.16e+03 | Sango | sag | | sg | sg |
| 140 | san_Deva | 3.28e+06 | 4.38e+07 | 3.59e+08 | 5.49e+04 | Sanskrit | san | | sa | sa |
| 141 | sat_Olck | 4.58e+04 | 1.08e+06 | 6.27e+06 | 2.57e+03 | Santali | sat | | | |
| 142 | scn_Latn | 1.65e+06 | 4.24e+07 | 2.52e+08 | 8.20e+04 | Sicilian | scn | | | |
| 143 | shn_Mymr | 9.21e+04 | 1.65e+06 | 2.12e+07 | 6.00e+03 | Shan | shn | | | |
| 144 | sin_Sinh | 3.37e+07 | 7.96e+08 | 4.98e+09 | 1.15e+06 | Sinhala | sin | | si | si |
| 145 | slk_Latn | 4.94e+08 | 1.06e+10 | 7.04e+10 | 2.18e+07 | Slovak | slk | | sk | sk |
| 146 | slv_Latn | 2.39e+08 | 5.44e+09 | 3.53e+10 | 1.03e+07 | Slovenian | slv | | sl | sl |
| 147 | smo_Latn | 1.01e+06 | 3.71e+07 | 1.86e+08 | 4.59e+04 | Samoan | smo | | sm | sm |
| 148 | sna_Latn | 1.20e+06 | 2.39e+07 | 1.93e+08 | 6.11e+04 | Shona | sna | | sn | sn |
| 149 | snd_Arab | 2.83e+06 | 8.95e+07 | 4.29e+08 | 1.00e+05 | Sindhi | snd | | sd | sd |
| 150 | som_Latn | 1.64e+07 | 3.89e+08 | 2.56e+09 | 9.66e+05 | Somali | som | | so | so |
| 151 | sot_Latn | 1.08e+06 | 3.10e+07 | 1.72e+08 | 4.39e+04 | Southern Sotho | sot | | st | st |
| 152 | spa_Latn | 1.21e+10 | 3.22e+11 | 1.95e+12 | 5.03e+08 | Spanish | spa | | es | es |
| 153 | srd_Latn | 9.17e+05 | 2.39e+07 | 1.49e+08 | 5.38e+04 | Sardinian | srd | | sc | sc |
| 154 | srp_Cyrl | 9.38e+07 | 2.52e+09 | 1.62e+10 | 4.12e+06 | Serbian | srp | hbs | sr | sr |
| 155 | ssw_Latn | 6.21e+04 | 9.94e+05 | 8.82e+06 | 2.04e+03 | Swati | ssw | | ss | ss |
| 156 | sun_Latn | 3.24e+06 | 6.96e+07 | 4.75e+08 | 1.15e+05 | Sundanese | sun | | su | su |
| 157 | swe_Latn | 1.76e+09 | 4.01e+10 | 2.51e+11 | 6.68e+07 | Swedish | swe | | sv | sv |
| 158 | swh_Latn | 3.43e+07 | 7.18e+08 | 4.66e+09 | 1.37e+06 | Swahili (individual language) | swh | swa | | sw |
| 159 | szl_Latn | 6.37e+05 | 1.47e+07 | 1.04e+08 | 4.09e+04 | Silesian | szl | | | |
| 160 | tam_Taml | 1.69e+08 | 2.98e+09 | 2.62e+10 | 6.11e+06 | Tamil | tam | | ta | ta |
| 161 | taq_Latn | 1.39e+04 | 1.54e+06 | 8.84e+06 | 1.75e+03 | Tamasheq | taq | tmh | | |
| 162 | tat_Cyrl | 1.34e+07 | 2.97e+08 | 2.16e+09 | 6.31e+05 | Tatar | tat | | tt | tt |
| 163 | tel_Telu | 3.92e+07 | 8.35e+08 | 6.50e+09 | 2.06e+06 | Telugu | tel | | te | te |
| 164 | tgk_Cyrl | 2.48e+07 | 6.25e+08 | 4.59e+09 | 1.26e+06 | Tajik | tgk | | tg | tg |
| 165 | tgl_Latn | 5.29e+07 | 1.35e+09 | 8.13e+09 | 1.87e+06 | Tagalog | tgl | | tl | tl |
| 166 | tha_Thai | 3.39e+08 | 3.51e+09 | 6.00e+10 | 1.77e+07 | Thai | tha | | th | th |
| 167 | tir_Ethi | 1.13e+06 | 3.67e+07 | 1.82e+08 | 6.47e+04 | Tigrinya | tir | | ti | ti |
| 168 | tpi_Latn | 2.82e+05 | 1.25e+07 | 6.45e+07 | 1.40e+04 | Tok Pisin | tpi | | | |
| 169 | tsn_Latn | 1.32e+05 | 5.27e+06 | 2.77e+07 | 6.05e+03 | Tswana | tsn | | tn | tn |
| 170 | tso_Latn | 2.21e+05 | 8.67e+06 | 4.93e+07 | 1.10e+04 | Tsonga | tso | | ts | ts |
| 171 | tuk_Latn | 3.36e+06 | 7.07e+07 | 5.70e+08 | 1.71e+05 | Turkmen | tuk | | tk | tk |
| 172 | tum_Latn | 9.90e+04 | 2.88e+06 | 2.11e+07 | 4.38e+03 | Tumbuka | tum | | | |
| 173 | tur_Latn | 2.58e+09 | 5.17e+10 | 3.90e+11 | 1.17e+08 | Turkish | tur | | tr | tr |
| 174 | twi_Latn | 1.26e+05 | 4.70e+06 | 2.42e+07 | 5.86e+03 | Twi | twi | aka | tw | tw |
| 175 | uig_Arab | 8.98e+06 | 2.24e+08 | 1.75e+09 | 4.42e+05 | Uighur | uig | | ug | ug |
| 176 | ukr_Cyrl | 1.17e+09 | 2.52e+10 | 1.83e+11 | 4.74e+07 | Ukrainian | ukr | | uk | uk |
| 177 | umb_Latn | 5.99e+04 | 2.43e+06 | 1.54e+07 | 2.47e+03 | Umbundu | umb | | | |
| 178 | urd_Arab | 5.06e+07 | 2.13e+09 | 1.00e+10 | 3.19e+06 | Urdu | urd | | ur | ur |
| 179 | uzn_Latn | 1.48e+07 | 3.51e+08 | 2.85e+09 | 7.07e+05 | Northern Uzbek | uzn | uzb | | uz |
| 180 | vec_Latn | 1.58e+06 | 3.53e+07 | 2.18e+08 | 8.48e+04 | Venetian | vec | | | |
| 181 | vie_Latn | 3.02e+09 | 8.32e+10 | 3.80e+11 | 1.01e+08 | Vietnamese | vie | | vi | vi |
| 182 | war_Latn | 2.01e+05 | 5.89e+06 | 3.56e+07 | 1.39e+04 | Waray (Philippines) | war | | | |
| 183 | wol_Latn | 1.62e+05 | 5.46e+06 | 2.75e+07 | 5.68e+03 | Wolof | wol | | wo | wo |
| 184 | xho_Latn | 1.82e+06 | 3.03e+07 | 2.59e+08 | 6.31e+04 | Xhosa | xho | | xh | xh |
| 185 | ydd_Hebr | 2.94e+06 | 7.75e+07 | 4.58e+08 | 1.28e+05 | Eastern Yiddish | ydd | yid | | yi |
| 186 | yor_Latn | 1.47e+06 | 4.28e+07 | 2.18e+08 | 6.61e+04 | Yoruba | yor | | yo | yo |
| 187 | yue_Hant | 1.24e+06 | 3.27e+06 | 7.43e+07 | 6.13e+04 | Yue Chinese | yue | zho | | zh |
| 188 | zho_Hans | 4.24e+10 | 7.40e+10 | 2.35e+12 | 1.25e+09 | Chinese | zho | | zh | zh |
| 189 | zho_Hant | 4.48e+09 | 9.51e+09 | 2.87e+11 | 1.57e+08 | Chinese | zho | | zh | zh |
| 190 | zsm_Latn | 5.80e+08 | 1.15e+10 | 7.84e+10 | 1.84e+07 | Standard Malay | zsm | msa | | ms |
| 191 | zul_Latn | 2.71e+06 | 4.44e+07 | 3.81e+08 | 1.14e+05 | Zulu | zul | | zu | zu |"
mozilla-foundation/common_voice_16_0,"{""pretty_name"": ""Common Voice Corpus 16"", ""annotations_creators"": [""crowdsourced""], ""language_creators"": [""crowdsourced""], ""language"": [""ab"", ""af"", ""am"", ""ar"", ""as"", ""ast"", ""az"", ""ba"", ""bas"", ""be"", ""bg"", ""bn"", ""br"", ""ca"", ""ckb"", ""cnh"", ""cs"", ""cv"", ""cy"", ""da"", ""de"", ""dv"", ""dyu"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fr"", ""fy"", ""ga"", ""gl"", ""gn"", ""ha"", ""he"", ""hi"", ""hsb"", ""hu"", ""hy"", ""ia"", ""id"", ""ig"", ""is"", ""it"", ""ja"", ""ka"", ""kab"", ""kk"", ""kmr"", ""ko"", ""ky"", ""lg"", ""lij"", ""lo"", ""lt"", ""ltg"", ""lv"", ""mdf"", ""mhr"", ""mk"", ""ml"", ""mn"", ""mr"", ""mrj"", ""mt"", ""myv"", ""nan"", ""ne"", ""nhi"", ""nl"", ""nn"", ""oc"", ""or"", ""os"", ""pa"", ""pl"", ""ps"", ""pt"", ""quy"", ""rm"", ""ro"", ""ru"", ""rw"", ""sah"", ""sat"", ""sc"", ""sk"", ""skr"", ""sl"", ""sq"", ""sr"", ""sv"", ""sw"", ""ta"", ""te"", ""th"", ""ti"", ""tig"", ""tk"", ""tok"", ""tr"", ""tt"", ""tw"", ""ug"", ""uk"", ""ur"", ""uz"", ""vi"", ""vot"", ""yi"", ""yo"", ""yue"", ""zgh"", ""zh""], ""language_bcp47"": [""zh-CN"", ""zh-HK"", ""zh-TW"", ""sv-SE"", ""rm-sursilv"", ""rm-vallader"", ""pa-IN"", ""nn-NO"", ""ne-NP"", ""nan-tw"", ""hy-AM"", ""ga-IE"", ""fy-NL""], ""license"": [""cc0-1.0""], ""multilinguality"": [""multilingual""], ""paperswithcode_id"": ""common-voice"", ""extra_gated_prompt"": ""By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset.""}","# Dataset Card for Common Voice Corpus 16
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:vaibhav@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of unique MP3 files, each paired with a corresponding text file.
Many of the 30328 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 19673 validated hours in 120 languages, but more voices and languages are always being added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Languages
```
Abkhaz, Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hebrew, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latgalian, Latvian, Ligurian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Ossetian, Pashto, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamazight, Tamil, Tatar, Telugu, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Western Sierra Puebla Nahuatl, Yiddish, Yoruba
```
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., ""hi"" for Hindi):
```python
from datasets import load_dataset
cv_16 = load_dataset(""mozilla-foundation/common_voice_16_0"", ""hi"", split=""train"")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
cv_16 = load_dataset(""mozilla-foundation/common_voice_16_0"", ""hi"", split=""train"", streaming=True)
print(next(iter(cv_16)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_16 = load_dataset(""mozilla-foundation/common_voice_16_0"", ""hi"", split=""train"")
batch_sampler = BatchSampler(RandomSampler(cv_16), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_16, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_16 = load_dataset(""mozilla-foundation/common_voice_16_0"", ""hi"", split=""train"", streaming=True)
dataloader = DataLoader(cv_16, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 16 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
    'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
    'path': 'et/clips/common_voice_et_18318995.mp3',
    'audio': {
        'path': 'et/clips/common_voice_et_18318995.mp3',
        'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
        'sampling_rate': 48000
    },
    'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
    'up_votes': 2,
    'down_votes': 0,
    'age': 'twenties',
    'gender': 'male',
    'accent': '',
    'locale': 'et',
    'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0][""audio""]` the audio file is automatically decoded and resampled to `dataset.features[""audio""].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `""audio""` column, *i.e.* `dataset[0][""audio""]` should **always** be preferred over `dataset[""audio""][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
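Most ASR checkpoints expect 16 kHz input rather than the 48 kHz the clips are stored at. The snippet below is a minimal sketch of what that resampling amounts to, using a synthetic array in place of a decoded clip from the `audio` field; in practice, prefer casting the column with `Audio(sampling_rate=16000)` so decoding and resampling happen lazily, one sample at a time.

```python
import numpy as np

# shaped like the 'audio' dict documented above (values are synthetic)
audio = {'array': np.zeros(48000, dtype=np.float32), 'sampling_rate': 48000}

# naive 3:1 decimation from 48 kHz to 16 kHz; a real pipeline should
# low-pass filter first, or simply let `datasets` resample by casting:
#   dataset = dataset.cast_column('audio', Audio(sampling_rate=16000))
factor = audio['sampling_rate'] // 16000
resampled = audio['array'][::factor]
print(len(resampled))  # one second of audio at 16 kHz
```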
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data has been reviewed and received upvotes indicating that it is of high quality.
The invalidated data has been reviewed and received downvotes indicating that it is of low quality.
The reported data has been reported by contributors, for a variety of reasons.
The other data has not yet been reviewed.
The dev, test and train splits are all drawn from the reviewed data that was deemed high quality.
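As a rough illustration of how the vote counts relate to these portions, consider the simplified sketch below. This is an illustration only: the actual Common Voice validation rules require a minimum number of reviews per clip and are more involved.

```python
def bucket(up_votes: int, down_votes: int) -> str:
    # simplified sketch only; the real corpus-building rules
    # require a minimum number of reviews per clip
    if up_votes + down_votes == 0:
        return 'other'        # not yet reviewed
    if up_votes > down_votes:
        return 'validated'
    return 'invalidated'

print(bucket(2, 0))  # the example instance above: 'validated'
```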
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have surrounding quotation marks, e.g. _“the cat sat on the mat.“_. These quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer from the audio alone whether a sentence is a quotation. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset(""mozilla-foundation/common_voice_16_0"", ""en"", use_auth_token=True)
def prepare_dataset(batch):
    """"""Function to preprocess the dataset with the .map method""""""
    transcription = batch[""sentence""]
    if transcription.startswith('""') and transcription.endswith('""'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]
    if transcription[-1] not in [""."", ""?"", ""!""]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "".""
    batch[""sentence""] = transcription
    return batch
ds = ds.map(prepare_dataset, desc=""preprocess dataset"")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```"
esdurmus/wiki_lingua,"{""annotations_creators"": [""crowdsourced""], ""language_creators"": [""crowdsourced""], ""language"": [""ar"", ""cs"", ""de"", ""en"", ""es"", ""fr"", ""hi"", ""id"", ""it"", ""ja"", ""ko"", ""nl"", ""pt"", ""ru"", ""th"", ""tr"", ""vi"", ""zh""], ""license"": [""cc-by-3.0""], ""multilinguality"": [""multilingual""], ""size_categories"": [""10K
- **Repository:**
- **Paper:**
- **Leaderboard:** N/A
- **Point of Contact:** Adithya Pratapa
### Dataset Summary
XLEL-WD is a multilingual event linking dataset. This dataset repo contains mention references in multilingual Wikipedia/Wikinews articles to event items from Wikidata.
The descriptions for Wikidata event items were collected from the corresponding Wikipedia articles. Download the event dictionary from [adithya7/xlel_wd_dictionary](https://huggingface.co/datasets/adithya7/xlel_wd_dictionary).
### Supported Tasks and Leaderboards
This dataset can be used for the task of event linking. There are two variants of the task, multilingual and crosslingual.
- Multilingual linking: mention and the event descriptions are in the same language.
- Crosslingual linking: the event descriptions are only available in English.
### Languages
This dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper.
| Language | Code | Language | Code | Language | Code | Language | Code |
| -------- | ---- | -------- | ---- | -------- | ---- | -------- | ---- |
| Afrikaans | af | Arabic | ar | Belarusian | be | Bulgarian | bg |
| Bengali | bn | Catalan | ca | Czech | cs | Danish | da |
| German | de | Greek | el | English | en | Spanish | es |
| Persian | fa | Finnish | fi | French | fr | Hebrew | he |
| Hindi | hi | Hungarian | hu | Indonesian | id | Italian | it |
| Japanese | ja | Korean | ko | Malayalam | ml | Marathi | mr |
| Malay | ms | Dutch | nl | Norwegian | no | Polish | pl |
| Portuguese | pt | Romanian | ro | Russian | ru | Sinhala | si |
| Slovak | sk | Slovene | sl | Serbian | sr | Swedish | sv |
| Swahili | sw | Tamil | ta | Telugu | te | Thai | th |
| Turkish | tr | Ukrainian | uk | Vietnamese | vi | Chinese | zh |
## Dataset Structure
### Data Instances
Each instance in the `train.jsonl`, `dev.jsonl` and `test.jsonl` files follows the template below.
```json
{
    ""context_left"": ""Minibaev's first major international medal came in the men's synchronized 10 metre platform event at the "",
    ""mention"": ""2010 European Championships"",
    ""context_right"": ""."",
    ""context_lang"": ""en"",
    ""label_id"": ""830917""
}
```
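One simple way to recover the full passage for modelling is to concatenate the three context fields around the mention:

```python
instance = {
    'context_left': 'Minibaev\'s first major international medal came in the men\'s synchronized 10 metre platform event at the ',
    'mention': '2010 European Championships',
    'context_right': '.',
}

# left context + mention span + right context gives the original passage
passage = instance['context_left'] + instance['mention'] + instance['context_right']
print(passage)
```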
### Data Fields
| Field | Meaning |
| ----- | ------- |
| `mention` | text span of the mention |
| `context_left` | left paragraph context from the document |
| `context_right` | right paragraph context from the document |
| `context_lang` | language of the context (and mention) |
| `context_title` | document title of the mention (only Wikinews subset) |
| `context_date` | document publication date of the mention (only Wikinews subset) |
| `label_id` | Wikidata label ID for the event. E.g. 830917 refers to Q830917 from Wikidata. |
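Since `label_id` stores only the numeric part of the Wikidata identifier, mapping an instance to its Wikidata page is a one-liner. The helper below is hypothetical, shown for illustration; it is not part of the dataset loader.

```python
def wikidata_url(label_id: str) -> str:
    # hypothetical helper: '830917' -> 'https://www.wikidata.org/wiki/Q830917'
    return 'https://www.wikidata.org/wiki/Q' + label_id

print(wikidata_url('830917'))
```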
### Data Splits
The Wikipedia-based corpus has three splits, constructed for zero-shot evaluation (events in the dev and test splits are not seen during training).
| | Train | Dev | Test | Total |
| ---- | :-----: | :---: | :----: | :-----: |
| Events | 8653 | 1090 | 1204 | 10947 |
| Event Sequences | 6758 | 844 | 846 | 8448 |
| Mentions | 1.44M | 165K | 190K | 1.8M |
| Languages | 44 | 44 | 44 | 44 |
The Wikinews-based evaluation set has two variants, one for cross-domain evaluation and another for zero-shot evaluation.
| | (Cross-domain) Test | (Zero-shot) Test |
| --- | :------------------: | :-----: |
| Events | 802 | 149 |
| Mentions | 2562 | 437 |
| Languages | 27 | 21 |
## Dataset Creation
### Curation Rationale
This dataset helps address the task of event linking. Linking to a knowledge base (KB) is extensively studied for entities, but it is unclear whether the same methodologies extend to linking mentions to events in a KB. We use Wikidata as our KB, as it allows for linking mentions from multilingual Wikipedia and Wikinews articles.
### Source Data
#### Initial Data Collection and Normalization
First, we utilize spatial & temporal properties from Wikidata to identify event items. Second, we identify corresponding multilingual Wikipedia pages for each Wikidata event item. Third, we pool hyperlinks from multilingual Wikipedia & Wikinews articles to these event items.
#### Who are the source language producers?
The documents in XLEL-WD are written by Wikipedia and Wikinews contributors in respective languages.
### Annotations
#### Annotation process
This dataset was originally collected automatically from Wikipedia, Wikinews and Wikidata. It was post-processed to improve data quality.
#### Who are the annotators?
The annotations in XLEL-WD (hyperlinks from Wikipedia/Wikinews to Wikidata) are added by the original Wiki contributors.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
XLEL-WD v1.0.0 mostly caters to eventive nouns from Wikidata. It does not include any links to other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676) and war (Q198).
## Additional Information
### Dataset Curators
The dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at [Github:xlel-wd](https://github.com/adithya7/xlel-wd).
### Licensing Information
XLEL-WD dataset is released under [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```bib
@article{pratapa-etal-2022-multilingual,
title = {Multilingual Event Linking to Wikidata},
author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2204.06535},
}
```
### Contributions
Thanks to [@adithya7](https://github.com/adithya7) for adding this dataset."
Helsinki-NLP/opus_ubuntu,"{""annotations_creators"": [""crowdsourced"", ""expert-generated""], ""language_creators"": [""found""], ""language"": [""ace"", ""af"", ""ak"", ""am"", ""an"", ""ang"", ""ar"", ""ary"", ""as"", ""ast"", ""az"", ""ba"", ""bal"", ""be"", ""bem"", ""ber"", ""bg"", ""bho"", ""bn"", ""bo"", ""br"", ""brx"", ""bs"", ""bua"", ""byn"", ""ca"", ""ce"", ""ceb"", ""chr"", ""ckb"", ""co"", ""crh"", ""cs"", ""csb"", ""cv"", ""cy"", ""da"", ""de"", ""dsb"", ""dv"", ""dz"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""ff"", ""fi"", ""fil"", ""fo"", ""fr"", ""frm"", ""frp"", ""fur"", ""fy"", ""ga"", ""gd"", ""gl"", ""gn"", ""grc"", ""gu"", ""guc"", ""gv"", ""ha"", ""haw"", ""he"", ""hi"", ""hil"", ""hne"", ""hr"", ""hsb"", ""ht"", ""hu"", ""hy"", ""ia"", ""id"", ""ig"", ""io"", ""is"", ""it"", ""iu"", ""ja"", ""jbo"", ""jv"", ""ka"", ""kab"", ""kg"", ""kk"", ""kl"", ""km"", ""kn"", ""ko"", ""kok"", ""ks"", ""ksh"", ""ku"", ""kw"", ""ky"", ""la"", ""lb"", ""lg"", ""li"", ""lij"", ""lld"", ""ln"", ""lo"", ""lt"", ""ltg"", ""lv"", ""mai"", ""mg"", ""mh"", ""mhr"", ""mi"", ""miq"", ""mk"", ""ml"", ""mn"", ""mr"", ""ms"", ""mt"", ""mus"", ""my"", ""nan"", ""nap"", ""nb"", ""nds"", ""ne"", ""nhn"", ""nl"", ""nn"", ""no"", ""nso"", ""ny"", ""oc"", ""om"", ""or"", ""os"", ""pa"", ""pam"", ""pap"", ""pl"", ""pms"", ""pmy"", ""ps"", ""pt"", ""qu"", ""rm"", ""ro"", ""rom"", ""ru"", ""rw"", ""sa"", ""sc"", ""sco"", ""sd"", ""se"", ""shn"", ""shs"", ""si"", ""sk"", ""sl"", ""sm"", ""sml"", ""sn"", ""so"", ""son"", ""sq"", ""sr"", ""st"", ""sv"", ""sw"", ""syr"", ""szl"", ""ta"", ""te"", ""tet"", ""tg"", ""th"", ""ti"", ""tk"", ""tl"", ""tlh"", ""tr"", ""trv"", ""ts"", ""tt"", ""ug"", ""uk"", ""ur"", ""uz"", ""ve"", ""vec"", ""vi"", ""wa"", ""wae"", ""wo"", ""xal"", ""xh"", ""yi"", ""yo"", ""zh"", ""zu"", ""zza""], ""license"": [""bsd-3-clause""], ""multilinguality"": [""multilingual""], ""size_categories"": [""10K1)]
df['chosen'] = df.apply(lambda x: x['text'][np.argmin(x['rank'])], axis=1)
df['rejected'] = df.apply(lambda x: x['text'][np.argmax(x['rank'])], axis=1)
d[split] = Dataset.from_pandas(df[['lang', 'parent_id', 'prompt', 'chosen', 'rejected']], preserve_index=False)
DatasetDict(d).push_to_hub('tasksource/oasst1_pairwise_rlhf_reward')
```"
castorini/mr-tydi-corpus,"{""language"": [""ar"", ""bn"", ""en"", ""fi"", ""id"", ""fi"", ""ja"", ""ko"", ""ru"", ""sw"", ""te"", ""th""], ""multilinguality"": [""multilingual""], ""task_categories"": [""text-retrieval""], ""license"": ""apache-2.0""}","# Dataset Summary
Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. It is designed for monolingual retrieval, specifically to evaluate ranking with learned dense representations.
This dataset stores documents of Mr. TyDi. To access the queries and judgments, please refer to [castorini/mr-tydi](https://huggingface.co/datasets/castorini/mr-tydi).
# Dataset Structure
The only configuration here is `language`. Since all three folds (train, dev and test) share the same corpus, there is only a single 'train' fold under each language, unlike [castorini/mr-tydi](https://huggingface.co/datasets/castorini/mr-tydi).
An example of document data entry looks as follows:
```
{
    'docid': '25#0',
    'title': 'Autism',
    'text': 'Autism is a developmental disorder characterized by difficulties with social interaction and communication, ...'
}
```
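The `docid` appears to encode the Wikipedia page id and the passage index within that page, separated by `#`. This is an assumption based on the example above, not a documented guarantee:

```python
docid = '25#0'
# split into the (assumed) page id and passage index
page_id, passage_index = docid.split('#')
print(page_id, passage_index)
```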
# Load Dataset
An example to load the dataset:
```
from datasets import load_dataset

language = 'english'
dataset = load_dataset('castorini/mr-tydi-corpus', language, split='train')
```
# Citation Information
```
@article{mrtydi,
title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
year={2021},
journal={arXiv:2108.08787},
}
```"
Helsinki-NLP/opus_paracrawl,"{""annotations_creators"": [""found""], ""language_creators"": [""found""], ""language"": [""bg"", ""ca"", ""cs"", ""da"", ""de"", ""el"", ""en"", ""es"", ""et"", ""eu"", ""fi"", ""fr"", ""ga"", ""gl"", ""hr"", ""hu"", ""is"", ""it"", ""km"", ""ko"", ""lt"", ""lv"", ""mt"", ""my"", ""nb"", ""ne"", ""nl"", ""nn"", ""pl"", ""pt"", ""ro"", ""ru"", ""si"", ""sk"", ""sl"", ""so"", ""sv"", ""sw"", ""tl"", ""uk"", ""zh""], ""license"": [""cc0-1.0""], ""multilinguality"": [""multilingual""], ""size_categories"": [""100K>> from datasets import load_dataset
>>> ds = load_dataset(""beomi/KoAlpaca-v1.1a"", split=""train"")
>>> ds
Dataset({
features: ['instruction', 'input', 'output'],
num_rows: 21155
})
```
```python
>>> ds[0]
{'instruction': '양파는 어떤 식물 부위인가요? 그리고 고구마는 뿌리인가요?',
'output': '양파는 잎이 아닌 식물의 줄기 부분입니다. 고구마는 식물의 뿌리 부분입니다. \n\n식물의 부위의 구분에 대해 궁금해하는 분이라면 분명 이 질문에 대한 답을 찾고 있을 것입니다. 양파는 잎이 아닌 줄기 부분입니다. 고구마는 다른 질문과 답변에서 언급된 것과 같이 뿌리 부분입니다. 따라서, 양파는 식물의 줄기 부분이 되고, 고구마는 식물의 뿌리 부분입니다.\n\n 덧붙이는 답변: 고구마 줄기도 볶아먹을 수 있나요? \n\n고구마 줄기도 식용으로 볶아먹을 수 있습니다. 하지만 줄기 뿐만 아니라, 잎, 씨, 뿌리까지 모든 부위가 식용으로 활용되기도 합니다. 다만, 한국에서는 일반적으로 뿌리 부분인 고구마를 주로 먹습니다.',
'url': 'https://kin.naver.com/qna/detail.naver?d1id=11&dirId=1116&docId=55320268'}
```"
Helsinki-NLP/opus_gnome,"{""annotations_creators"": [""found""], ""language_creators"": [""found""], ""language"": [""af"", ""am"", ""an"", ""ang"", ""ar"", ""as"", ""ast"", ""az"", ""bal"", ""be"", ""bem"", ""bg"", ""bn"", ""bo"", ""br"", ""brx"", ""bs"", ""ca"", ""crh"", ""cs"", ""csb"", ""cy"", ""da"", ""de"", ""dv"", ""dz"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fo"", ""fr"", ""fur"", ""fy"", ""ga"", ""gd"", ""gl"", ""gn"", ""gu"", ""gv"", ""ha"", ""he"", ""hi"", ""hr"", ""hu"", ""hy"", ""ia"", ""id"", ""ig"", ""io"", ""is"", ""it"", ""ja"", ""jbo"", ""ka"", ""kg"", ""kk"", ""km"", ""kn"", ""ko"", ""kr"", ""ks"", ""ku"", ""ky"", ""la"", ""lg"", ""li"", ""lo"", ""lt"", ""lv"", ""mai"", ""mg"", ""mi"", ""mk"", ""ml"", ""mn"", ""mr"", ""ms"", ""mt"", ""mus"", ""my"", ""nb"", ""nds"", ""ne"", ""nhn"", ""nl"", ""nn"", ""no"", ""nqo"", ""nr"", ""nso"", ""oc"", ""or"", ""os"", ""pa"", ""pl"", ""ps"", ""pt"", ""quz"", ""ro"", ""ru"", ""rw"", ""si"", ""sk"", ""sl"", ""so"", ""sq"", ""sr"", ""st"", ""sv"", ""sw"", ""szl"", ""ta"", ""te"", ""tg"", ""th"", ""tk"", ""tl"", ""tr"", ""ts"", ""tt"", ""tyj"", ""ug"", ""uk"", ""ur"", ""uz"", ""vi"", ""wa"", ""xh"", ""yi"", ""yo"", ""zh"", ""zu""], ""license"": [""unknown""], ""multilinguality"": [""multilingual""], ""size_categories"": [""10K
## Contact
- Discord [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai)"
Babelscape/SREDFM,"{""dataset_info"": [{""config_name"": ""ar"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 659105981, ""num_examples"": 499568}, {""name"": ""test"", ""num_bytes"": 9015516, ""num_examples"": 4387}, {""name"": ""validation"", ""num_bytes"": 7406509, ""num_examples"": 3783}], ""download_size"": 3651950669, ""dataset_size"": 675528006}, {""config_name"": ""ca"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 406179567, ""num_examples"": 294856}, {""name"": ""test"", ""num_bytes"": 5378789, ""num_examples"": 2541}, {""name"": ""validation"", ""num_bytes"": 3136722, ""num_examples"": 1532}], ""download_size"": 1513026644, ""dataset_size"": 414695078}, {""config_name"": 
""de"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1288274676, ""num_examples"": 1049967}, {""name"": ""test"", ""num_bytes"": 10773087, ""num_examples"": 5649}, {""name"": ""validation"", ""num_bytes"": 8955886, ""num_examples"": 4994}], ""download_size"": 4521091910, ""dataset_size"": 1308003649}, {""config_name"": ""el"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 133497910, ""num_examples"": 64221}, {""name"": ""test"", ""num_bytes"": 2364826, ""num_examples"": 861}, {""name"": ""validation"", ""num_bytes"": 1836092, ""num_examples"": 668}], ""download_size"": 579372781, ""dataset_size"": 137698828}, {""config_name"": ""en"", ""features"": [{""name"": ""docid"", ""dtype"": 
""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3555107736, ""num_examples"": 2701389}, {""name"": ""test"", ""num_bytes"": 13160183, ""num_examples"": 6685}, {""name"": ""validation"", ""num_bytes"": 27692074, ""num_examples"": 13236}], ""download_size"": 11914987368, ""dataset_size"": 3595959993}, {""config_name"": ""es"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 888914515, ""num_examples"": 702785}, {""name"": ""test"", ""num_bytes"": 16076382, ""num_examples"": 8561}, {""name"": ""validation"", ""num_bytes"": 4621760, ""num_examples"": 2177}], ""download_size"": 3570403740, ""dataset_size"": 909612657}, {""config_name"": ""fr"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": 
""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 768697146, ""num_examples"": 870448}, {""name"": ""test"", ""num_bytes"": 5937745, ""num_examples"": 3883}, {""name"": ""validation"", ""num_bytes"": 3233262, ""num_examples"": 2079}], ""download_size"": 3269522484, ""dataset_size"": 777868153}, {""config_name"": ""hi"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 96926984, ""num_examples"": 51900}, {""name"": ""test"", ""num_bytes"": 1340091, ""num_examples"": 374}, {""name"": ""validation"", ""num_bytes"": 1222098, ""num_examples"": 405}], ""download_size"": 385810623, ""dataset_size"": 99489173}, {""config_name"": ""it"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": 
""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 436879977, ""num_examples"": 432076}, {""name"": ""test"", ""num_bytes"": 3798221, ""num_examples"": 2175}, {""name"": ""validation"", ""num_bytes"": 2230995, ""num_examples"": 1276}], ""download_size"": 1685172398, ""dataset_size"": 442909193}, {""config_name"": ""ja"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 708617436, ""num_examples"": 480785}, {""name"": ""test"", ""num_bytes"": 7802066, ""num_examples"": 3392}, {""name"": ""validation"", ""num_bytes"": 6990637, ""num_examples"": 3106}], ""download_size"": 3186065351, ""dataset_size"": 723410139}, {""config_name"": ""ko"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", 
""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 266381416, ""num_examples"": 213659}, {""name"": ""test"", ""num_bytes"": 1736809, ""num_examples"": 803}, {""name"": ""validation"", ""num_bytes"": 1857229, ""num_examples"": 917}], ""download_size"": 1119778167, ""dataset_size"": 269975454}, {""config_name"": ""nl"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 695855128, ""num_examples"": 648029}, {""name"": ""test"", ""num_bytes"": 5186584, ""num_examples"": 2715}, {""name"": ""validation"", ""num_bytes"": 4188877, ""num_examples"": 2188}], ""download_size"": 2591997126, ""dataset_size"": 705230589}, {""config_name"": ""pl"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": 
""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 877441685, ""num_examples"": 675688}, {""name"": ""test"", ""num_bytes"": 11475559, ""num_examples"": 6376}, {""name"": ""validation"", ""num_bytes"": 6618989, ""num_examples"": 3476}], ""download_size"": 3365852789, ""dataset_size"": 895536233}, {""config_name"": ""pt"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 584986936, ""num_examples"": 469347}, {""name"": ""test"", ""num_bytes"": 8678707, ""num_examples"": 4313}, {""name"": ""validation"", ""num_bytes"": 5807293, ""num_examples"": 2973}], ""download_size"": 2347987926, ""dataset_size"": 599472936}, {""config_name"": ""ru"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", 
""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 604993210, ""num_examples"": 339697}, {""name"": ""test"", ""num_bytes"": 5941158, ""num_examples"": 2296}, {""name"": ""validation"", ""num_bytes"": 5352859, ""num_examples"": 2107}], ""download_size"": 2754576893, ""dataset_size"": 616287227}, {""config_name"": ""sv"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1822863623, ""num_examples"": 1742082}, {""name"": ""test"", ""num_bytes"": 13002356, ""num_examples"": 7531}, {""name"": ""validation"", ""num_bytes"": 5136097, ""num_examples"": 2987}], ""download_size"": 6790489020, ""dataset_size"": 1841002076}, {""config_name"": ""vi"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": 
""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 300641174, ""num_examples"": 260010}, {""name"": ""test"", ""num_bytes"": 4304795, ""num_examples"": 1824}, {""name"": ""validation"", ""num_bytes"": 3402120, ""num_examples"": 1461}], ""download_size"": 1301938106, ""dataset_size"": 308348089}, {""config_name"": ""zh"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": ""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 449085696, ""num_examples"": 369249}, {""name"": ""test"", ""num_bytes"": 5260974, ""num_examples"": 2667}, {""name"": ""validation"", ""num_bytes"": 3511103, ""num_examples"": 1816}], ""download_size"": 2440525684, ""dataset_size"": 457857773}, {""config_name"": ""all_languages"", ""features"": [{""name"": ""docid"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""lan"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""entities"", ""list"": [{""name"": ""uri"", ""dtype"": ""string""}, {""name"": ""surfaceform"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""start"", ""dtype"": 
""int32""}, {""name"": ""end"", ""dtype"": ""int32""}]}, {""name"": ""relations"", ""list"": [{""name"": ""subject"", ""dtype"": ""int32""}, {""name"": ""predicate"", ""dtype"": ""string""}, {""name"": ""object"", ""dtype"": ""int32""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 14615645332, ""num_examples"": 11865756}, {""name"": ""test"", ""num_bytes"": 131636046, ""num_examples"": 67033}, {""name"": ""validation"", ""num_bytes"": 103507688, ""num_examples"": 51181}], ""download_size"": 56989165879, ""dataset_size"": 14850789066}], ""task_categories"": [""token-classification""], ""language"": [""ar"", ""ca"", ""de"", ""el"", ""en"", ""es"", ""fr"", ""hi"", ""it"", ""ja"", ""ko"", ""nl"", ""pl"", ""pt"", ""ru"", ""sv"", ""vi"", ""zh""], ""size_categories"": [""10MFM: a Filtered and Multilingual Relation Extraction Dataset
This is the automatically-filtered dataset from the 2023 ACL paper [RED^{FM}: a Filtered and Multilingual Relation Extraction Dataset](https://arxiv.org/abs/2306.09802). If you use the model, please reference this work in your paper:
@inproceedings{huguet-cabot-et-al-2023-redfm-dataset,
title = ""RED$^{\rm FM}$: a Filtered and Multilingual Relation Extraction Dataset"",
author = ""Huguet Cabot, Pere-Llu{\'\i}s and Tedeschi, Simone and Ngonga Ngomo, Axel-Cyrille and
Navigli, Roberto"",
booktitle = ""Proc. of the 61st Annual Meeting of the Association for Computational Linguistics: ACL 2023"",
month = jul,
year = ""2023"",
address = ""Toronto, Canada"",
publisher = ""Association for Computational Linguistics"",
url = ""https://arxiv.org/abs/2306.09802"",
}
## License
SREDFM is licensed under the CC BY-SA 4.0 license. The text of the license can be found [here](https://creativecommons.org/licenses/by-sa/4.0/)."
maywell/korean_textbooks,"{""language"": [""ko""], ""license"": ""apache-2.0"", ""size_categories"": [""1M 1) / tmp.shape[0]}
tokenizer_name = 'mistralai/Mistral-7B-v0.1'
language = 'sk' #Slovak
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
ds = load_dataset('occiglot/tokenizer-wiki-bench', name=language, split='clean')
remove_columns = list(set(ds.column_names) - set([""text""]))
ds = ds.map(lambda x: {'tokens': tokenizer(x['split_text'], add_special_tokens=False)['input_ids']} ,num_proc=256, remove_columns=remove_columns, batched=False)
remove_columns = None#list(set(ds.column_names))
ds = ds.map(lambda x: calculate_metrics(x['tokens']), num_proc=256, remove_columns=remove_columns, batched=False)
df = ds.to_pandas()
print('Fertility: ', df.fertility.mean())
print('Prop. continued words:', df.cont_prop.mean())
```
## Dataset Creation
We loosely follow the approach of [Rust _et al.](https://arxiv.org/abs/2012.15613) using the fast [UDPipe](https://ufal.mff.cuni.cz/udpipe) to pre-split documents into words and subsequently run the tokenizer over isolated words. For all languages we use the respective November 2023 snapshot from [Wikipedia](wikimedia/wikipedia). Since Wikipedia, by nature, contains significantly more numbers and dates than other text and most tokenizers split those into single digits, we filtered all lone-standing numbers from the documents. Additionally, we removed any documents that still contained non-parsed HTML code (less than 1%).
## Licensing
We release our curated benchmark and any associated code under [MIT](https://opensource.org/license/mit) license. However, depending on your use case, the licensing conditions of the original [Wikipedia data](https://huggingface.co/datasets/wikimedia/wikipedia#licensing-information) and [UDPipe](https://github.com/ufal/udpipe/tree/udpipe-2?tab=License-1-ov-file) may apply.
## Supported Languages
This dataset currently contains pre-processed data for the following languages:
| Language | Code |
|:-----------|:-------|
| Afrikaans | af |
| Arabic | ar |
| Armenian | hy |
| Basque | eu |
| Bulgarian | bg |
| Catalan | ca |
| Croatian | hr |
| Czech | cs |
| Danish | da |
| Dutch | nl |
| English | en |
| Estonian | et |
| Finnish | fi |
| French | fr |
| German | de |
| Greek | el |
| Hebrew | he |
| Hindi | hi |
| Hungarian | hu |
| Indonesian | id |
| Irish | ga |
| Italian | it |
| Japanese | ja |
| Korean | ko |
| Latvian | lv |
| Lithuanian | lt |
| Marathi | mr |
| Norwegian | no |
| Persian | fa |
| Polish | pl |
| Portuguese | pt |
| Romanian | ro |
| Russian | ru |
| Sanskrit | sa |
| Serbian | sr |
| Slovak | sk |
| Slovenian | sl |
| Spanish | es |
| Swedish | sv |
| Tamil | ta |
| Telugu | te |
| Turkish | tr |
| Ukrainian | uk |
| Urdu | ur |
| Vietnamese | vi |"
taeminlee/Ko-StrategyQA,"{""language"": [""ko""], ""multilinguality"": [""monolingual""], ""size_categories"": [""1K One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource languages, consider only restricted domains, or are low quality because they are constructed using semi-automatic procedures. In this work, we introduce the FLORES evaluation benchmark, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of different topics and domains. These sentences have been translated in 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond.
**Disclaimer**: *The Flores-101 dataset is hosted by the Facebook and licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).
### Supported Tasks and Leaderboards
#### Multilingual Machine Translation
Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html).
### Languages
The dataset contains parallel sentences for 101 languages, as mentioned in the original [Github](https://github.com/facebookresearch/flores/blob/master/README.md) page for the project. Languages are identified with the ISO 639-3 code (e.g. `eng`, `fra`, `rus`) as in the original dataset.
**New:** Use the configuration `all` to access the full set of parallel sentences for all the available languages in a single command.
## Dataset Structure
### Data Instances
A sample from the `dev` split for the Russian language (`rus` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.
```python
{
'id': 1,
'sentence': 'В понедельник ученые из Медицинской школы Стэнфордского университета объявили об изобретении нового диагностического инструмента, который может сортировать клетки по их типу; это маленький чип, который можно напечатать, используя стандартный струйный принтер примерно за 1 цент США.',
'URL': 'https://en.wikinews.org/wiki/Scientists_say_new_medical_diagnostic_chip_can_sort_cells_anywhere_with_an_inkjet',
'domain': 'wikinews',
'topic': 'health',
'has_image': 0,
'has_hyperlink': 0
}
```
The text is provided as-in the original dataset, without further preprocessing or tokenization.
### Data Fields
- `id`: Row number for the data entry, starting at 1.
- `sentence`: The full sentence in the specific language.
- `URL`: The URL for the English article from which the sentence was extracted.
- `domain`: The domain of the sentence.
- `topic`: The topic of the sentence.
- `has_image`: Whether the original article contains an image.
- `has_hyperlink`: Whether the sentence contains a hyperlink.
### Data Splits
| config| `dev`| `devtest`|
|-----------------:|-----:|---------:|
|all configurations| 997| 1012:|
### Dataset Creation
Please refer to the original article [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The original authors of FLORES-101 are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
### Licensing Information
Licensed with Creative Commons Attribution Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
Please cite the authors if you use these corpora in your work:
```bibtex
@inproceedings{flores101,
title={The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation},
author={Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco and Fan, Angela},
journal={arXiv preprint arXiv:2106.03193},
year={2021}
}
```"
aiana94/polynews,"{""license"": ""cc-by-nc-4.0"", ""task_categories"": [""fill-mask"", ""text-generation""], ""language"": [""am"", ""ar"", ""ay"", ""bm"", ""bbj"", ""bn"", ""bs"", ""bg"", ""ca"", ""cs"", ""ku"", ""da"", ""el"", ""en"", ""et"", ""ee"", ""fil"", ""fi"", ""fr"", ""fon"", ""gu"", ""guw"", ""ha"", ""he"", ""hi"", ""hu"", ""ig"", ""id"", ""it"", ""ja"", ""kk"", ""km"", ""ko"", ""lv"", ""ln"", ""lt"", ""lg"", ""luo"", ""mk"", ""mos"", ""my"", ""nl"", ""no"", ""ne"", ""om"", ""or"", ""pa"", ""pcm"", ""fa"", ""pl"", ""pt"", ""mg"", ""ro"", ""rn"", ""ru"", ""sn"", ""so"", ""es"", ""sr"", ""sq"", ""sw"", ""sv"", ""ta"", ""tet"", ""ti"", ""th"", ""tn"", ""tr"", ""tw"", ""uk"", ""ur"", ""wo"", ""xh"", ""yo"", ""zh"", ""zu"", ""de""], ""multilinguality"": [""multilingual""], ""pretty_name"": ""PolyNews"", ""size_categories"": [""1K>> from datasets import load_dataset
>>> data = load_dataset('aiana94/polynews', 'ron_Latn')
# Please, specify the language code,
# A data point example is below:
{
""text"": ""Un public numeros. Este uimitor succesul după doar trei ediții . "",
""provenance"": ""globalvoices""
}
```
### Data Fields
- text (string): news text
- provenance (string) : source dataset for the news example
### Data Splits
For all languages, there is only the `train` split.
## Dataset Creation
### Curation Rationale
Multiple multilingual, human-translated, datasets containing news texts have been released in recent years.
However, these datasets are stored in different formats and various websites, and many contain numerous near duplicates.
With PolyNews, we aim to provide an easily-accessible, unified and deduplicated dataset that combines these disparate data sources.
It can be used for domain adaptation of language models, language modeling or text generation in both high-resource and low-resource languages.
### Source Data
The source data consists of five multilingual news datasets.
- [Wikinews](https://www.wikinews.org/) (latest dump available in May 2024)
- [GlobalVoices](https://opus.nlpl.eu/GlobalVoices/corpus/version/GlobalVoices) (v2018q4)
- [WMT-News](https://opus.nlpl.eu/WMT-News/corpus/version/WMT-News) (v2019)
- [MasakhaNews](https://huggingface.co/datasets/masakhane/masakhanews) (`train` split)
- [MAFAND](https://huggingface.co/datasets/masakhane/mafand) (`train` split)
#### Data Collection and Processing
We processed the data using a **working script** which covers the entire processing pipeline. It can be found [here](https://github.com/andreeaiana/nase/blob/main/scripts/construct_polynews.sh).
The data processing pipeline consists of:
1. Downloading the WMT-News and GlobalVoices News from OPUS.
2. Downloading the latest dump from WikiNews.
3. Loading the MasakhaNews and MAFAND datasets from Hugging Face Hub (only the `train` splits).
4. Concatenating, per language, all news texts from the source datasets.
5. Data cleaning (e.g., removal of exact duplicates, short texts, texts in other scripts)
6. [MinHash near-deduplication](https://github.com/bigcode-project/bigcode-dataset/blob/main/near_deduplication/minhash_deduplication.py) per language.
### Annotations
We augment the original samples with the `provenance` annotation which specifies the original data source from which a particular examples stems.
#### Personal and Sensitive Information
The data is sourced from newspaper sources and contains mentions of public figures and individuals.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Users should keep in mind that the dataset contains short news texts (e.g., mostly titles), which might limit the applicability of the developed systems to other domains.
## Additional Information
### Licensing Information
The dataset is released under the [CC BY-NC Attribution-NonCommercial 4.0 International license](https://creativecommons.org/licenses/by-nc/4.0/).
### Citation Infomation
**BibTeX:**
```bibtex
@misc{iana2024news,
title={News Without Borders: Domain Adaptation of Multilingual Sentence Embeddings for Cross-lingual News Recommendation},
author={Andreea Iana and Fabian David Schmidt and Goran Glavaš and Heiko Paulheim},
year={2024},
eprint={2406.12634},
archivePrefix={arXiv},
url={https://arxiv.org/abs/2406.12634}
}
```"
daekeun-ml/naver-news-summarization-ko,"{""license"": ""apache-2.0"", ""task_categories"": [""summarization""], ""language"": [""ko""], ""size_categories"": [""10K 함수해석학은 함수의 공간(특히 무한차원)의 탐구 에 주목한다. 함수해석학의 많은 응용분야 중 하나가 양자역학이다. 많은 문제들이 자연스럽게 양과 그 양의 변화율의 관계로 귀착되고, 이러한 문제들이 미분방정식으로 다루어진다. 자연의 많은 현상들이 동역학계로 기술될 수 있다. 혼돈 이론은 이러한 예측 불가능한 현상을 탐구하는 데 상당한 기여를 한다.',
""paragraph_answer"": '변화에 대한 이해와 묘사는 자연과학에 있어서 일반적인 주제이며, 미적분학은 변화를 탐구하는 강력한 도구로서 발전되었다. 함수는 변화하는 양을 묘사함에 있어서 중추적인 개념으로써 떠오르게 된다. 실수와 실변수로 구성된 함수의 엄밀한 탐구가 실해석학이라는 분야로 알려지게 되었고, 복소수에 대한 이와 같은 탐구 분야는 복소해석학이라고 한다. 함수해석학은 함수의 공간(특히 무한차원)의 탐구 에 주목한다. 함수해석학의 많은 응용분야 중 하나가 양자역학이다. 많은 문제들이 자연스럽게 양과 그 양의 변화율의 관계로 귀착되고, 이러한 문제들이 미분방정식으로 다루어진다. 자연의 많은 현상들이 동역학계로 기술될 수 있다. 혼돈 이론은 이러한 예측 불가능한 현상을 탐구하는 데 상당한 기여를 한다.',
""sentence_answer"": ""함수해석학은 함수의 공간(특히 무한차원)의 탐구 에 주목한다.""
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, the same as `paragraph` but with the answer highlighted by a special token ``.
- `paragraph_sentence`: a `string` feature, the same as `paragraph` but with the sentence containing the answer highlighted by a special token ``.
- `sentence_answer`: a `string` feature, the same as `sentence` but with the answer highlighted by a special token ``.
Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features can be used to train a question generation model, but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation, while the `paragraph_sentence` feature is for sentence-aware question generation.
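As a concrete illustration of the highlighting scheme, here is a minimal sketch (the actual special token is not rendered in this card, so `<hl>` below is only an assumed placeholder, as is the function name):

```python
def highlight_answer(paragraph: str, answer: str, hl_token: str = "<hl>") -> str:
    # Wrap the first occurrence of the answer span in the highlight token,
    # mirroring how `paragraph_answer` relates to `paragraph`.
    idx = paragraph.find(answer)
    if idx < 0:
        raise ValueError("answer not found in paragraph")
    end = idx + len(answer)
    return f"{paragraph[:idx]}{hl_token} {answer} {hl_token}{paragraph[end:]}"
```

Applying the same function to the answer-bearing sentence instead of the full paragraph would produce the `sentence_answer`-style input.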
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|54556| 5766 |5766 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = ""{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration: {A} {U}nified {B}enchmark and {E}valuation"",
author = ""Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose"",
booktitle = ""Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing"",
month = dec,
year = ""2022"",
address = ""Abu Dhabi, U.A.E."",
publisher = ""Association for Computational Linguistics"",
}
```"
MoritzLaurer/multilingual-NLI-26lang-2mil7,"{""annotations_creators"": [""crowdsourced""], ""language_creators"": [""machinetranslation""], ""size_categories"": [""1MMIMIC-IT Dataset Download\nAgreement\nS-Lab, Nanyang Technological University (S-Lab) provides access to\nthe MIMIC-IT Dataset (referred to as the Dataset) under the following\nconditions.
\nBy signing, the researcher agrees to the following terms of use:
\n\n- S-Lab makes no warranties regarding the Dataset, including but not\nlimited to being up-to-date, correct or complete. S-Lab cannot be held\nliable for providing access to the Dataset or usage of the Dataset.
\n- The Dataset should only be used for scientific or research purposes.\nAny other use is explicitly prohibited.
\n- The researcher agrees to the following terms and conditions of data\nsources of the Dataset:\n
\n- The researcher takes full responsibility for usage of the Dataset at\nany time.
\n- S-Lab reserves the right to terminate the researcher's access to the\nDataset at any time.
\n- The place of jurisdiction is Singapore.
\n- If any part of this agreement is legally invalid, this shall not\naffect the remaining agreement.
\n
\n"", ""extra_gated_fields"": {""Verifiable Name"": ""text"", ""Institution Email"": ""text"", ""Institutional Affiliation"": ""text"", ""I agree with the agreement"": ""checkbox""}, ""dataset_info"": [{""config_name"": ""CGD"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""image""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 26335666892.75, ""num_examples"": 141869}], ""download_size"": 13284595128, ""dataset_size"": 26335666892.75}, {""config_name"": ""CGD_Images"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""image"", ""dtype"": ""image""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 10977030309.125, ""num_examples"": 118287}], ""download_size"": 10976812684, ""dataset_size"": 10977030309.125}, {""config_name"": ""CGD_Instructions"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""string""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 42088070, ""num_examples"": 141869}], ""download_size"": 14266985, ""dataset_size"": 42088070}, {""config_name"": ""DC_Instructions"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""string""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 718166107, ""num_examples"": 226242}], ""download_size"": 50424022, ""dataset_size"": 718166107}, {""config_name"": ""E4D_Instructions"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": 
""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""string""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3647794122, ""num_examples"": 2729222}], ""download_size"": 396261870, ""dataset_size"": 3647794122}, {""config_name"": ""LACONV"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""image""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 13374859898.25, ""num_examples"": 256870}], ""download_size"": 3096198512, ""dataset_size"": 13374859898.25}, {""config_name"": ""LACONV_Instructions"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""string""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 119528906, ""num_examples"": 256870}], ""download_size"": 54731579, ""dataset_size"": 119528906}, {""config_name"": ""LACR_I2I"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""image""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 4027892178.625, ""num_examples"": 76643}], ""download_size"": 3988169106, ""dataset_size"": 4027892178.625}, {""config_name"": ""LACR_I2I_Instructions"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""string""}, 
{""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 89534975, ""num_examples"": 76643}], ""download_size"": 42911696, ""dataset_size"": 89534975}, {""config_name"": ""LACR_T2T"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""image""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 4028004669.625, ""num_examples"": 76643}], ""download_size"": 3988281406, ""dataset_size"": 4028004669.625}, {""config_name"": ""LACR_T2T_Instructions"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""string""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 89647466, ""num_examples"": 76643}], ""download_size"": 43136360, ""dataset_size"": 89647466}, {""config_name"": ""LADD"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""image""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1293641342.0, ""num_examples"": 23240}], ""download_size"": 1285923315, ""dataset_size"": 1293641342.0}, {""config_name"": ""LADD_Instructions"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""string""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 16659871, ""num_examples"": 23240}], 
""download_size"": 7472431, ""dataset_size"": 16659871}, {""config_name"": ""LA_Images"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""image"", ""dtype"": ""image""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 4191197157.25, ""num_examples"": 81398}], ""download_size"": 4190198358, ""dataset_size"": 4191197157.25}, {""config_name"": ""SD"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""image""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3098784669.75, ""num_examples"": 15989}], ""download_size"": 1669131271, ""dataset_size"": 3098784669.75}, {""config_name"": ""SD_Images"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""image"", ""dtype"": ""image""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2523484759.75, ""num_examples"": 26154}], ""download_size"": 2438558263, ""dataset_size"": 2523484759.75}, {""config_name"": ""SD_Instructions"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""string""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 4112174, ""num_examples"": 15989}], ""download_size"": 1237759, ""dataset_size"": 4112174}, {""config_name"": ""SN"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""image""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 7979712053.04, ""num_examples"": 6640}], ""download_size"": 3401191449, ""dataset_size"": 
7979712053.04}, {""config_name"": ""SN_Images"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""image"", ""dtype"": ""image""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 859886037.875, ""num_examples"": 11513}], ""download_size"": 859698909, ""dataset_size"": 859886037.875}, {""config_name"": ""SN_Instructions"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""string""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 7230721, ""num_examples"": 6640}], ""download_size"": 1324832, ""dataset_size"": 7230721}, {""config_name"": ""TVC"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""image""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 130408953299.393, ""num_examples"": 137607}], ""download_size"": 79524699480, ""dataset_size"": 130408953299.393}, {""config_name"": ""TVC_Images"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""image"", ""dtype"": ""image""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 13056626872.375, ""num_examples"": 227701}], ""download_size"": 13052443854, ""dataset_size"": 13056626872.375}, {""config_name"": ""TVC_Instructions"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""string""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 161582906, ""num_examples"": 137607}], ""download_size"": 30882217, ""dataset_size"": 161582906}, 
{""config_name"": ""VST"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""image""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 7093814625.328, ""num_examples"": 32893}], ""download_size"": 4263530868, ""dataset_size"": 7093814625.328}, {""config_name"": ""VST_Images"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""image"", ""dtype"": ""image""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 14529719834.625, ""num_examples"": 144755}], ""download_size"": 14282540973, ""dataset_size"": 14529719834.625}, {""config_name"": ""VST_Instructions"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""images"", ""sequence"": ""string""}, {""name"": ""related instructions"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 30877616, ""num_examples"": 32893}], ""download_size"": 9311504, ""dataset_size"": 30877616}], ""configs"": [{""config_name"": ""CGD"", ""data_files"": [{""split"": ""train"", ""path"": ""CGD/train-*""}]}, {""config_name"": ""CGD_Images"", ""data_files"": [{""split"": ""train"", ""path"": ""CGD_Images/train-*""}]}, {""config_name"": ""CGD_Instructions"", ""data_files"": [{""split"": ""train"", ""path"": ""CGD_Instructions/train-*""}]}, {""config_name"": ""DC_Instructions"", ""data_files"": [{""split"": ""train"", ""path"": ""DC_Instructions/train-*""}]}, {""config_name"": ""E4D_Instructions"", ""data_files"": [{""split"": ""train"", ""path"": ""E4D_Instructions/train-*""}]}, {""config_name"": ""LACONV"", ""data_files"": [{""split"": ""train"", ""path"": ""LACONV/train-*""}]}, {""config_name"": ""LACONV_Instructions"", ""data_files"": [{""split"": ""train"", ""path"": 
""LACONV_Instructions/train-*""}]}, {""config_name"": ""LACR_I2I"", ""data_files"": [{""split"": ""train"", ""path"": ""LACR_I2I/train-*""}]}, {""config_name"": ""LACR_I2I_Instructions"", ""data_files"": [{""split"": ""train"", ""path"": ""LACR_I2I_Instructions/train-*""}]}, {""config_name"": ""LACR_T2T"", ""data_files"": [{""split"": ""train"", ""path"": ""LACR_T2T/train-*""}]}, {""config_name"": ""LACR_T2T_Instructions"", ""data_files"": [{""split"": ""train"", ""path"": ""LACR_T2T_Instructions/train-*""}]}, {""config_name"": ""LADD"", ""data_files"": [{""split"": ""train"", ""path"": ""LADD/train-*""}]}, {""config_name"": ""LADD_Instructions"", ""data_files"": [{""split"": ""train"", ""path"": ""LADD_Instructions/train-*""}]}, {""config_name"": ""LA_Images"", ""data_files"": [{""split"": ""train"", ""path"": ""LA_Images/train-*""}]}, {""config_name"": ""SD"", ""data_files"": [{""split"": ""train"", ""path"": ""SD/train-*""}]}, {""config_name"": ""SD_Images"", ""data_files"": [{""split"": ""train"", ""path"": ""SD_Images/train-*""}]}, {""config_name"": ""SD_Instructions"", ""data_files"": [{""split"": ""train"", ""path"": ""SD_Instructions/train-*""}]}, {""config_name"": ""SN"", ""data_files"": [{""split"": ""train"", ""path"": ""SN/train-*""}]}, {""config_name"": ""SN_Images"", ""data_files"": [{""split"": ""train"", ""path"": ""SN_Images/train-*""}]}, {""config_name"": ""SN_Instructions"", ""data_files"": [{""split"": ""train"", ""path"": ""SN_Instructions/train-*""}]}, {""config_name"": ""TVC"", ""data_files"": [{""split"": ""train"", ""path"": ""TVC/train-*""}]}, {""config_name"": ""TVC_Images"", ""data_files"": [{""split"": ""train"", ""path"": ""TVC_Images/train-*""}]}, {""config_name"": ""TVC_Instructions"", ""data_files"": [{""split"": ""train"", ""path"": ""TVC_Instructions/train-*""}]}, {""config_name"": ""VST"", ""data_files"": [{""split"": ""train"", ""path"": ""VST/train-*""}]}, {""config_name"": ""VST_Images"", ""data_files"": [{""split"": 
""train"", ""path"": ""VST_Images/train-*""}]}, {""config_name"": ""VST_Instructions"", ""data_files"": [{""split"": ""train"", ""path"": ""VST_Instructions/train-*""}]}]}","
1S-Lab, Nanyang Technological University
2Microsoft Research, Redmond
♠ Co-Project Lead
* Equal Contribution
✉ Corresponding Author
## Dataset Description
- **Homepage: https://otter-ntu.github.io**
- **Repository: https://github.com/Luodian/Otter**
- **Paper: https://arxiv.org/abs/2306.05425**
**Note 1: To reduce memory consumption during image loading and improve loading speed, we are converting the JSON format of images to the Parquet format. For detailed information, please refer to [this link](https://github.com/Luodian/Otter/blob/main/docs/mimicit_format.md).**
**Note 2: We are uploading the full version of `DC` and `E4D`, the new files are indicated by the suffix `1207`.**
### Dataset Summary
MIMIC-IT offers a diverse and extensive dataset of 2.8M multimodal instruction-response pairs, designed to enhance the performance of Vision-Language Models (VLMs) in real-life scenarios, enabling VLMs to excel in perception, reasoning, and planning while also catering to a multilingual audience.
MIMIC-IT enables the training of an egocentric visual assistant model that can answer questions such as **Hey, do you think I left my keys on the table?**. Harness the power of MIMIC-IT to unlock the full potential of your AI-driven visual assistant and elevate your interactive vision-language tasks to new heights.
MIMIC-IT provides multilingual instructions, supporting English, Chinese, Korean, Japanese, German, French, Spanish, and Arabic, thereby allowing a larger global audience to benefit from the convenience brought about by advances in artificial intelligence.
## Using MIMIC-IT
We have already uploaded the `images.parquet` file. You can check [`tools/load.py`](tools/load.py) to learn how to load the dataset (`instruction.json` + `images.parquet`) and check the integrity of the whole dataset.
You can also use [this code](https://huggingface.co/datasets/pufanyi/MIMICIT/blob/main/tools/convert_to_parquet.py) to convert `image.json` to the `parquet` version yourself.
You can follow the steps below to obtain the MIMIC-IT dataset. Each task (e.g. `DC`, `LA`) in MIMIC-IT is composed of three parts:
1. `xx.json` file: the images in base64 format.
2. `xx_instructions.json` file: the instruction-response pairs (also includes image ids and related instructions ids for each instruction-response pair) for each task.
3. `xx_train.json` file: the customized related instruction-response pairs for each instruction.
You can directly download the contents of the `data` folder, which is laid out as follows:
```plain
data/
CGD/
CGD.json
CGD_images_preview.csv
CGD_instructions.json
...
```
For each `dataset_name`, there are three main files **except for `DC` and `E4D`**:
1. `{dataset_name}.json`: Stores the image numbers and their corresponding base64 codes in lossless compressed PNG format.
```json
{
""image_id_1"": ""base64_code_1"",
""image_id_2"": ""base64_code_2"",
...
}
```
2. `{dataset_name}_images_preview.csv`: Stores the image numbers and their corresponding base64 codes in lossy compressed JPG format, mainly used for display in the Dataset Card.
```csv
id, image
""image_id_1"", ""base64_code_1""
""image_id_2"", ""base64_code_2""
...
```
3. `{dataset_name}_instructions.json`: Stores each instruction and its associated answer.
```json
{
""meta"": {
""version"": current_version,
""time"": update_time,
""author"": ""ntu""
},
""data"": {
""instruction_id_1"": {
""instruction"": ""instruction_1"",
""answer"": ""answer_of_instruction_1"",
""image_ids"": [
""image_id_1"",
""image_id_2"",
...
],
""rel_ins_ids"": [
""related_instruction_id_1"",
""related_instruction_id_2"",
...
]
},
...
}
}
```
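The per-task files described above can be consumed with plain standard-library code. The sketch below is illustrative only (the function names are not part of any MIMIC-IT tooling): it loads a task's image store and instruction file, decodes an image, assembles one instruction-response pair, and reads the lossy `_images_preview.csv`:

```python
import base64
import csv
import json

def load_task(images_path, instructions_path):
    # {dataset_name}.json maps image ids to base64-encoded PNGs;
    # {dataset_name}_instructions.json holds {"meta": ..., "data": {...}}.
    with open(images_path) as f:
        images = json.load(f)
    with open(instructions_path) as f:
        instructions = json.load(f)["data"]
    return images, instructions

def decode_image(images, image_id):
    # Recover the raw PNG bytes for one image id.
    return base64.b64decode(images[image_id])

def pair_example(images, instructions, ins_id):
    # Assemble one instruction-response pair with its decoded images.
    ex = instructions[ins_id]
    return {
        "instruction": ex["instruction"],
        "answer": ex["answer"],
        "images": [decode_image(images, i) for i in ex["image_ids"]],
        "related": ex["rel_ins_ids"],
    }

def read_preview(path):
    # {dataset_name}_images_preview.csv: header "id, image", then rows of
    # (image id, base64-encoded lossy JPG).
    with open(path, newline="") as f:
        reader = csv.reader(f, skipinitialspace=True)
        next(reader)  # skip header
        return {row[0]: base64.b64decode(row[1]) for row in reader}
```

The `rel_ins_ids` list lets you pull the related instruction-response pairs for in-context examples by calling `pair_example` again on each related id.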
Of course, you can also use `wget` or `curl` for direct downloads. Below is an example.
Before proceeding with the downloads, you need to set your Hugging Face token. For that, please refer to [this page](https://huggingface.co/docs/hub/security-tokens).
```shell
$ # Set Hugging Face Token
$ HF_TOKEN=""YOUR_HUGGING_FACE_TOKEN""
$ # Set the dataset you want to download
$ DATASET_NAME=""DATASET_YOU_WANT_TO_DOWNLOAD"" # e.g. CGD
$ # Download {DATASET_NAME}.json
$ wget --header=""Authorization: Bearer $HF_TOKEN"" ""https://huggingface.co/datasets/pufanyi/MIMICIT/resolve/main/data/${DATASET_NAME}/${DATASET_NAME}.json""
$ # Download {DATASET_NAME}_instructions.json
$ wget --header=""Authorization: Bearer $HF_TOKEN"" ""https://huggingface.co/datasets/pufanyi/MIMICIT/resolve/main/data/${DATASET_NAME}/${DATASET_NAME}_instructions.json""
$ # Download {DATASET_NAME}_images_preview.csv (usually not necessary)
$ wget --header=""Authorization: Bearer $HF_TOKEN"" ""https://huggingface.co/datasets/pufanyi/MIMICIT/resolve/main/data/${DATASET_NAME}/${DATASET_NAME}_images_preview.csv""
```
Or
```shell
$ # Set Hugging Face Token
$ HF_TOKEN=""YOUR_HUGGING_FACE_TOKEN""
$ # Set the dataset you want to download
$ DATASET_NAME=""DATASET_YOU_WANT_TO_DOWNLOAD"" # e.g. CGD
$ # Download {DATASET_NAME}.json
$ curl -LJO -H ""Authorization: Bearer $HF_TOKEN"" ""https://huggingface.co/datasets/pufanyi/MIMICIT/resolve/main/data/${DATASET_NAME}/${DATASET_NAME}.json""
$ # Download {DATASET_NAME}_instructions.json
$ curl -LJO -H ""Authorization: Bearer $HF_TOKEN"" ""https://huggingface.co/datasets/pufanyi/MIMICIT/resolve/main/data/${DATASET_NAME}/${DATASET_NAME}_instructions.json""
$ # Download {DATASET_NAME}_images_preview.csv (usually not necessary)
$ curl -LJO -H ""Authorization: Bearer $HF_TOKEN"" ""https://huggingface.co/datasets/pufanyi/MIMICIT/resolve/main/data/${DATASET_NAME}/${DATASET_NAME}_images_preview.csv""
```
Alternatively, you can use `datasets.load_dataset` for downloading. However, due to Hugging Face's size limitations, all images can only be loaded in JPG format. Below is an example using the `CGD` dataset:
### CGD_Images
Download the JPG format images and their corresponding identifiers:
```python
from datasets import load_dataset
data = load_dataset(""pufanyi/MIMICIT"", ""CGD_Images"")
```
The format will be like:
```json
{
""id"": ""CGD_IMG_000000426149"",
""image"":
}
```
It should be noted that, due to size limitations, for `DC` (Dense Captions), this command will only extract a portion of the images from the `DC` collection for downloading.
### CGD_Instructions
Download all instructions:
```python
from datasets import load_dataset
data = load_dataset(""pufanyi/MIMICIT"", ""CGD_Instructions"")
```
The format will be like:
```json
{
""id"": ""CGD_INS_000000"",
""instruction"": ""What is the difference between the two pizzas in these images?"",
""answer"": ""The pizza in the first image is on a red plate and being held by an old lady, while the pizza in the second image is on a metal counter being prepared by a woman in a blue shirt."",
""images"": [
""CGD_IMG_000000069568"",
""CGD_IMG_000000328270""
],
""related instructions"": [
""CGD_INS_000001""
]
}
```
### CGD_Preview
Download all instructions along with their corresponding JPG images:
```python
from datasets import load_dataset
data = load_dataset(""pufanyi/MIMICIT"", ""CGD_Preview"")
```
The format will be like:
```json
{
""id"": ""CGD_INS_000000"",
""instruction"": ""What is the difference between the two pizzas in these images?"",
""answer"": ""The pizza in the first image is on a red plate and being held by an old lady, while the pizza in the second image is on a metal counter being prepared by a woman in a blue shirt."",
""images"": [
,
],
""related instructions"": [
""CGD_INS_000001""
]
}
```
It should be noted that, due to size limitations, for `DC` (Dense Captions), this command will only extract a portion of the images from the `DC` collection for downloading."
wecover/OPUS_GlobalVoices,{},"---
configs:
- config_name: default
data_files:
- split: train
path: '*/*/train.parquet'
- split: valid
path: '*/*/valid.parquet'
- split: test
path: '*/*/test.parquet'
- config_name: am
data_files:
- split: train
path: '*/*am*/train.parquet'
- split: test
path: '*/*am*/test.parquet'
- split: valid
path: '*/*am*/valid.parquet'
- config_name: ar
data_files:
- split: train
path: '*/*ar*/train.parquet'
- split: test
path: '*/*ar*/test.parquet'
- split: valid
path: '*/*ar*/valid.parquet'
- config_name: bn
data_files:
- split: train
path: '*/*bn*/train.parquet'
- split: test
path: '*/*bn*/test.parquet'
- split: valid
path: '*/*bn*/valid.parquet'
- config_name: ca
data_files:
- split: train
path: '*/*ca*/train.parquet'
- split: test
path: '*/*ca*/test.parquet'
- split: valid
path: '*/*ca*/valid.parquet'
- config_name: de
data_files:
- split: train
path: '*/*de*/train.parquet'
- split: test
path: '*/*de*/test.parquet'
- split: valid
path: '*/*de*/valid.parquet'
- config_name: el
data_files:
- split: train
path: '*/*el*/train.parquet'
- split: test
path: '*/*el*/test.parquet'
- split: valid
path: '*/*el*/valid.parquet'
- config_name: en
data_files:
- split: train
path: '*/*en*/train.parquet'
- split: test
path: '*/*en*/test.parquet'
- split: valid
path: '*/*en*/valid.parquet'
- config_name: es
data_files:
- split: train
path: '*/*es*/train.parquet'
- split: test
path: '*/*es*/test.parquet'
- split: valid
path: '*/*es*/valid.parquet'
- config_name: fa
data_files:
- split: train
path: '*/*fa*/train.parquet'
- split: test
path: '*/*fa*/test.parquet'
- split: valid
path: '*/*fa*/valid.parquet'
- config_name: fr
data_files:
- split: train
path: '*/*fr*/train.parquet'
- split: test
path: '*/*fr*/test.parquet'
- split: valid
path: '*/*fr*/valid.parquet'
- config_name: hi
data_files:
- split: train
path: '*/*hi*/train.parquet'
- split: test
path: '*/*hi*/test.parquet'
- split: valid
path: '*/*hi*/valid.parquet'
- config_name: hu
data_files:
- split: train
path: '*/*hu*/train.parquet'
- split: test
path: '*/*hu*/test.parquet'
- split: valid
path: '*/*hu*/valid.parquet'
- config_name: id
data_files:
- split: train
path: '*/*id*/train.parquet'
- split: test
path: '*/*id*/test.parquet'
- split: valid
path: '*/*id*/valid.parquet'
- config_name: it
data_files:
- split: train
path: '*/*it*/train.parquet'
- split: test
path: '*/*it*/test.parquet'
- split: valid
path: '*/*it*/valid.parquet'
- config_name: mg
data_files:
- split: train
path: '*/*mg*/train.parquet'
- split: test
path: '*/*mg*/test.parquet'
- split: valid
path: '*/*mg*/valid.parquet'
- config_name: mk
data_files:
- split: train
path: '*/*mk*/train.parquet'
- split: test
path: '*/*mk*/test.parquet'
- split: valid
path: '*/*mk*/valid.parquet'
- config_name: my
data_files:
- split: train
path: '*/*my*/train.parquet'
- split: test
path: '*/*my*/test.parquet'
- split: valid
path: '*/*my*/valid.parquet'
- config_name: nl
data_files:
- split: train
path: '*/*nl*/train.parquet'
- split: test
path: '*/*nl*/test.parquet'
- split: valid
path: '*/*nl*/valid.parquet'
- config_name: pl
data_files:
- split: train
path: '*/*pl*/train.parquet'
- split: test
path: '*/*pl*/test.parquet'
- split: valid
path: '*/*pl*/valid.parquet'
- config_name: pt
data_files:
- split: train
path: '*/*pt*/train.parquet'
- split: test
path: '*/*pt*/test.parquet'
- split: valid
path: '*/*pt*/valid.parquet'
- config_name: ru
data_files:
- split: train
path: '*/*ru*/train.parquet'
- split: test
path: '*/*ru*/test.parquet'
- split: valid
path: '*/*ru*/valid.parquet'
- config_name: sr
data_files:
- split: train
path: '*/*sr*/train.parquet'
- split: test
path: '*/*sr*/test.parquet'
- split: valid
path: '*/*sr*/valid.parquet'
- config_name: sw
data_files:
- split: train
path: '*/*sw*/train.parquet'
- split: test
path: '*/*sw*/test.parquet'
- split: valid
path: '*/*sw*/valid.parquet'
- config_name: tr
data_files:
- split: train
path: '*/*tr*/train.parquet'
- split: test
path: '*/*tr*/test.parquet'
- split: valid
path: '*/*tr*/valid.parquet'
- config_name: ur
data_files:
- split: train
path: '*/*ur*/train.parquet'
- split: test
path: '*/*ur*/test.parquet'
- split: valid
path: '*/*ur*/valid.parquet'
- config_name: zhs
data_files:
- split: train
path: '*/*zhs*/train.parquet'
- split: test
path: '*/*zhs*/test.parquet'
- split: valid
path: '*/*zhs*/valid.parquet'
- config_name: zht
data_files:
- split: train
path: '*/*zht*/train.parquet'
- split: test
path: '*/*zht*/test.parquet'
- split: valid
path: '*/*zht*/valid.parquet'
- config_name: bg
data_files:
- split: train
path: '*/*bg*/train.parquet'
- split: test
path: '*/*bg*/test.parquet'
- split: valid
path: '*/*bg*/valid.parquet'
- config_name: cs
data_files:
- split: train
path: '*/*cs*/train.parquet'
- split: test
path: '*/*cs*/test.parquet'
- split: valid
path: '*/*cs*/valid.parquet'
- config_name: da
data_files:
- split: train
path: '*/*da*/train.parquet'
- split: test
path: '*/*da*/test.parquet'
- split: valid
path: '*/*da*/valid.parquet'
- config_name: eo
data_files:
- split: train
path: '*/*eo*/train.parquet'
- split: test
path: '*/*eo*/test.parquet'
- split: valid
path: '*/*eo*/valid.parquet'
- config_name: he
data_files:
- split: train
path: '*/*he*/train.parquet'
- split: test
path: '*/*he*/test.parquet'
- split: valid
path: '*/*he*/valid.parquet'
- config_name: km
data_files:
- split: train
path: '*/*km*/train.parquet'
- split: test
path: '*/*km*/test.parquet'
- split: valid
path: '*/*km*/valid.parquet'
- config_name: ko
data_files:
- split: train
path: '*/*ko*/train.parquet'
- split: test
path: '*/*ko*/test.parquet'
- split: valid
path: '*/*ko*/valid.parquet'
- config_name: ku
data_files:
- split: train
path: '*/*ku*/train.parquet'
- split: test
path: '*/*ku*/test.parquet'
- split: valid
path: '*/*ku*/valid.parquet'
- config_name: ne
data_files:
- split: train
path: '*/*ne*/train.parquet'
- split: test
path: '*/*ne*/test.parquet'
- split: valid
path: '*/*ne*/valid.parquet'
- config_name: or
data_files:
- split: train
path: '*/*or*/train.parquet'
- split: test
path: '*/*or*/test.parquet'
- split: valid
path: '*/*or*/valid.parquet'
- config_name: pa
data_files:
- split: train
path: '*/*pa*/train.parquet'
- split: test
path: '*/*pa*/test.parquet'
- split: valid
path: '*/*pa*/valid.parquet'
- config_name: ro
data_files:
- split: train
path: '*/*ro*/train.parquet'
- split: test
path: '*/*ro*/test.parquet'
- split: valid
path: '*/*ro*/valid.parquet'
- config_name: sq
data_files:
- split: train
path: '*/*sq*/train.parquet'
- split: test
path: '*/*sq*/test.parquet'
- split: valid
path: '*/*sq*/valid.parquet'
- config_name: sv
data_files:
- split: train
path: '*/*sv*/train.parquet'
- split: test
path: '*/*sv*/test.parquet'
- split: valid
path: '*/*sv*/valid.parquet'
language:
- am
- ar
- bg
- bn
- ca
- cs
- da
- de
- el
- en
- eo
- es
- fa
- fr
- he
- hi
- hu
- id
- it
- km
- ko
- ku
- mg
- mk
- my
- ne
- nl
- or
- pa
- pl
- pt
- ro
- ru
- sq
- sr
- sv
- sw
- tr
- ur
- zh
---"
CohereForAI/Global-MMLU,"{""dataset_info"": [{""config_name"": ""am"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 209505, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 12085768, ""num_examples"": 14042}], ""download_size"": 10260448, ""dataset_size"": 12295273}, {""config_name"": ""ar"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": 
""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 202343, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 11621977, ""num_examples"": 14042}], ""download_size"": 9817049, ""dataset_size"": 11824320}, {""config_name"": ""bn"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 301875, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 18061158, ""num_examples"": 14042}], ""download_size"": 12524784, ""dataset_size"": 18363033}, {""config_name"": ""cs"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, 
{""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 149807, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 8607308, ""num_examples"": 14042}], ""download_size"": 8640151, ""dataset_size"": 8757115}, {""config_name"": ""de"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 162406, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 9575360, ""num_examples"": 14042}], ""download_size"": 9187953, ""dataset_size"": 9737766}, {""config_name"": ""el"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, 
{""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 254308, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 14502137, ""num_examples"": 14042}], ""download_size"": 12288940, ""dataset_size"": 14756445}, {""config_name"": ""en"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 146364, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 8440632, ""num_examples"": 14042}], ""download_size"": 7912429, ""dataset_size"": 8586996}, {""config_name"": ""es"", ""features"": 
[{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 160633, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 9399724, ""num_examples"": 14042}], ""download_size"": 8752720, ""dataset_size"": 9560357}, {""config_name"": ""fa"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], 
""splits"": [{""name"": ""dev"", ""num_bytes"": 202609, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 11611890, ""num_examples"": 14042}], ""download_size"": 9564082, ""dataset_size"": 11814499}, {""config_name"": ""fil"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 165182, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 9510179, ""num_examples"": 14042}], ""download_size"": 8564879, ""dataset_size"": 9675361}, {""config_name"": ""fr"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": 
""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 166173, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 9858873, ""num_examples"": 14042}], ""download_size"": 9202595, ""dataset_size"": 10025046}, {""config_name"": ""ha"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 147406, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 8445707, ""num_examples"": 14042}], ""download_size"": 7665529, ""dataset_size"": 8593113}, {""config_name"": ""he"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", 
""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 178912, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 10248592, ""num_examples"": 14042}], ""download_size"": 8818618, ""dataset_size"": 10427504}, {""config_name"": ""hi"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 308254, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 17970478, ""num_examples"": 14042}], ""download_size"": 12407854, ""dataset_size"": 18278732}, {""config_name"": ""id"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": 
""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 154692, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 8886643, ""num_examples"": 14042}], ""download_size"": 7793365, ""dataset_size"": 9041335}, {""config_name"": ""ig"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 157376, ""num_examples"": 285}, {""name"": ""test"", 
""num_bytes"": 9221405, ""num_examples"": 14042}], ""download_size"": 7644102, ""dataset_size"": 9378781}, {""config_name"": ""it"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 157547, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 9374481, ""num_examples"": 14042}], ""download_size"": 8873034, ""dataset_size"": 9532028}, {""config_name"": ""ja"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", 
""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 167646, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 9830716, ""num_examples"": 14042}], ""download_size"": 8826164, ""dataset_size"": 9998362}, {""config_name"": ""ko"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 160572, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 9454859, ""num_examples"": 14042}], ""download_size"": 8640457, ""dataset_size"": 9615431}, {""config_name"": ""ky"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", 
""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 235001, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 13483934, ""num_examples"": 14042}], ""download_size"": 11148813, ""dataset_size"": 13718935}, {""config_name"": ""lt"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 148917, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 8504949, ""num_examples"": 14042}], ""download_size"": 8416467, ""dataset_size"": 8653866}, {""config_name"": ""mg"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", 
""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 161992, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 9337415, ""num_examples"": 14042}], ""download_size"": 8011427, ""dataset_size"": 9499407}, {""config_name"": ""ms"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 152549, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 8823844, ""num_examples"": 14042}], ""download_size"": 7783581, ""dataset_size"": 8976393}, {""config_name"": ""ne"", 
""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 294790, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 16972110, ""num_examples"": 14042}], ""download_size"": 11895818, ""dataset_size"": 17266900}, {""config_name"": ""nl"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", 
""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 158122, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 9099176, ""num_examples"": 14042}], ""download_size"": 8565959, ""dataset_size"": 9257298}, {""config_name"": ""ny"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 151315, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 8686819, ""num_examples"": 14042}], ""download_size"": 7822699, ""dataset_size"": 8838134}, {""config_name"": ""pl"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": 
""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 157290, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 8980730, ""num_examples"": 14042}], ""download_size"": 8981270, ""dataset_size"": 9138020}, {""config_name"": ""pt"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 154592, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 8983299, ""num_examples"": 14042}], ""download_size"": 8517588, ""dataset_size"": 9137891}, {""config_name"": ""ro"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, 
{""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 158311, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 9163189, ""num_examples"": 14042}], ""download_size"": 8773232, ""dataset_size"": 9321500}, {""config_name"": ""ru"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 246059, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 14059847, ""num_examples"": 14042}], ""download_size"": 11904365, ""dataset_size"": 14305906}, {""config_name"": ""si"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, 
{""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 297843, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 17374939, ""num_examples"": 14042}], ""download_size"": 12790098, ""dataset_size"": 17672782}, {""config_name"": ""sn"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 147355, ""num_examples"": 285}, {""name"": 
""test"", ""num_bytes"": 8507368, ""num_examples"": 14042}], ""download_size"": 7962672, ""dataset_size"": 8654723}, {""config_name"": ""so"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 156282, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 9033243, ""num_examples"": 14042}], ""download_size"": 8706693, ""dataset_size"": 9189525}, {""config_name"": ""sr"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": 
""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 221580, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 12695546, ""num_examples"": 14042}], ""download_size"": 10748391, ""dataset_size"": 12917126}, {""config_name"": ""sv"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 147893, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 8549708, ""num_examples"": 14042}], ""download_size"": 8181997, ""dataset_size"": 8697601}, {""config_name"": ""sw"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": 
""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 147069, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 8653210, ""num_examples"": 14042}], ""download_size"": 7932986, ""dataset_size"": 8800279}, {""config_name"": ""te"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 315724, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 18170058, ""num_examples"": 14042}], ""download_size"": 12631358, ""dataset_size"": 18485782}, {""config_name"": ""tr"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, 
{""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 153426, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 8833244, ""num_examples"": 14042}], ""download_size"": 8351339, ""dataset_size"": 8986670}, {""config_name"": ""uk"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 229888, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 13233771, ""num_examples"": 14042}], ""download_size"": 11347842, ""dataset_size"": 
13463659}, {""config_name"": ""vi"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 185712, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 10604332, ""num_examples"": 14042}], ""download_size"": 8971266, ""dataset_size"": 10790044}, {""config_name"": ""yo"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, 
{""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 153810, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 10694916, ""num_examples"": 14042}], ""download_size"": 9303668, ""dataset_size"": 10848726}, {""config_name"": ""zh"", ""features"": [{""name"": ""sample_id"", ""dtype"": ""string""}, {""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subject_category"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""option_a"", ""dtype"": ""string""}, {""name"": ""option_b"", ""dtype"": ""string""}, {""name"": ""option_c"", ""dtype"": ""string""}, {""name"": ""option_d"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""required_knowledge"", ""dtype"": ""string""}, {""name"": ""time_sensitive"", ""dtype"": ""string""}, {""name"": ""reference"", ""dtype"": ""string""}, {""name"": ""culture"", ""dtype"": ""string""}, {""name"": ""region"", ""dtype"": ""string""}, {""name"": ""country"", ""dtype"": ""string""}, {""name"": ""cultural_sensitivity_label"", ""dtype"": ""string""}, {""name"": ""is_annotated"", ""dtype"": ""bool""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 127577, ""num_examples"": 285}, {""name"": ""test"", ""num_bytes"": 7393764, ""num_examples"": 14042}], ""download_size"": 7322261, ""dataset_size"": 7521341}], ""configs"": [{""config_name"": ""am"", ""data_files"": [{""split"": ""test"", ""path"": ""am/test-*""}, {""split"": ""dev"", ""path"": ""am/dev-*""}]}, {""config_name"": ""ar"", ""data_files"": [{""split"": ""test"", ""path"": ""ar/test-*""}, {""split"": ""dev"", ""path"": ""ar/dev-*""}]}, {""config_name"": ""bn"", ""data_files"": [{""split"": ""test"", ""path"": ""bn/test-*""}, {""split"": ""dev"", ""path"": ""bn/dev-*""}]}, {""config_name"": ""cs"", ""data_files"": [{""split"": ""test"", ""path"": ""cs/test-*""}, {""split"": ""dev"", ""path"": ""cs/dev-*""}]}, {""config_name"": ""de"", ""data_files"": 
[{""split"": ""test"", ""path"": ""de/test-*""}, {""split"": ""dev"", ""path"": ""de/dev-*""}]}, {""config_name"": ""el"", ""data_files"": [{""split"": ""test"", ""path"": ""el/test-*""}, {""split"": ""dev"", ""path"": ""el/dev-*""}]}, {""config_name"": ""en"", ""data_files"": [{""split"": ""test"", ""path"": ""en/test-*""}, {""split"": ""dev"", ""path"": ""en/dev-*""}]}, {""config_name"": ""es"", ""data_files"": [{""split"": ""test"", ""path"": ""es/test-*""}, {""split"": ""dev"", ""path"": ""es/dev-*""}]}, {""config_name"": ""fa"", ""data_files"": [{""split"": ""test"", ""path"": ""fa/test-*""}, {""split"": ""dev"", ""path"": ""fa/dev-*""}]}, {""config_name"": ""fil"", ""data_files"": [{""split"": ""test"", ""path"": ""fil/test-*""}, {""split"": ""dev"", ""path"": ""fil/dev-*""}]}, {""config_name"": ""fr"", ""data_files"": [{""split"": ""test"", ""path"": ""fr/test-*""}, {""split"": ""dev"", ""path"": ""fr/dev-*""}]}, {""config_name"": ""ha"", ""data_files"": [{""split"": ""test"", ""path"": ""ha/test-*""}, {""split"": ""dev"", ""path"": ""ha/dev-*""}]}, {""config_name"": ""he"", ""data_files"": [{""split"": ""test"", ""path"": ""he/test-*""}, {""split"": ""dev"", ""path"": ""he/dev-*""}]}, {""config_name"": ""hi"", ""data_files"": [{""split"": ""test"", ""path"": ""hi/test-*""}, {""split"": ""dev"", ""path"": ""hi/dev-*""}]}, {""config_name"": ""id"", ""data_files"": [{""split"": ""test"", ""path"": ""id/test-*""}, {""split"": ""dev"", ""path"": ""id/dev-*""}]}, {""config_name"": ""ig"", ""data_files"": [{""split"": ""test"", ""path"": ""ig/test-*""}, {""split"": ""dev"", ""path"": ""ig/dev-*""}]}, {""config_name"": ""it"", ""data_files"": [{""split"": ""test"", ""path"": ""it/test-*""}, {""split"": ""dev"", ""path"": ""it/dev-*""}]}, {""config_name"": ""ja"", ""data_files"": [{""split"": ""test"", ""path"": ""ja/test-*""}, {""split"": ""dev"", ""path"": ""ja/dev-*""}]}, {""config_name"": ""ko"", ""data_files"": [{""split"": ""test"", ""path"": ""ko/test-*""}, 
{""split"": ""dev"", ""path"": ""ko/dev-*""}]}, {""config_name"": ""ky"", ""data_files"": [{""split"": ""test"", ""path"": ""ky/test-*""}, {""split"": ""dev"", ""path"": ""ky/dev-*""}]}, {""config_name"": ""lt"", ""data_files"": [{""split"": ""test"", ""path"": ""lt/test-*""}, {""split"": ""dev"", ""path"": ""lt/dev-*""}]}, {""config_name"": ""mg"", ""data_files"": [{""split"": ""test"", ""path"": ""mg/test-*""}, {""split"": ""dev"", ""path"": ""mg/dev-*""}]}, {""config_name"": ""ms"", ""data_files"": [{""split"": ""test"", ""path"": ""ms/test-*""}, {""split"": ""dev"", ""path"": ""ms/dev-*""}]}, {""config_name"": ""ne"", ""data_files"": [{""split"": ""test"", ""path"": ""ne/test-*""}, {""split"": ""dev"", ""path"": ""ne/dev-*""}]}, {""config_name"": ""nl"", ""data_files"": [{""split"": ""test"", ""path"": ""nl/test-*""}, {""split"": ""dev"", ""path"": ""nl/dev-*""}]}, {""config_name"": ""ny"", ""data_files"": [{""split"": ""test"", ""path"": ""ny/test-*""}, {""split"": ""dev"", ""path"": ""ny/dev-*""}]}, {""config_name"": ""pl"", ""data_files"": [{""split"": ""test"", ""path"": ""pl/test-*""}, {""split"": ""dev"", ""path"": ""pl/dev-*""}]}, {""config_name"": ""pt"", ""data_files"": [{""split"": ""test"", ""path"": ""pt/test-*""}, {""split"": ""dev"", ""path"": ""pt/dev-*""}]}, {""config_name"": ""ro"", ""data_files"": [{""split"": ""test"", ""path"": ""ro/test-*""}, {""split"": ""dev"", ""path"": ""ro/dev-*""}]}, {""config_name"": ""ru"", ""data_files"": [{""split"": ""test"", ""path"": ""ru/test-*""}, {""split"": ""dev"", ""path"": ""ru/dev-*""}]}, {""config_name"": ""si"", ""data_files"": [{""split"": ""test"", ""path"": ""si/test-*""}, {""split"": ""dev"", ""path"": ""si/dev-*""}]}, {""config_name"": ""sn"", ""data_files"": [{""split"": ""test"", ""path"": ""sn/test-*""}, {""split"": ""dev"", ""path"": ""sn/dev-*""}]}, {""config_name"": ""so"", ""data_files"": [{""split"": ""test"", ""path"": ""so/test-*""}, {""split"": ""dev"", ""path"": ""so/dev-*""}]}, 
{""config_name"": ""sr"", ""data_files"": [{""split"": ""test"", ""path"": ""sr/test-*""}, {""split"": ""dev"", ""path"": ""sr/dev-*""}]}, {""config_name"": ""sv"", ""data_files"": [{""split"": ""test"", ""path"": ""sv/test-*""}, {""split"": ""dev"", ""path"": ""sv/dev-*""}]}, {""config_name"": ""sw"", ""data_files"": [{""split"": ""test"", ""path"": ""sw/test-*""}, {""split"": ""dev"", ""path"": ""sw/dev-*""}]}, {""config_name"": ""te"", ""data_files"": [{""split"": ""test"", ""path"": ""te/test-*""}, {""split"": ""dev"", ""path"": ""te/dev-*""}]}, {""config_name"": ""tr"", ""data_files"": [{""split"": ""test"", ""path"": ""tr/test-*""}, {""split"": ""dev"", ""path"": ""tr/dev-*""}]}, {""config_name"": ""uk"", ""data_files"": [{""split"": ""test"", ""path"": ""uk/test-*""}, {""split"": ""dev"", ""path"": ""uk/dev-*""}]}, {""config_name"": ""vi"", ""data_files"": [{""split"": ""test"", ""path"": ""vi/test-*""}, {""split"": ""dev"", ""path"": ""vi/dev-*""}]}, {""config_name"": ""yo"", ""data_files"": [{""split"": ""test"", ""path"": ""yo/test-*""}, {""split"": ""dev"", ""path"": ""yo/dev-*""}]}, {""config_name"": ""zh"", ""data_files"": [{""split"": ""test"", ""path"": ""zh/test-*""}, {""split"": ""dev"", ""path"": ""zh/dev-*""}]}], ""tags"": [""argilla""], ""language"": [""en"", ""ar"", ""bn"", ""es"", ""fr"", ""hi"", ""ru"", ""de"", ""id"", ""it"", ""ja"", ""ko"", ""pt"", ""zh"", ""yo"", ""nl"", ""ro"", ""uk"", ""vi"", ""tr"", ""pl"", ""fa"", ""cs"", ""he"", ""el"", ""ms"", ""fil"", ""te"", ""si"", ""ne"", ""ky"", ""sv"", ""lt"", ""sr"", ""mg"", ""so"", ""ha"", ""am"", ""sn"", ""ig"", ""ny"", ""sw""]}","
# Dataset Summary
[Global-MMLU](https://arxiv.org/abs/2412.03304) 🌍 is a multilingual evaluation set spanning 42 languages, including English. This dataset combines machine translations of [MMLU](https://huggingface.co/datasets/cais/mmlu) questions with professional translations and crowd-sourced post-edits.
It also includes cultural sensitivity annotations for a subset of the questions (2850 questions per language) and classifies them as *Culturally Sensitive* (CS) 🗽 or *Culturally Agnostic* (CA) ⚖️. These annotations were collected as part of an open science initiative led by Cohere For AI in collaboration with many external collaborators from both industry and academia.
- **Curated by:** Professional annotators and contributors of [Cohere For AI Community](https://cohere.com/research).
- **Language(s):** 42 languages.
- **License:** [Apache 2.0](https://opensource.org/license/apache-2-0)
**Note:** We also provide a "lite" version of Global MMLU called ["Global-MMLU-Lite"](https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite). This dataset is more balanced, containing 200 samples each for the CS and CA subsets per language, and it covers 15 languages with human translations.
### **Global-MMLU Dataset Family:**
| Name | Explanation |
|------|--------------|
| [Global-MMLU](https://huggingface.co/datasets/CohereForAI/Global-MMLU) | Full Global-MMLU set with translations for all 14K samples including CS and CA subsets|
| [Global-MMLU-Lite](https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite) | Lite version of Global-MMLU with human translated samples in 15 languages and containing 200 samples each for CS and CA subsets per language.|
## Load with Datasets
To load this dataset with `datasets`, you'll first need to install it using `pip install datasets` and then use the following code:
```python
from datasets import load_dataset
# load HF dataset
global_mmlu = load_dataset("CohereForAI/Global-MMLU", 'en')
# can also be used as pandas dataframe
global_mmlu.set_format("pandas")
global_mmlu_test = global_mmlu['test'][:]
global_mmlu_dev = global_mmlu['dev'][:]
```
The columns corresponding to annotations collected from our cultural bias study (i.e. 'required_knowledge', 'time_sensitive', 'reference', 'culture', 'region', 'country') contain a list of values representing annotations from different annotators.
However, to avoid conversion issues when building the HF dataset, these columns are provided as strings in the final dataset.
You can convert these columns back to list of values for easier manipulation as follows:
```python
import ast
# convert stringified lists back to Python lists
global_mmlu_df['required_knowledge'] = global_mmlu_df['required_knowledge'].apply(ast.literal_eval)
```
## Data Fields
The data fields are the same among all splits. A brief description of each field is provided below.
- `sample_id`: A unique identifier for the question.
- `subject`: The main topic the question falls under.
- `subject_category`: The high-level category the subject falls under i.e. STEM/Humanities/Social Sciences/Medical/Business/Other.
- `question`: The translated question from MMLU.
- `option_a`: One of the possible option choices.
- `option_b`: One of the possible option choices.
- `option_c`: One of the possible option choices.
- `option_d`: One of the possible option choices.
- `answer`: The correct answer (A/B/C/D).
- `required_knowledge`: Annotator votes for the knowledge needed to answer the question correctly. Possible values: "cultural", "regional", "dialect" or "none".
- `time_sensitive`: Annotator votes indicating whether the question's answer is time-dependent. Possible values: Yes/No.
- `reference`: Annotations for which part of the question contains cultural/regional/dialect references. Each item in the list is an annotation from a different annotator.
- `culture`: The culture the question belongs to. Each item in the list is an annotation from a different annotator.
- `region`: Geographic region the question is relevant to. Each item in the list is an annotation from a different annotator.
- `country`: Specific country the question pertains to. Each item in the list is an annotation from a different annotator.
- `cultural_sensitivity_label`: Label to indicate if question is culturally sensitive (CS) or culturally agnostic (CA) based on annotator votes.
- `is_annotated`: True/False flag to indicate if sample contains any annotations from our cultural bias study.
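Since each annotation column stores votes from several annotators, a common way to collapse them into a single label is a simple majority vote. A minimal sketch, where the `majority_vote` helper is our own illustration rather than part of the dataset tooling:

```python
import ast
from collections import Counter

def majority_vote(raw: str) -> str:
    """Return the most common label in a stringified vote list (our own helper)."""
    votes = ast.literal_eval(raw)  # e.g. "['none', 'cultural', 'cultural']"
    label, _ = Counter(votes).most_common(1)[0]
    return label

print(majority_vote("['none', 'cultural', 'cultural', 'cultural']"))  # cultural
```

Ties are broken arbitrarily by `Counter.most_common`; for a stricter policy you could require a minimum vote share.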
## Data Splits
The following are the splits of the data:
| Split | No. of instances | Language Coverage |
|-------|------------------|-------------------|
| test | 589,764 | 42 |
| dev | 11,970 | 42 |
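The totals above follow from the per-language counts (14,042 test and 285 dev examples per language) multiplied across the 42 languages, which is easy to verify:

```python
# Each language config contributes the same number of examples per split.
per_language = {"test": 14042, "dev": 285}
n_languages = 42

totals = {split: count * n_languages for split, count in per_language.items()}
print(totals)  # {'test': 589764, 'dev': 11970}
```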
## Data Instances
An example from the `test` set looks as follows:
```json
{'sample_id': 'world_religions/test/170',
'subject': 'world_religions',
'subject_category': 'Humanities',
'question': ' The numen of Augustus referred to which of the following characteristics?',
'option_a': 'Divine power',
'option_b': 'Sexual virility',
'option_c': 'Military acumen',
'option_d': 'Philosophical intellect',
'answer': 'A',
'required_knowledge': ""['none', 'cultural', 'cultural', 'cultural']"",
'time_sensitive': ""['No', 'No', 'No', 'No']"",
'reference': ""['-', '-', {'end': 22, 'label': 'Cultural', 'score': None, 'start': 5}, {'end': 22, 'label': 'Cultural', 'score': None, 'start': 5}]"",
'culture': ""['Western Culture', 'Western Culture', 'Western Culture']"",
'region': ""['North America', 'Europe']"",
'country': ""['Italy']"",
'cultural_sensitivity_label': 'CS',
'is_annotated': True,
}
```
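Note that the `reference` field mixes placeholder strings (`'-'`) with span dictionaries. After `ast.literal_eval`, the spans can be filtered out by type; a small sketch using the example above:

```python
import ast

raw = "['-', '-', {'end': 22, 'label': 'Cultural', 'score': None, 'start': 5}, {'end': 22, 'label': 'Cultural', 'score': None, 'start': 5}]"
items = ast.literal_eval(raw)

# Keep only the annotators who marked an actual character span.
spans = [item for item in items if isinstance(item, dict)]
print(len(spans), spans[0]["label"])  # 2 Cultural
```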
## Statistics
### Annotation Types
The following is the breakdown of CS🗽, CA⚖️ and MA📝 samples in the final dataset.
| Type of Annotation | Instances per language | No. of languages | Total instances
|--------------------|------------------------|------------------|----------------|
| Culturally Sensitive 🗽 | 792 | 42 | 33,264 |
| Culturally Agnostic ⚖️ | 2058 |42 | 86,436 |
| MMLU Annotated 📝| 2850 |42 | 119,700 |
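These totals are consistent with the per-language counts: the 792 CS and 2,058 CA samples per language together make up the 2,850 annotated samples, and each count times 42 languages gives the totals in the table:

```python
per_language = {"CS": 792, "CA": 2058, "MMLU Annotated": 2850}
n_languages = 42

# CS + CA per language equals the annotated subset size.
assert per_language["CS"] + per_language["CA"] == per_language["MMLU Annotated"]

totals = {k: v * n_languages for k, v in per_language.items()}
print(totals)  # {'CS': 33264, 'CA': 86436, 'MMLU Annotated': 119700}
```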
### Languages
The dataset covers 42 languages: 20 high-resource, 9 mid-resource, and 13 low-resource languages. Details about the languages included in the dataset are given below.
| ISO Code | Language | Resources |
|----------|----------|-----------|
| `am` | Amharic | Low |
| `ar` | Arabic (Standard)| High |
| `bn` | Bengali | Mid |
| `de` | German | High |
| `el` | Greek | Mid |
| `en` | English | High |
| `fil` | Filipino | Mid |
| `fr` | French | High |
| `ha` | Hausa | Low |
| `he` | Hebrew | Mid |
| `hi` | Hindi | High |
| `ig` | Igbo | Low |
| `id` | Indonesian | Mid |
| `it` | Italian | High |
| `ja` | Japanese | High |
| `ky` | Kyrgyz | Low |
| `ko` | Korean | Mid |
| `lt` | Lithuanian | Mid |
| `mg` | Malagasy | Low |
| `ms` | Malay | Mid |
| `ne` | Nepali | Low |
| `nl` | Dutch | High |
| `ny` | Chichewa | Low |
| `fa` | Persian | High |
| `pl` | Polish | High |
| `pt` | Portuguese | High |
| `ru` | Russian | High |
| `si` | Sinhala | Low |
| `sn` | Shona | Low |
| `so` | Somali | Low |
| `es` | Spanish | High |
| `sr` | Serbian | High |
| `sw` | Swahili | Low |
| `sv` | Swedish | High |
| `te` | Telugu | Low |
| `tr` | Turkish | High |
| `uk` | Ukrainian | Mid |
| `vi` | Vietnamese | High |
| `yo` | Yorùbá | Low |
| `zh` | Chinese (Simplified) | High |
# Known Limitations
A brief overview of limitations of this dataset is provided below.
- **Language and dialect coverage:** Global-MMLU focuses on 42 languages. However, this is still only a tiny fraction of the world’s linguistic diversity. Future work is needed to extend evaluations beyond these 42 languages and to take into account how technology serves different dialects.
- **Uneven distribution of contributions:** The dataset contains translation post-edits from community volunteers, with a 'long tail' of volunteers making only one or two contributions. Similarly, there is a huge gap between languages with the highest number of contributions and ones with the lowest number of contributions.
- **Toxic or offensive speech:** Our annotation process did not focus on flagging toxic, harmful, or offensive speech, so it is possible that Global-MMLU contains some data that could be considered harmful. We believe this risk is relatively low given the nature of the original MMLU and its focus on examination material.
- **Region Category Assignment:** For the annotation of geographically sensitive questions, we classified regions into six geographic regions (Africa, Asia, Europe, North America, Oceania, and South America). However, based on our discussions, going forward we would recommend switching to the taxonomy proposed by the World Bank, which is more granular and includes separate designations for Central America and Sub-Saharan Africa.
- **Identifying cultural sensitivity does not guarantee cultural inclusion:** While Global-MMLU highlights important limitations in current datasets by identifying gaps in non-Western cultural representation, future work must prioritize the integration of diverse, culturally grounded knowledge to achieve true inclusivity and fairness in multilingual AI evaluation.
# Additional Information
## Provenance
- **Methods Used:** Professional annotations as well as crowd-sourced volunteer annotations.
- **Methodology Details:** We collected cultural bias annotations as well as post-edits of translations for different MMLU questions.
- [Cultural Sensitivity Annotation Platform](https://huggingface.co/spaces/CohereForAI/MMLU-evaluation)
- [Translation Quality Annotation Platform](https://huggingface.co/spaces/CohereForAI/review-mmlu-translations)
- Dates of Collection: May 2024 - Aug 2024
## Dataset Version and Maintenance
- **Maintenance Status:** Actively Maintained
- **Version Details:**
- *Current version:* 1.0
- *Last Update:* 12/2024
- *First Release:* 12/2024
## Authorship
- **Publishing Organization:** [Cohere For AI](https://cohere.com/research)
- **Industry Type:** Not-for-profit - Tech
## Licensing Information
This dataset can be used for any purpose, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.
## Additional Details
For any additional details, please check our paper, [Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation](https://arxiv.org/abs/2412.03304).
## Citation Information
```bibtex
@misc{singh2024globalmmluunderstandingaddressing,
title={Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation},
author={Shivalika Singh and Angelika Romanou and Clémentine Fourrier and David I. Adelani and Jian Gang Ngui and Daniel Vila-Suero and Peerat Limkonchotiwat and Kelly Marchisio and Wei Qi Leong and Yosephine Susanto and Raymond Ng and Shayne Longpre and Wei-Yin Ko and Madeline Smith and Antoine Bosselut and Alice Oh and Andre F. T. Martins and Leshem Choshen and Daphne Ippolito and Enzo Ferrante and Marzieh Fadaee and Beyza Ermis and Sara Hooker},
year={2024},
eprint={2412.03304},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.03304},
}
```"
EunsuKim/CLIcK,"{""task_categories"": [""multiple-choice""], ""language"": [""ko""], ""tags"": [""Culture"", ""Language""], ""size_categories"": [""1K
# CLIcK 🇰🇷🧠
*A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean*
## Introduction 🎉
CLIcK (Cultural and Linguistic Intelligence in Korean) is a comprehensive dataset designed to evaluate cultural and linguistic intelligence in the context of Korean language models. In an era where diverse language models are continually emerging, there is a pressing need for robust evaluation datasets, especially for non-English languages like Korean. CLIcK fills this gap by providing a rich, well-categorized dataset focusing on both cultural and linguistic aspects, enabling a nuanced assessment of Korean language models.
## News 📰
- **[LREC-COLING]** Our paper introducing CLIcK has been accepted to LREC-COLING 2024!🎉
## Dataset Description 📊
The CLIcK benchmark comprises two broad categories: Culture and Language, which are further divided into 11 fine-grained subcategories.
### Categories 📂
- **Language** 🗣️
- Textual Knowledge
- Grammatical Knowledge
- Functional Knowledge
- **Culture** 🌍
- Korean Society
- Korean Tradition
- Korean Politics
- Korean Economy
- Korean Law
- Korean History
- Korean Geography
- Korean Popular Culture (K-Pop)
### Construction 🏗️
CLIcK was developed using two human-centric approaches:
1. Reclassification of **official and well-designed exam data** into our defined categories.
2. Generation of questions using ChatGPT, based on **official educational materials** from the Korean Ministry of Justice, followed by our own validation process.
### Structure 🏛️
The dataset is organized as follows, with each subcategory containing relevant JSON files:
```
📦CLIcK
└─ Dataset
├─ Culture
│ ├─ [Each cultural subcategory with associated JSON files]
└─ Language
├─ [Each language subcategory with associated JSON files]
```
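The layout above can be traversed programmatically. Below is a minimal sketch of one way to collect every question, assuming each JSON file holds a list of question dicts (the exact per-question schema is not shown here, and `load_click_questions` is a hypothetical helper, not part of the dataset):

```python
import json
from pathlib import Path

def load_click_questions(root):
    """Collect all entries from the CLIcK directory layout.

    `root` points at the `Dataset` folder; files are assumed to sit
    directly inside each subcategory directory, so the category and
    subcategory names can be read off the path.
    """
    questions = []
    for path in sorted(Path(root).rglob("*.json")):
        category, subcategory = path.parts[-3], path.parts[-2]
        with open(path, encoding="utf-8") as f:
            for item in json.load(f):
                # Tag each question with its position in the hierarchy.
                item["category"] = category
                item["subcategory"] = subcategory
                questions.append(item)
    return questions
```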
### Exam Code Descriptions 📜
- KIIP: Korea Immigration & Integration Program ([Website](https://www.immigration.go.kr))
- CSAT: College Scholastic Ability Test for Korean ([Website](https://www.suneung.re.kr/))
- Kedu: Test of Teaching Korean as a Foreign Language exams ([Website](https://www.q-net.or.kr/man001.do?gSite=L&gId=36))
- PSE: Public Service Exam for 9th grade
- TOPIK: Test of Proficiency in Korean ([Website](https://www.topik.go.kr/))
- KHB: Korean History Exam Basic ([Website](https://www.historyexam.go.kr/))
- PSAT: Public Service Aptitude Test in Korea
## Results
| Models | Average Accuracy (Korean Culture) | Average Accuracy (Korean Language) |
|-------------------|-----------------------------------|------------------------------------|
| Polyglot-Ko 1.3B | 32.71% | 22.88% |
| Polyglot-Ko 3.8B | 32.90% | 22.38% |
| Polyglot-Ko 5.8B | 33.14% | 23.27% |
| Polyglot-Ko 12.8B | 33.40% | 22.24% |
| KULLM 5.8B | 33.79% | 23.50% |
| KULLM 12.8B | 33.51% | 23.78% |
| KoAlpaca 5.8B | 32.33% | 23.87% |
| KoAlpaca 12.8B | 33.80% | 22.42% |
| LLaMA-Ko 7B | 33.26% | 25.69% |
| LLaMA 7B | 35.44% | **27.17%** |
| LLaMA 13B | **36.22%** | 26.71% |
| GPT-3.5 | 49.30% | 42.32% |
| Claude2 | **51.72%** | **45.39%** |
## Dataset Link 🔗
The CLIcK dataset is available on the Hugging Face Hub: [CLIcK Dataset](https://huggingface.co/datasets/EunsuKim/CLIcK)
## Citation 📝
If you use CLIcK in your research, please cite our paper:
```bibtex
@misc{kim2024click,
title={CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean},
author={Eunsu Kim and Juyoung Suk and Philhoon Oh and Haneul Yoo and James Thorne and Alice Oh},
year={2024},
eprint={2403.06412},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact 📧
For any questions or inquiries, please contact [kes0317@kaist.ac.kr](mailto:kes0317@kaist.ac.kr)."
wikimedia/wit_base,"{""annotations_creators"": [""machine-generated""], ""language_creators"": [""found""], ""language"": [""af"", ""an"", ""ar"", ""arz"", ""ast"", ""az"", ""azb"", ""ba"", ""bar"", ""be"", ""bg"", ""bn"", ""br"", ""bs"", ""ca"", ""ce"", ""ceb"", ""ckb"", ""cs"", ""cv"", ""cy"", ""da"", ""de"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fil"", ""fr"", ""fy"", ""ga"", ""gl"", ""hi"", ""hr"", ""hsb"", ""ht"", ""hu"", ""hy"", ""ia"", ""id"", ""io"", ""is"", ""it"", ""iw"", ""ja"", ""jv"", ""ka"", ""kk"", ""kn"", ""ko"", ""la"", ""lah"", ""lb"", ""lmo"", ""lt"", ""lv"", ""mg"", ""mk"", ""ml"", ""mn"", ""mr"", ""ms"", ""my"", ""nan"", ""nds"", ""ne"", ""nl"", ""nn"", ""no"", ""nv"", ""oc"", ""pa"", ""pl"", ""pt"", ""qu"", ""ro"", ""ru"", ""sco"", ""si"", ""sk"", ""sl"", ""sq"", ""sr"", ""sv"", ""sw"", ""ta"", ""te"", ""tg"", ""th"", ""tr"", ""tt"", ""uk"", ""ur"", ""uz"", ""vec"", ""vi"", ""vo"", ""war"", ""xmf"", ""yue"", ""zh""], ""license"": [""cc-by-sa-4.0""], ""multilinguality"": [""multilingual""], ""size_categories"": [""1M
> The core training data is taken from the Wikipedia Image-Text (WIT) Dataset, a large curated set of more than 37 million image-text associations extracted from Wikipedia articles in 108 languages that was recently released by Google Research.
>
> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images. However, due to licensing and data volume issues, the Google dataset only provides the image name and corresponding URL for download and not the raw image files.
>
> Getting easy access to the image files is crucial for participants to successfully develop competitive models. Therefore, today, the Wikimedia Research team is releasing its first large image dataset. It contains more than six million image files from Wikipedia articles in 100+ languages, which correspond to almost [1] all captioned images in the WIT dataset. Image files are provided at a 300-px resolution, a size that is suitable for most of the learning frameworks used to classify and analyze images.
> [1] We are publishing all images having a non-null “reference description” in the WIT dataset. For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the RetinaFace detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are candidate for deletion on Commons from the dataset.
**Note**: Compared to [Google's version](https://huggingface.co/datasets/google/wit), which has contents of one Wikipedia page per data sample, this version groups contents of all Wikipedia pages available in different languages for the image in one single data sample to avoid duplication of image bytes.
### Supported Tasks and Leaderboards
- `image-captioning`: This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image.
- `text-retrieval`: The goal in this task is to build a model that retrieves the text (`caption_title_and_reference_description`) closest to an image. The leaderboard for this task can be found [here](https://paperswithcode.com/sota/text-image-retrieval-on-wit). This task also has a competition on [Kaggle](https://www.kaggle.com/c/wikipedia-image-caption).
In these tasks, any combination of the `caption_reference_description`, `caption_attribution_description` and `caption_alt_text_description` fields can be used as the input text/caption.
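One simple way to combine these fields is a fixed fallback order per language entry. The sketch below is one possible policy, not a prescribed one; `pick_caption` is a hypothetical helper operating on the parallel lists stored in `wit_features` (see the data instance under "Data Instances"):

```python
def pick_caption(features, index):
    """Return the first non-empty caption for language entry `index`
    of a `wit_features` dict, trying fields in a preference order.

    `features` holds parallel lists, one slot per language.
    """
    preference = (
        "caption_reference_description",
        "caption_alt_text_description",
        "caption_title_and_reference_description",
    )
    for field in preference:
        value = features[field][index]
        if value:  # skips both None and empty strings
            return value
    return None
```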
### Languages
The dataset contains examples from all Wikipedia languages.
## Dataset Structure
### Data Instances
Each instance is an image, its representation in bytes, a pre-computed embedding, and the set of captions attached to the image in Wikipedia.
```
{
'image': ,
'image_url': 'https://upload.wikimedia.org/wikipedia/commons/8/8b/Scolopendra_gigantea.jpg',
'embedding': [1.4784087, 2.8710432, 0.0, 0.51603067, ..., 10.266883, 0.51142216, 0.0, 2.3464653],
'metadata_url': 'http://commons.wikimedia.org/wiki/File:Scolopendra_gigantea.jpg',
'original_height': 3000,
'original_width': 4000,
'mime_type': 'image/jpeg',
'caption_attribution_description': 'English: Puerto Rican Giant Centipede, Scolopendra gigantea; Vieques, Puerto Rico Slovenčina: Stonožka obrovská, Scolopendra gigantea; Vieques, Portoriko',
'wit_features': {
'language': ['ro', 'vi', 'sk', ..., 'nl', 'th', 'lv'],
'page_url': ['https://ro.wikipedia.org/wiki/Scolopendra_gigantea', 'https://vi.wikipedia.org/wiki/Scolopendra_gigantea', 'https://sk.wikipedia.org/wiki/Scolopendra_gigantea', ..., 'https://nl.wikipedia.org/wiki/Scolopendra_gigantea', 'https://th.wikipedia.org/wiki/%E0%B8%95%E0%B8%B0%E0%B8%82%E0%B8%B2%E0%B8%9A%E0%B8%A2%E0%B8%B1%E0%B8%81%E0%B8%A9%E0%B9%8C%E0%B8%82%E0%B8%B2%E0%B9%80%E0%B8%AB%E0%B8%A5%E0%B8%B7%E0%B8%AD%E0%B8%87%E0%B9%80%E0%B8%9B%E0%B8%A3%E0%B8%B9', 'https://lv.wikipedia.org/wiki/Skolopendru_dzimta'],
'attribution_passes_lang_id': [True, True, True, ..., True, True, True],
'caption_alt_text_description': [None, None, None, ..., 'Scolopendra gigantea', None, 'Milzu skolopendra (Scolopendra gigantea)'],
'caption_reference_description': [None, None, None, ..., None, None, 'Milzu skolopendra (Scolopendra gigantea)'],
'caption_title_and_reference_description': [None, 'Scolopendra gigantea [SEP] ', None, ..., 'Scolopendra gigantea [SEP] ', None, 'Skolopendru dzimta [SEP] Milzu skolopendra (Scolopendra gigantea)'],
'context_page_description': ['Scolopendra gigantea este un miriapod din clasa Chilopoda, fiind cel mai mare reprezentant al genului Scolopendra. Adultul poate atinge o lungime de 26 cm, uneori depășind 30 cm. Această specie habitează în regiunile de nord și de vest a Americii de Sud, pe insulele Trinidad, insulele Virgine, Jamaica Hispaniola ș.a. Localnicii denumesc scolopendra chilopodul gigant galben și chilopodul gigant amazonian.', 'Scolopendra gigantea là đại diện lớn nhất của chi Scolopendra nói riêng và cả lớp rết nói chung, thường đạt độ dài 26 cm và có thể vượt quá 30 cm. Sinh sống ở khu vực phía bắc và tây của Nam Mỹ và các đảo Trinidad, Puerto Rico, Saint Thomas, U.S. Virgin Islands, Jamaica, và Hispaniola.', 'Scolopendra gigantea, starší slovenský nazov: štípavica veľká, je živočích z rodu Scolopendra, s veľkosťou do 30 cm.', ..., 'Scolopendra gigantea is een tijgerduizendpoot uit Zuid-Amerika. De soort jaagt onder andere op grote geleedpotigen, amfibieën, reptielen en kleine zoogdieren. Het is voor zover bekend de grootste niet uitgestorven duizendpoot ter wereld.', 'ตะขาบยักษ์ขาเหลืองเปรู หรือ ตะขาบยักษ์อเมซอน เป็นตะขาบชนิดที่มีขนาดใหญ่ที่สุดในสกุล Scolopendra โดยปกติเมื่อโตเต็มที่จะยาว 26 เซนติเมตร แต่บางครั้งก็สามารถโตได้ถึง 30 เซนติเมตร ตะขาบชนิดนี้อาศัยอยู่ทางแถบเหนือและตะวันตกของทวีปอเมริกาใต้ และตามเกาะแก่งของประเทศตรินิแดดและจาไมกา เป็นสัตว์กินเนื้อ โดยกินจิ้งจก, กบ, นก, หนู และแม้แต่ค้างคาวเป็นอาหาร และขึ้นชื่อในเรื่องความดุร้าย', 'Skolpendru dzimta pieder pie simtkāju kārtas. Ap 400 dzimtas sugas sastopamas visā pasaulē, īpaši subtropu un tropu apgabalos. Mitinās augsnē, nobirušās lapās, plaisās, spraugās.'],
'context_section_description': [None, 'Scolopendra gigantea (còn được gọi là Rết chân vàng khổng lồ Peru và Rết khổng lồ Amazon) là đại diện lớn nhất của chi Scolopendra nói riêng và cả lớp rết nói chung, thường đạt độ dài 26\xa0cm (10\xa0in) và có thể vượt quá 30\xa0cm (12\xa0in). Sinh sống ở khu vực phía bắc và tây của Nam Mỹ và các đảo Trinidad, Puerto Rico, Saint Thomas, U.S. Virgin Islands, Jamaica, và Hispaniola.', None, ..., 'Scolopendra gigantea is een tijgerduizendpoot uit Zuid-Amerika. De soort jaagt onder andere op grote geleedpotigen, amfibieën, reptielen en kleine zoogdieren. Het is voor zover bekend de grootste niet uitgestorven duizendpoot ter wereld.', None, 'Skolpendru dzimta (Scolopendridae) pieder pie simtkāju kārtas. Ap 400 dzimtas sugas sastopamas visā pasaulē, īpaši subtropu un tropu apgabalos. Mitinās augsnē, nobirušās lapās, plaisās, spraugās.'],
'hierarchical_section_title': ['Scolopendra gigantea', 'Scolopendra gigantea', 'Scolopendra gigantea', ..., 'Scolopendra gigantea', 'ตะขาบยักษ์ขาเหลืองเปรู', 'Skolopendru dzimta'],
'is_main_image': [True, True, True, ..., True, True, True],
'page_title': ['Scolopendra gigantea', 'Scolopendra gigantea', 'Scolopendra gigantea', ..., 'Scolopendra gigantea', 'ตะขาบยักษ์ขาเหลืองเปรู', 'Skolopendru dzimta'],
'section_title': [None, None, None, ..., None, None, None]
}
}
```
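Because `wit_features` stores parallel lists (one slot per language), it can be convenient to regroup them into one record per language before further processing. A minimal sketch, assuming all lists share the same length (`wit_features_by_language` is a hypothetical helper):

```python
def wit_features_by_language(wit_features):
    """Turn the parallel lists of a `wit_features` dict into a list of
    per-language dicts, one per entry in the `language` list."""
    keys = list(wit_features)
    n = len(wit_features["language"])
    return [{key: wit_features[key][i] for key in keys} for i in range(n)]
```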
**Note**: The dataset is stored in Parquet for better performance. This dataset was generated from the original files using [this script](wit_base/blob/main/scripts/wit.py). Additionally, 120 examples from the original files have one or more of the following fields incorrectly formatted: `original_height`, `original_width`, `mime_type` and `caption_attribution_description`. The fixed versions of these examples, used by the generation script, can be found [here](wit_base/blob/main/scripts/corrected_examples.py).
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image, resized to a width of 300 px while preserving its aspect ratio. Note that when accessing the image column (`dataset[0]["image"]`) the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `image_url`: URL to wikipedia image
- `embedding`: Precomputed image embedding. Each image is described with a 2048-dimensional signature extracted from the second-to-last layer of a [ResNet-50](https://arxiv.org/abs/1512.03385) neural network trained with [ImageNet](https://www.image-net.org/) data. These embeddings contain rich information about the image content and layout, in a compact form.
- `metadata_url`: URL to wikimedia page containing the image and the metadata
- `original_height`: Original image height before resizing
- `original_width`: Original image width before resizing
- `mime_type`: Mime type associated to the image
- `caption_attribution_description`: This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias.
- `wit_features`: Sequence of captions for the image with language, page URL, information about the page, caption text, etc.
- `language`: Language code depicting wikipedia language of the page
- `page_url`: URL to wikipedia page
  - `attribution_passes_lang_id`: Whether the `language` field matches the attribution language (written in the prefix of the attribution description).
- `caption_alt_text_description`: This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers
- `caption_reference_description`: This is the caption that is visible on the wikipedia page directly below the image.
- `caption_title_and_reference_description`: Concatenation of `page_title` and `caption_reference_description`.
- `context_page_description`: Corresponds to the short description of the page. It provides a concise explanation of the scope of the page.
- `context_section_description`: Text within the image's section
- `hierarchical_section_title`: Hierarchical section's title
- `is_main_image`: Flag determining if the image is the first image of the page. Usually displayed on the top-right part of the page when using web browsers.
- `page_changed_recently`: [More Information Needed]
- `page_title`: Wikipedia page's title
- `section_title`: Section's title
Figure: WIT annotation example.
Details on the field content can be found directly in the [paper, figure 5 and table 12.](https://arxiv.org/abs/2103.01913)
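The 2048-dimensional `embedding` field lends itself to simple nearest-neighbour retrieval between examples. A minimal pure-Python sketch of cosine-similarity ranking follows (hypothetical helpers; in practice one would vectorize this with NumPy or use an approximate-nearest-neighbour index):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors, e.g. the
    2048-dim `embedding` field of two examples."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k_similar(query, embeddings, k=5):
    """Indices of the k embeddings most similar to `query`,
    best match first."""
    scores = [(cosine_similarity(query, e), i) for i, e in enumerate(embeddings)]
    return [i for _, i in sorted(scores, reverse=True)[:k]]
```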
### Data Splits
All data is held in the `train` split, with a total of 6,477,255 examples.
## Dataset Creation
### Curation Rationale
From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/):
> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images.
> Getting easy access to the image files is crucial for participants to successfully develop competitive models.
> With this large release of visual data, we aim to help the competition participants—as well as researchers and practitioners who are interested in working with Wikipedia images—find and download the large number of image files associated with the challenge, in a compact form.
### Source Data
#### Initial Data Collection and Normalization
From the [paper, section 3.1](https://arxiv.org/abs/2103.01913):
> We started with all Wikipedia content pages (i.e., ignoring other pages that have discussions, comments and such). These number about ~124M pages across 279 languages.
#### Who are the source language producers?
Text was extracted from Wikipedia.
### Annotations
#### Annotation process
WIT was constructed using an automatic process; however, it was human-validated.
From the [paper, section 3.7](https://arxiv.org/abs/2103.01913):
> To further verify the quality of the WIT dataset we performed a study using (crowd-sourced) human annotators. As seen in Fig. 3, we asked raters to answer 3 questions. Given an image and the page title, raters first evaluate the quality of the attribution description and reference description in the first two questions (order randomized). The third question understands the contextual quality of these text descriptions given the page description and caption. Each response is on a 3-point scale: "Yes" if the text perfectly describes the image, "Maybe" if it is sufficiently explanatory and "No" if it is irrelevant or the image is inappropriate.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/#FN1):
> For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the [RetinaFace](https://arxiv.org/abs/1905.00641) detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are [candidate for deletion](https://commons.wikimedia.org/wiki/Commons:Deletion_requests) on Commons from the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the [paper, section 3.4](https://arxiv.org/abs/2103.01913):
> Lastly we found that certain image-text pairs occurred very frequently. These were often generic images that did not have much to do with the main article page. Common examples included flags, logos, maps, insignia and such. To prevent biasing the data, we heavily under-sampled all such images.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Miriam Redi, Fabian Kaelin and Tiziano Piccardi.
### Licensing Information
[CC BY-SA 4.0 international license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```bibtex
@article{srinivasan2021wit,
title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
journal={arXiv preprint arXiv:2103.01913},
year={2021}
}
```
### Contributions
Thanks to [@nateraw](https://github.com/nateraw), [@yjernite](https://github.com/yjernite) and [@mariosasko](https://github.com/mariosasko) for adding this dataset."
cis-lmu/GlotCC-V1,"{""license"": ""cc0-1.0"", ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/*/*.parquet""}]}, {""config_name"": ""eng-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/eng-Latn/*.parquet""}]}, {""config_name"": ""rus-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rus-Cyrl/*.parquet""}]}, {""config_name"": ""fra-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fra-Latn/*.parquet""}]}, {""config_name"": ""spa-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/spa-Latn/*.parquet""}]}, {""config_name"": ""deu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/deu-Latn/*.parquet""}]}, {""config_name"": ""pol-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pol-Latn/*.parquet""}]}, {""config_name"": ""vie-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/vie-Latn/*.parquet""}]}, {""config_name"": ""ita-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ita-Latn/*.parquet""}]}, {""config_name"": ""nld-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nld-Latn/*.parquet""}]}, {""config_name"": ""por-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/por-Latn/*.parquet""}]}, {""config_name"": ""ces-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ces-Latn/*.parquet""}]}, {""config_name"": ""fas-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fas-Arab/*.parquet""}]}, {""config_name"": ""tur-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tur-Latn/*.parquet""}]}, {""config_name"": ""tha-Thai"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tha-Thai/*.parquet""}]}, {""config_name"": ""ind-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ind-Latn/*.parquet""}]}, {""config_name"": ""cmn-Hani"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cmn-Hani/*.parquet""}]}, 
{""config_name"": ""hun-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hun-Latn/*.parquet""}]}, {""config_name"": ""ell-Grek"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ell-Grek/*.parquet""}]}, {""config_name"": ""swe-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/swe-Latn/*.parquet""}]}, {""config_name"": ""ron-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ron-Latn/*.parquet""}]}, {""config_name"": ""kor-Hang"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kor-Hang/*.parquet""}]}, {""config_name"": ""ukr-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ukr-Cyrl/*.parquet""}]}, {""config_name"": ""arb-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/arb-Arab/*.parquet""}]}, {""config_name"": ""fin-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fin-Latn/*.parquet""}]}, {""config_name"": ""slk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/slk-Latn/*.parquet""}]}, {""config_name"": ""bul-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bul-Cyrl/*.parquet""}]}, {""config_name"": ""dan-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dan-Latn/*.parquet""}]}, {""config_name"": ""heb-Hebr"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/heb-Hebr/*.parquet""}]}, {""config_name"": ""nob-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nob-Latn/*.parquet""}]}, {""config_name"": ""cat-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cat-Latn/*.parquet""}]}, {""config_name"": ""lit-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lit-Latn/*.parquet""}]}, {""config_name"": ""ben-Beng"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ben-Beng/*.parquet""}]}, {""config_name"": ""slv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/slv-Latn/*.parquet""}]}, {""config_name"": ""azj-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/azj-Latn/*.parquet""}]}, {""config_name"": ""ekk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ekk-Latn/*.parquet""}]}, {""config_name"": ""lvs-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lvs-Latn/*.parquet""}]}, {""config_name"": ""hrv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hrv-Latn/*.parquet""}]}, {""config_name"": ""jpn-Jpan"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/jpn-Jpan/*.parquet""}]}, {""config_name"": ""tam-Taml"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tam-Taml/*.parquet""}]}, {""config_name"": ""srp-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/srp-Cyrl/*.parquet""}]}, {""config_name"": ""npi-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/npi-Deva/*.parquet""}]}, {""config_name"": ""kat-Geor"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kat-Geor/*.parquet""}]}, {""config_name"": ""hin-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hin-Deva/*.parquet""}]}, {""config_name"": ""hye-Armn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hye-Armn/*.parquet""}]}, {""config_name"": ""zsm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zsm-Latn/*.parquet""}]}, {""config_name"": ""als-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/als-Latn/*.parquet""}]}, {""config_name"": ""mkd-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mkd-Cyrl/*.parquet""}]}, {""config_name"": ""mal-Mlym"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mal-Mlym/*.parquet""}]}, {""config_name"": ""kiu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kiu-Latn/*.parquet""}]}, {""config_name"": ""urd-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/urd-Arab/*.parquet""}]}, {""config_name"": ""mya-Mymr"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mya-Mymr/*.parquet""}]}, 
{""config_name"": ""glg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/glg-Latn/*.parquet""}]}, {""config_name"": ""isl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/isl-Latn/*.parquet""}]}, {""config_name"": ""mar-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mar-Deva/*.parquet""}]}, {""config_name"": ""eus-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/eus-Latn/*.parquet""}]}, {""config_name"": ""kaz-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kaz-Cyrl/*.parquet""}]}, {""config_name"": ""tel-Telu"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tel-Telu/*.parquet""}]}, {""config_name"": ""lat-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lat-Latn/*.parquet""}]}, {""config_name"": ""khk-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/khk-Cyrl/*.parquet""}]}, {""config_name"": ""khm-Khmr"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/khm-Khmr/*.parquet""}]}, {""config_name"": ""bel-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bel-Cyrl/*.parquet""}]}, {""config_name"": ""kan-Knda"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kan-Knda/*.parquet""}]}, {""config_name"": ""bos-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bos-Latn/*.parquet""}]}, {""config_name"": ""guj-Gujr"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/guj-Gujr/*.parquet""}]}, {""config_name"": ""sin-Sinh"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sin-Sinh/*.parquet""}]}, {""config_name"": ""uzn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/uzn-Latn/*.parquet""}]}, {""config_name"": ""uzn-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/uzn-Cyrl/*.parquet""}]}, {""config_name"": ""fil-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fil-Latn/*.parquet""}]}, {""config_name"": ""pan-Guru"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/pan-Guru/*.parquet""}]}, {""config_name"": ""nno-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nno-Latn/*.parquet""}]}, {""config_name"": ""cym-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cym-Latn/*.parquet""}]}, {""config_name"": ""afr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/afr-Latn/*.parquet""}]}, {""config_name"": ""kir-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kir-Cyrl/*.parquet""}]}, {""config_name"": ""tgk-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tgk-Cyrl/*.parquet""}]}, {""config_name"": ""swh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/swh-Latn/*.parquet""}]}, {""config_name"": ""epo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/epo-Latn/*.parquet""}]}, {""config_name"": ""pbt-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pbt-Arab/*.parquet""}]}, {""config_name"": ""gle-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gle-Latn/*.parquet""}]}, {""config_name"": ""tat-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tat-Cyrl/*.parquet""}]}, {""config_name"": ""anp-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/anp-Deva/*.parquet""}]}, {""config_name"": ""ory-Orya"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ory-Orya/*.parquet""}]}, {""config_name"": ""uig-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/uig-Arab/*.parquet""}]}, {""config_name"": ""ary-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ary-Arab/*.parquet""}]}, {""config_name"": ""lao-Laoo"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lao-Laoo/*.parquet""}]}, {""config_name"": ""mlt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mlt-Latn/*.parquet""}]}, {""config_name"": ""amh-Ethi"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/amh-Ethi/*.parquet""}]}, 
{""config_name"": ""asm-Beng"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/asm-Beng/*.parquet""}]}, {""config_name"": ""bak-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bak-Cyrl/*.parquet""}]}, {""config_name"": ""div-Thaa"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/div-Thaa/*.parquet""}]}, {""config_name"": ""fao-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fao-Latn/*.parquet""}]}, {""config_name"": ""bod-Tibt"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bod-Tibt/*.parquet""}]}, {""config_name"": ""som-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/som-Latn/*.parquet""}]}, {""config_name"": ""ydd-Hebr"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ydd-Hebr/*.parquet""}]}, {""config_name"": ""ckb-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ckb-Arab/*.parquet""}]}, {""config_name"": ""fry-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fry-Latn/*.parquet""}]}, {""config_name"": ""kmr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kmr-Latn/*.parquet""}]}, {""config_name"": ""snd-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/snd-Arab/*.parquet""}]}, {""config_name"": ""ast-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ast-Latn/*.parquet""}]}, {""config_name"": ""gla-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gla-Latn/*.parquet""}]}, {""config_name"": ""oci-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/oci-Latn/*.parquet""}]}, {""config_name"": ""hau-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hau-Latn/*.parquet""}]}, {""config_name"": ""plt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/plt-Latn/*.parquet""}]}, {""config_name"": ""tuk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tuk-Latn/*.parquet""}]}, {""config_name"": ""ltz-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/ltz-Latn/*.parquet""}]}, {""config_name"": ""arz-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/arz-Arab/*.parquet""}]}, {""config_name"": ""hyw-Armn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hyw-Armn/*.parquet""}]}, {""config_name"": ""san-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/san-Deva/*.parquet""}]}, {""config_name"": ""grc-Grek"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/grc-Grek/*.parquet""}]}, {""config_name"": ""cos-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cos-Latn/*.parquet""}]}, {""config_name"": ""hat-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hat-Latn/*.parquet""}]}, {""config_name"": ""mww-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mww-Latn/*.parquet""}]}, {""config_name"": ""jav-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/jav-Latn/*.parquet""}]}, {""config_name"": ""sun-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sun-Latn/*.parquet""}]}, {""config_name"": ""bew-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bew-Latn/*.parquet""}]}, {""config_name"": ""fro-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fro-Latn/*.parquet""}]}, {""config_name"": ""und-Mong"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/und-Mong/*.parquet""}]}, {""config_name"": ""mri-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mri-Latn/*.parquet""}]}, {""config_name"": ""kin-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kin-Latn/*.parquet""}]}, {""config_name"": ""xho-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xho-Latn/*.parquet""}]}, {""config_name"": ""ibo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ibo-Latn/*.parquet""}]}, {""config_name"": ""kal-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kal-Latn/*.parquet""}]}, 
{""config_name"": ""bre-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bre-Latn/*.parquet""}]}, {""config_name"": ""yor-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yor-Latn/*.parquet""}]}, {""config_name"": ""ceb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ceb-Latn/*.parquet""}]}, {""config_name"": ""smo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/smo-Latn/*.parquet""}]}, {""config_name"": ""tir-Ethi"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tir-Ethi/*.parquet""}]}, {""config_name"": ""crh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/crh-Latn/*.parquet""}]}, {""config_name"": ""lim-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lim-Latn/*.parquet""}]}, {""config_name"": ""haw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/haw-Latn/*.parquet""}]}, {""config_name"": ""sot-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sot-Latn/*.parquet""}]}, {""config_name"": ""glk-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/glk-Arab/*.parquet""}]}, {""config_name"": ""gsw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gsw-Latn/*.parquet""}]}, {""config_name"": ""sna-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sna-Latn/*.parquet""}]}, {""config_name"": ""nya-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nya-Latn/*.parquet""}]}, {""config_name"": ""zul-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zul-Latn/*.parquet""}]}, {""config_name"": ""pnb-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pnb-Arab/*.parquet""}]}, {""config_name"": ""tyv-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tyv-Cyrl/*.parquet""}]}, {""config_name"": ""nds-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nds-Latn/*.parquet""}]}, {""config_name"": ""srd-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/srd-Latn/*.parquet""}]}, {""config_name"": ""sme-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sme-Latn/*.parquet""}]}, {""config_name"": ""san-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/san-Latn/*.parquet""}]}, {""config_name"": ""arg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/arg-Latn/*.parquet""}]}, {""config_name"": ""pap-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pap-Latn/*.parquet""}]}, {""config_name"": ""mhr-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mhr-Cyrl/*.parquet""}]}, {""config_name"": ""srp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/srp-Latn/*.parquet""}]}, {""config_name"": ""hsb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hsb-Latn/*.parquet""}]}, {""config_name"": ""chv-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/chv-Cyrl/*.parquet""}]}, {""config_name"": ""vec-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/vec-Latn/*.parquet""}]}, {""config_name"": ""roh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/roh-Latn/*.parquet""}]}, {""config_name"": ""gaz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gaz-Latn/*.parquet""}]}, {""config_name"": ""che-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/che-Cyrl/*.parquet""}]}, {""config_name"": ""sah-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sah-Cyrl/*.parquet""}]}, {""config_name"": ""uig-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/uig-Cyrl/*.parquet""}]}, {""config_name"": ""sdh-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sdh-Arab/*.parquet""}]}, {""config_name"": ""azb-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/azb-Arab/*.parquet""}]}, {""config_name"": ""lus-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lus-Latn/*.parquet""}]}, 
{""config_name"": ""wln-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wln-Latn/*.parquet""}]}, {""config_name"": ""orv-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/orv-Cyrl/*.parquet""}]}, {""config_name"": ""sco-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sco-Latn/*.parquet""}]}, {""config_name"": ""scn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/scn-Latn/*.parquet""}]}, {""config_name"": ""cnh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cnh-Latn/*.parquet""}]}, {""config_name"": ""crh-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/crh-Cyrl/*.parquet""}]}, {""config_name"": ""mai-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mai-Deva/*.parquet""}]}, {""config_name"": ""syc-Syrc"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/syc-Syrc/*.parquet""}]}, {""config_name"": ""run-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/run-Latn/*.parquet""}]}, {""config_name"": ""bar-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bar-Latn/*.parquet""}]}, {""config_name"": ""rue-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rue-Cyrl/*.parquet""}]}, {""config_name"": ""gom-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gom-Deva/*.parquet""}]}, {""config_name"": ""bpy-Beng"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bpy-Beng/*.parquet""}]}, {""config_name"": ""nrm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nrm-Latn/*.parquet""}]}, {""config_name"": ""uig-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/uig-Latn/*.parquet""}]}, {""config_name"": ""urd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/urd-Latn/*.parquet""}]}, {""config_name"": ""bxr-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bxr-Cyrl/*.parquet""}]}, {""config_name"": ""nap-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/nap-Latn/*.parquet""}]}, {""config_name"": ""hbo-Hebr"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hbo-Hebr/*.parquet""}]}, {""config_name"": ""ina-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ina-Latn/*.parquet""}]}, {""config_name"": ""ars-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ars-Arab/*.parquet""}]}, {""config_name"": ""pcm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pcm-Latn/*.parquet""}]}, {""config_name"": ""dag-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dag-Latn/*.parquet""}]}, {""config_name"": ""vls-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/vls-Latn/*.parquet""}]}, {""config_name"": ""kbd-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kbd-Cyrl/*.parquet""}]}, {""config_name"": ""vro-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/vro-Latn/*.parquet""}]}, {""config_name"": ""mni-Beng"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mni-Beng/*.parquet""}]}, {""config_name"": ""new-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/new-Deva/*.parquet""}]}, {""config_name"": ""ike-Cans"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ike-Cans/*.parquet""}]}, {""config_name"": ""gmh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gmh-Latn/*.parquet""}]}, {""config_name"": ""gug-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gug-Latn/*.parquet""}]}, {""config_name"": ""bcl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bcl-Latn/*.parquet""}]}, {""config_name"": ""tcy-Knda"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tcy-Knda/*.parquet""}]}, {""config_name"": ""shn-Mymr"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/shn-Mymr/*.parquet""}]}, {""config_name"": ""xmf-Geor"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xmf-Geor/*.parquet""}]}, 
{""config_name"": ""frp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/frp-Latn/*.parquet""}]}, {""config_name"": ""fur-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fur-Latn/*.parquet""}]}, {""config_name"": ""ilo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ilo-Latn/*.parquet""}]}, {""config_name"": ""eml-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/eml-Latn/*.parquet""}]}, {""config_name"": ""kha-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kha-Latn/*.parquet""}]}, {""config_name"": ""lmo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lmo-Latn/*.parquet""}]}, {""config_name"": ""udm-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/udm-Cyrl/*.parquet""}]}, {""config_name"": ""min-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/min-Latn/*.parquet""}]}, {""config_name"": ""abk-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/abk-Cyrl/*.parquet""}]}, {""config_name"": ""bho-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bho-Deva/*.parquet""}]}, {""config_name"": ""bbc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bbc-Latn/*.parquet""}]}, {""config_name"": ""zea-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zea-Latn/*.parquet""}]}, {""config_name"": ""lij-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lij-Latn/*.parquet""}]}, {""config_name"": ""smj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/smj-Latn/*.parquet""}]}, {""config_name"": ""lfn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lfn-Latn/*.parquet""}]}, {""config_name"": ""yua-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yua-Latn/*.parquet""}]}, {""config_name"": ""ctd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ctd-Latn/*.parquet""}]}, {""config_name"": ""ido-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/ido-Latn/*.parquet""}]}, {""config_name"": ""cnr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cnr-Latn/*.parquet""}]}, {""config_name"": ""sma-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sma-Latn/*.parquet""}]}, {""config_name"": ""szl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/szl-Latn/*.parquet""}]}, {""config_name"": ""kum-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kum-Cyrl/*.parquet""}]}, {""config_name"": ""ksh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ksh-Latn/*.parquet""}]}, {""config_name"": ""war-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/war-Latn/*.parquet""}]}, {""config_name"": ""mag-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mag-Deva/*.parquet""}]}, {""config_name"": ""lug-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lug-Latn/*.parquet""}]}, {""config_name"": ""ile-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ile-Latn/*.parquet""}]}, {""config_name"": ""krc-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/krc-Cyrl/*.parquet""}]}, {""config_name"": ""ava-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ava-Cyrl/*.parquet""}]}, {""config_name"": ""diq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/diq-Latn/*.parquet""}]}, {""config_name"": ""rmy-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rmy-Cyrl/*.parquet""}]}, {""config_name"": ""kbp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kbp-Latn/*.parquet""}]}, {""config_name"": ""lad-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lad-Latn/*.parquet""}]}, {""config_name"": ""kab-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kab-Latn/*.parquet""}]}, {""config_name"": ""tzm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tzm-Latn/*.parquet""}]}, 
{""config_name"": ""hin-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hin-Latn/*.parquet""}]}, {""config_name"": ""sgs-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sgs-Latn/*.parquet""}]}, {""config_name"": ""cor-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cor-Latn/*.parquet""}]}, {""config_name"": ""tcz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tcz-Latn/*.parquet""}]}, {""config_name"": ""pms-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pms-Latn/*.parquet""}]}, {""config_name"": ""ace-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ace-Latn/*.parquet""}]}, {""config_name"": ""csb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/csb-Latn/*.parquet""}]}, {""config_name"": ""tah-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tah-Latn/*.parquet""}]}, {""config_name"": ""tsn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tsn-Latn/*.parquet""}]}, {""config_name"": ""cuk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cuk-Latn/*.parquet""}]}, {""config_name"": ""dzo-Tibt"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dzo-Tibt/*.parquet""}]}, {""config_name"": ""vep-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/vep-Latn/*.parquet""}]}, {""config_name"": ""kaa-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kaa-Cyrl/*.parquet""}]}, {""config_name"": ""mwl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mwl-Latn/*.parquet""}]}, {""config_name"": ""bal-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bal-Arab/*.parquet""}]}, {""config_name"": ""gcr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gcr-Latn/*.parquet""}]}, {""config_name"": ""cfm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cfm-Latn/*.parquet""}]}, {""config_name"": ""nan-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/nan-Latn/*.parquet""}]}, {""config_name"": ""dsb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dsb-Latn/*.parquet""}]}, {""config_name"": ""frr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/frr-Latn/*.parquet""}]}, {""config_name"": ""tpi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tpi-Latn/*.parquet""}]}, {""config_name"": ""pjt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pjt-Latn/*.parquet""}]}, {""config_name"": ""fij-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fij-Latn/*.parquet""}]}, {""config_name"": ""olo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/olo-Latn/*.parquet""}]}, {""config_name"": ""hac-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hac-Arab/*.parquet""}]}, {""config_name"": ""lez-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lez-Cyrl/*.parquet""}]}, {""config_name"": ""smn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/smn-Latn/*.parquet""}]}, {""config_name"": ""ban-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ban-Latn/*.parquet""}]}, {""config_name"": ""bam-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bam-Latn/*.parquet""}]}, {""config_name"": ""bjn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bjn-Latn/*.parquet""}]}, {""config_name"": ""tzm-Tfng"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tzm-Tfng/*.parquet""}]}, {""config_name"": ""ewe-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ewe-Latn/*.parquet""}]}, {""config_name"": ""pdc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pdc-Latn/*.parquet""}]}, {""config_name"": ""ltg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ltg-Latn/*.parquet""}]}, {""config_name"": ""kpv-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kpv-Cyrl/*.parquet""}]}, 
{""config_name"": ""lin-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lin-Latn/*.parquet""}]}, {""config_name"": ""pfl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pfl-Latn/*.parquet""}]}, {""config_name"": ""yue-Hani"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yue-Hani/*.parquet""}]}, {""config_name"": ""nqo-Nkoo"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nqo-Nkoo/*.parquet""}]}, {""config_name"": ""iso-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/iso-Latn/*.parquet""}]}, {""config_name"": ""dyu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dyu-Latn/*.parquet""}]}, {""config_name"": ""rup-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rup-Latn/*.parquet""}]}, {""config_name"": ""aeb-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aeb-Arab/*.parquet""}]}, {""config_name"": ""iba-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/iba-Latn/*.parquet""}]}, {""config_name"": ""gos-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gos-Latn/*.parquet""}]}, {""config_name"": ""nde-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nde-Latn/*.parquet""}]}, {""config_name"": ""tso-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tso-Latn/*.parquet""}]}, {""config_name"": ""uzs-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/uzs-Arab/*.parquet""}]}, {""config_name"": ""krl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/krl-Latn/*.parquet""}]}, {""config_name"": ""mzn-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mzn-Arab/*.parquet""}]}, {""config_name"": ""stq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/stq-Latn/*.parquet""}]}, {""config_name"": ""rmy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rmy-Latn/*.parquet""}]}, {""config_name"": ""avk-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/avk-Latn/*.parquet""}]}, {""config_name"": ""tok-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tok-Latn/*.parquet""}]}, {""config_name"": ""inh-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/inh-Cyrl/*.parquet""}]}, {""config_name"": ""wol-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wol-Latn/*.parquet""}]}, {""config_name"": ""ext-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ext-Latn/*.parquet""}]}, {""config_name"": ""tat-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tat-Latn/*.parquet""}]}, {""config_name"": ""lld-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lld-Latn/*.parquet""}]}, {""config_name"": ""wuu-Hani"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wuu-Hani/*.parquet""}]}, {""config_name"": ""aly-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aly-Latn/*.parquet""}]}, {""config_name"": ""myv-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/myv-Cyrl/*.parquet""}]}, {""config_name"": ""tel-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tel-Latn/*.parquet""}]}, {""config_name"": ""fit-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fit-Latn/*.parquet""}]}, {""config_name"": ""mrw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mrw-Latn/*.parquet""}]}, {""config_name"": ""sms-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sms-Latn/*.parquet""}]}, {""config_name"": ""nav-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nav-Latn/*.parquet""}]}, {""config_name"": ""gil-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gil-Latn/*.parquet""}]}, {""config_name"": ""ayr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ayr-Latn/*.parquet""}]}, {""config_name"": ""goh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/goh-Latn/*.parquet""}]}, 
{""config_name"": ""hne-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hne-Deva/*.parquet""}]}, {""config_name"": ""jbo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/jbo-Latn/*.parquet""}]}, {""config_name"": ""ckm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ckm-Latn/*.parquet""}]}, {""config_name"": ""hil-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hil-Latn/*.parquet""}]}, {""config_name"": ""pon-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pon-Latn/*.parquet""}]}, {""config_name"": ""jam-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/jam-Latn/*.parquet""}]}, {""config_name"": ""bts-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bts-Latn/*.parquet""}]}, {""config_name"": ""azj-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/azj-Cyrl/*.parquet""}]}, {""config_name"": ""mah-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mah-Latn/*.parquet""}]}, {""config_name"": ""ubu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ubu-Latn/*.parquet""}]}, {""config_name"": ""sat-Olck"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sat-Olck/*.parquet""}]}, {""config_name"": ""oss-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/oss-Cyrl/*.parquet""}]}, {""config_name"": ""hui-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hui-Latn/*.parquet""}]}, {""config_name"": ""doi-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/doi-Deva/*.parquet""}]}, {""config_name"": ""swg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/swg-Latn/*.parquet""}]}, {""config_name"": ""tuc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tuc-Latn/*.parquet""}]}, {""config_name"": ""nso-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nso-Latn/*.parquet""}]}, {""config_name"": ""kjh-Cyrl"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/kjh-Cyrl/*.parquet""}]}, {""config_name"": ""awa-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/awa-Deva/*.parquet""}]}, {""config_name"": ""mnw-Mymr"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mnw-Mymr/*.parquet""}]}, {""config_name"": ""jra-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/jra-Latn/*.parquet""}]}, {""config_name"": ""srn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/srn-Latn/*.parquet""}]}, {""config_name"": ""twi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/twi-Latn/*.parquet""}]}, {""config_name"": ""quc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/quc-Latn/*.parquet""}]}, {""config_name"": ""pam-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pam-Latn/*.parquet""}]}, {""config_name"": ""apc-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/apc-Arab/*.parquet""}]}, {""config_name"": ""pma-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pma-Latn/*.parquet""}]}, {""config_name"": ""ven-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ven-Latn/*.parquet""}]}, {""config_name"": ""hif-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hif-Latn/*.parquet""}]}, {""config_name"": ""kng-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kng-Latn/*.parquet""}]}, {""config_name"": ""acm-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/acm-Arab/*.parquet""}]}, {""config_name"": ""mrj-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mrj-Cyrl/*.parquet""}]}, {""config_name"": ""tvl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tvl-Latn/*.parquet""}]}, {""config_name"": ""brx-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/brx-Deva/*.parquet""}]}, {""config_name"": ""aaz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aaz-Latn/*.parquet""}]}, 
{""config_name"": ""ton-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ton-Latn/*.parquet""}]}, {""config_name"": ""ndo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ndo-Latn/*.parquet""}]}, {""config_name"": ""cbk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cbk-Latn/*.parquet""}]}, {""config_name"": ""gom-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gom-Latn/*.parquet""}]}, {""config_name"": ""und-Newa"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/und-Newa/*.parquet""}]}, {""config_name"": ""yao-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yao-Latn/*.parquet""}]}, {""config_name"": ""pan-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pan-Latn/*.parquet""}]}, {""config_name"": ""aln-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aln-Latn/*.parquet""}]}, {""config_name"": ""kpg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kpg-Latn/*.parquet""}]}, {""config_name"": ""non-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/non-Latn/*.parquet""}]}, {""config_name"": ""ksw-Mymr"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ksw-Mymr/*.parquet""}]}, {""config_name"": ""vol-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/vol-Latn/*.parquet""}]}, {""config_name"": ""rmc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rmc-Latn/*.parquet""}]}, {""config_name"": ""bru-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bru-Latn/*.parquet""}]}, {""config_name"": ""kas-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kas-Arab/*.parquet""}]}, {""config_name"": ""tog-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tog-Latn/*.parquet""}]}, {""config_name"": ""rmn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rmn-Latn/*.parquet""}]}, {""config_name"": ""pcd-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/pcd-Latn/*.parquet""}]}, {""config_name"": ""pis-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pis-Latn/*.parquet""}]}, {""config_name"": ""nsn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nsn-Latn/*.parquet""}]}, {""config_name"": ""cmo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cmo-Latn/*.parquet""}]}, {""config_name"": ""blk-Mymr"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/blk-Mymr/*.parquet""}]}, {""config_name"": ""gnn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gnn-Latn/*.parquet""}]}, {""config_name"": ""pck-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pck-Latn/*.parquet""}]}, {""config_name"": ""iou-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/iou-Latn/*.parquet""}]}, {""config_name"": ""und-Sylo"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/und-Sylo/*.parquet""}]}, {""config_name"": ""yml-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yml-Latn/*.parquet""}]}, {""config_name"": ""atj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/atj-Latn/*.parquet""}]}, {""config_name"": ""nia-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nia-Latn/*.parquet""}]}, {""config_name"": ""glv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/glv-Latn/*.parquet""}]}, {""config_name"": ""ium-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ium-Latn/*.parquet""}]}, {""config_name"": ""hnj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hnj-Latn/*.parquet""}]}, {""config_name"": ""fue-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fue-Latn/*.parquet""}]}, {""config_name"": ""ron-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ron-Cyrl/*.parquet""}]}, {""config_name"": ""seh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/seh-Latn/*.parquet""}]}, 
{""config_name"": ""gul-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gul-Latn/*.parquet""}]}, {""config_name"": ""luo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/luo-Latn/*.parquet""}]}, {""config_name"": ""etr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/etr-Latn/*.parquet""}]}, {""config_name"": ""mfe-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mfe-Latn/*.parquet""}]}, {""config_name"": ""und-Hung"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/und-Hung/*.parquet""}]}, {""config_name"": ""und-Dsrt"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/und-Dsrt/*.parquet""}]}, {""config_name"": ""tam-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tam-Latn/*.parquet""}]}, {""config_name"": ""swb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/swb-Latn/*.parquet""}]}, {""config_name"": ""agd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/agd-Latn/*.parquet""}]}, {""config_name"": ""crs-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/crs-Latn/*.parquet""}]}, {""config_name"": ""kac-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kac-Latn/*.parquet""}]}, {""config_name"": ""aoj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aoj-Latn/*.parquet""}]}, {""config_name"": ""enm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/enm-Latn/*.parquet""}]}, {""config_name"": ""her-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/her-Latn/*.parquet""}]}, {""config_name"": ""yap-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yap-Latn/*.parquet""}]}, {""config_name"": ""quz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/quz-Latn/*.parquet""}]}, {""config_name"": ""gcf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gcf-Latn/*.parquet""}]}, {""config_name"": ""fkv-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/fkv-Latn/*.parquet""}]}, {""config_name"": ""tly-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tly-Latn/*.parquet""}]}, {""config_name"": ""szy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/szy-Latn/*.parquet""}]}, {""config_name"": ""rad-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rad-Latn/*.parquet""}]}, {""config_name"": ""umb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/umb-Latn/*.parquet""}]}, {""config_name"": ""hwc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hwc-Latn/*.parquet""}]}, {""config_name"": ""wmw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wmw-Latn/*.parquet""}]}, {""config_name"": ""knv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/knv-Latn/*.parquet""}]}, {""config_name"": ""bxh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bxh-Latn/*.parquet""}]}, {""config_name"": ""mui-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mui-Latn/*.parquet""}]}, {""config_name"": ""mdf-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mdf-Cyrl/*.parquet""}]}, {""config_name"": ""cnk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cnk-Latn/*.parquet""}]}, {""config_name"": ""nbl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nbl-Latn/*.parquet""}]}, {""config_name"": ""llg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/llg-Latn/*.parquet""}]}, {""config_name"": ""spl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/spl-Latn/*.parquet""}]}, {""config_name"": ""abq-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/abq-Cyrl/*.parquet""}]}, {""config_name"": ""mns-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mns-Cyrl/*.parquet""}]}, {""config_name"": ""kyc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kyc-Latn/*.parquet""}]}, 
{""config_name"": ""tnk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tnk-Latn/*.parquet""}]}, {""config_name"": ""ssd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ssd-Latn/*.parquet""}]}, {""config_name"": ""rug-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rug-Latn/*.parquet""}]}, {""config_name"": ""und-Syrc"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/und-Syrc/*.parquet""}]}, {""config_name"": ""alt-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/alt-Cyrl/*.parquet""}]}, {""config_name"": ""quy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/quy-Latn/*.parquet""}]}, {""config_name"": ""lif-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lif-Deva/*.parquet""}]}, {""config_name"": ""nah-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nah-Latn/*.parquet""}]}, {""config_name"": ""nwi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nwi-Latn/*.parquet""}]}, {""config_name"": ""wnu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wnu-Latn/*.parquet""}]}, {""config_name"": ""gfk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gfk-Latn/*.parquet""}]}, {""config_name"": ""iws-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/iws-Latn/*.parquet""}]}, {""config_name"": ""hot-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hot-Latn/*.parquet""}]}, {""config_name"": ""ang-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ang-Latn/*.parquet""}]}, {""config_name"": ""nyy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nyy-Latn/*.parquet""}]}, {""config_name"": ""dob-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dob-Latn/*.parquet""}]}, {""config_name"": ""kpw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kpw-Latn/*.parquet""}]}, {""config_name"": ""kua-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/kua-Latn/*.parquet""}]}, {""config_name"": ""nch-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nch-Latn/*.parquet""}]}, {""config_name"": ""row-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/row-Latn/*.parquet""}]}, {""config_name"": ""trp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/trp-Latn/*.parquet""}]}, {""config_name"": ""koi-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/koi-Cyrl/*.parquet""}]}, {""config_name"": ""for-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/for-Latn/*.parquet""}]}, {""config_name"": ""mal-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mal-Latn/*.parquet""}]}, {""config_name"": ""toi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/toi-Latn/*.parquet""}]}, {""config_name"": ""dng-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dng-Cyrl/*.parquet""}]}, {""config_name"": ""nhg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nhg-Latn/*.parquet""}]}, {""config_name"": ""lue-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lue-Latn/*.parquet""}]}, {""config_name"": ""thl-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/thl-Deva/*.parquet""}]}, {""config_name"": ""sny-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sny-Latn/*.parquet""}]}, {""config_name"": ""soq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/soq-Latn/*.parquet""}]}, {""config_name"": ""mpp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mpp-Latn/*.parquet""}]}, {""config_name"": ""rwo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rwo-Latn/*.parquet""}]}, {""config_name"": ""roo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/roo-Latn/*.parquet""}]}, {""config_name"": ""gaa-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gaa-Latn/*.parquet""}]}, 
{""config_name"": ""ngl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ngl-Latn/*.parquet""}]}, {""config_name"": ""bis-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bis-Latn/*.parquet""}]}, {""config_name"": ""tkl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tkl-Latn/*.parquet""}]}, {""config_name"": ""poi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/poi-Latn/*.parquet""}]}, {""config_name"": ""dad-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dad-Latn/*.parquet""}]}, {""config_name"": ""msy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/msy-Latn/*.parquet""}]}, {""config_name"": ""asm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/asm-Latn/*.parquet""}]}, {""config_name"": ""fuf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fuf-Latn/*.parquet""}]}, {""config_name"": ""rar-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rar-Latn/*.parquet""}]}, {""config_name"": ""tnn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tnn-Latn/*.parquet""}]}, {""config_name"": ""wsk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wsk-Latn/*.parquet""}]}, {""config_name"": ""xal-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xal-Cyrl/*.parquet""}]}, {""config_name"": ""arl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/arl-Latn/*.parquet""}]}, {""config_name"": ""jac-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/jac-Latn/*.parquet""}]}, {""config_name"": ""zyb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zyb-Latn/*.parquet""}]}, {""config_name"": ""guw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/guw-Latn/*.parquet""}]}, {""config_name"": ""mmx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mmx-Latn/*.parquet""}]}, {""config_name"": ""huu-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/huu-Latn/*.parquet""}]}, {""config_name"": ""emi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/emi-Latn/*.parquet""}]}, {""config_name"": ""qxo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/qxo-Latn/*.parquet""}]}, {""config_name"": ""yuj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yuj-Latn/*.parquet""}]}, {""config_name"": ""enq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/enq-Latn/*.parquet""}]}, {""config_name"": ""mni-Mtei"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mni-Mtei/*.parquet""}]}, {""config_name"": ""aii-Syrc"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aii-Syrc/*.parquet""}]}, {""config_name"": ""bon-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bon-Latn/*.parquet""}]}, {""config_name"": ""boa-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/boa-Latn/*.parquet""}]}, {""config_name"": ""gah-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gah-Latn/*.parquet""}]}, {""config_name"": ""sus-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sus-Latn/*.parquet""}]}, {""config_name"": ""kyu-Kali"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kyu-Kali/*.parquet""}]}, {""config_name"": ""bch-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bch-Latn/*.parquet""}]}, {""config_name"": ""mhw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mhw-Latn/*.parquet""}]}, {""config_name"": ""apb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/apb-Latn/*.parquet""}]}, {""config_name"": ""zat-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zat-Latn/*.parquet""}]}, {""config_name"": ""acu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/acu-Latn/*.parquet""}]}, {""config_name"": ""ipi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ipi-Latn/*.parquet""}]}, 
{""config_name"": ""kze-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kze-Latn/*.parquet""}]}, {""config_name"": ""ikt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ikt-Latn/*.parquet""}]}, {""config_name"": ""ong-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ong-Latn/*.parquet""}]}, {""config_name"": ""bci-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bci-Latn/*.parquet""}]}, {""config_name"": ""mps-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mps-Latn/*.parquet""}]}, {""config_name"": ""ssw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ssw-Latn/*.parquet""}]}, {""config_name"": ""ben-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ben-Latn/*.parquet""}]}, {""config_name"": ""gor-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gor-Latn/*.parquet""}]}, {""config_name"": ""bmu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bmu-Latn/*.parquet""}]}, {""config_name"": ""mur-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mur-Latn/*.parquet""}]}, {""config_name"": ""yle-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yle-Latn/*.parquet""}]}, {""config_name"": ""mgh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mgh-Latn/*.parquet""}]}, {""config_name"": ""hmr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hmr-Latn/*.parquet""}]}, {""config_name"": ""cak-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cak-Latn/*.parquet""}]}, {""config_name"": ""avt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/avt-Latn/*.parquet""}]}, {""config_name"": ""kri-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kri-Latn/*.parquet""}]}, {""config_name"": ""yss-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yss-Latn/*.parquet""}]}, {""config_name"": ""ubr-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/ubr-Latn/*.parquet""}]}, {""config_name"": ""snn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/snn-Latn/*.parquet""}]}, {""config_name"": ""brh-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/brh-Arab/*.parquet""}]}, {""config_name"": ""rop-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rop-Latn/*.parquet""}]}, {""config_name"": ""btx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/btx-Latn/*.parquet""}]}, {""config_name"": ""fon-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fon-Latn/*.parquet""}]}, {""config_name"": ""bjp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bjp-Latn/*.parquet""}]}, {""config_name"": ""tum-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tum-Latn/*.parquet""}]}, {""config_name"": ""aau-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aau-Latn/*.parquet""}]}, {""config_name"": ""opm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/opm-Latn/*.parquet""}]}, {""config_name"": ""efi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/efi-Latn/*.parquet""}]}, {""config_name"": ""chu-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/chu-Cyrl/*.parquet""}]}, {""config_name"": ""mnk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mnk-Latn/*.parquet""}]}, {""config_name"": ""tnp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tnp-Latn/*.parquet""}]}, {""config_name"": ""cop-Copt"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cop-Copt/*.parquet""}]}, {""config_name"": ""sbe-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sbe-Latn/*.parquet""}]}, {""config_name"": ""kaq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kaq-Latn/*.parquet""}]}, {""config_name"": ""lww-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lww-Latn/*.parquet""}]}, 
{""config_name"": ""aoi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aoi-Latn/*.parquet""}]}, {""config_name"": ""apz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/apz-Latn/*.parquet""}]}, {""config_name"": ""ffm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ffm-Latn/*.parquet""}]}, {""config_name"": ""wes-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wes-Latn/*.parquet""}]}, {""config_name"": ""dar-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dar-Cyrl/*.parquet""}]}, {""config_name"": ""tke-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tke-Latn/*.parquet""}]}, {""config_name"": ""tgp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tgp-Latn/*.parquet""}]}, {""config_name"": ""got-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/got-Latn/*.parquet""}]}, {""config_name"": ""gag-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gag-Latn/*.parquet""}]}, {""config_name"": ""att-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/att-Latn/*.parquet""}]}, {""config_name"": ""arb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/arb-Latn/*.parquet""}]}, {""config_name"": ""ify-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ify-Latn/*.parquet""}]}, {""config_name"": ""kbh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kbh-Latn/*.parquet""}]}, {""config_name"": ""gaw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gaw-Latn/*.parquet""}]}, {""config_name"": ""rcf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rcf-Latn/*.parquet""}]}, {""config_name"": ""rro-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rro-Latn/*.parquet""}]}, {""config_name"": ""mcq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mcq-Latn/*.parquet""}]}, {""config_name"": ""loz-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/loz-Latn/*.parquet""}]}, {""config_name"": ""cbu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cbu-Latn/*.parquet""}]}, {""config_name"": ""niu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/niu-Latn/*.parquet""}]}, {""config_name"": ""tvk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tvk-Latn/*.parquet""}]}, {""config_name"": ""qup-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/qup-Latn/*.parquet""}]}, {""config_name"": ""kij-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kij-Latn/*.parquet""}]}, {""config_name"": ""tlh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tlh-Latn/*.parquet""}]}, {""config_name"": ""cpc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cpc-Latn/*.parquet""}]}, {""config_name"": ""kqc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kqc-Latn/*.parquet""}]}, {""config_name"": ""kvg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kvg-Latn/*.parquet""}]}, {""config_name"": ""nov-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nov-Latn/*.parquet""}]}, {""config_name"": ""bba-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bba-Latn/*.parquet""}]}, {""config_name"": ""teo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/teo-Latn/*.parquet""}]}, {""config_name"": ""pdt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pdt-Latn/*.parquet""}]}, {""config_name"": ""mux-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mux-Latn/*.parquet""}]}, {""config_name"": ""geb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/geb-Latn/*.parquet""}]}, {""config_name"": ""byr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/byr-Latn/*.parquet""}]}, {""config_name"": ""dao-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dao-Latn/*.parquet""}]}, 
{""config_name"": ""lbj-Tibt"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lbj-Tibt/*.parquet""}]}, {""config_name"": ""kos-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kos-Latn/*.parquet""}]}, {""config_name"": ""zom-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zom-Latn/*.parquet""}]}, {""config_name"": ""nzi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nzi-Latn/*.parquet""}]}, {""config_name"": ""ann-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ann-Latn/*.parquet""}]}, {""config_name"": ""nvm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nvm-Latn/*.parquet""}]}, {""config_name"": ""guz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/guz-Latn/*.parquet""}]}, {""config_name"": ""liv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/liv-Latn/*.parquet""}]}, {""config_name"": ""mkn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mkn-Latn/*.parquet""}]}, {""config_name"": ""kik-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kik-Latn/*.parquet""}]}, {""config_name"": ""nog-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nog-Cyrl/*.parquet""}]}, {""config_name"": ""ady-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ady-Cyrl/*.parquet""}]}, {""config_name"": ""mic-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mic-Latn/*.parquet""}]}, {""config_name"": ""shi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/shi-Latn/*.parquet""}]}, {""config_name"": ""nop-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nop-Latn/*.parquet""}]}, {""config_name"": ""pui-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pui-Latn/*.parquet""}]}, {""config_name"": ""kdl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kdl-Latn/*.parquet""}]}, {""config_name"": ""agm-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/agm-Latn/*.parquet""}]}, {""config_name"": ""mna-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mna-Latn/*.parquet""}]}, {""config_name"": ""mar-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mar-Latn/*.parquet""}]}, {""config_name"": ""smk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/smk-Latn/*.parquet""}]}, {""config_name"": ""chk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/chk-Latn/*.parquet""}]}, {""config_name"": ""myy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/myy-Latn/*.parquet""}]}, {""config_name"": ""kca-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kca-Cyrl/*.parquet""}]}, {""config_name"": ""snp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/snp-Latn/*.parquet""}]}, {""config_name"": ""mti-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mti-Latn/*.parquet""}]}, {""config_name"": ""chd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/chd-Latn/*.parquet""}]}, {""config_name"": ""tzo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tzo-Latn/*.parquet""}]}, {""config_name"": ""nnh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nnh-Latn/*.parquet""}]}, {""config_name"": ""klv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/klv-Latn/*.parquet""}]}, {""config_name"": ""myk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/myk-Latn/*.parquet""}]}, {""config_name"": ""gdn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gdn-Latn/*.parquet""}]}, {""config_name"": ""pao-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pao-Latn/*.parquet""}]}, {""config_name"": ""kea-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kea-Latn/*.parquet""}]}, {""config_name"": ""mni-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mni-Latn/*.parquet""}]}, 
{""config_name"": ""meu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/meu-Latn/*.parquet""}]}, {""config_name"": ""ata-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ata-Latn/*.parquet""}]}, {""config_name"": ""mam-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mam-Latn/*.parquet""}]}, {""config_name"": ""lub-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lub-Latn/*.parquet""}]}, {""config_name"": ""cni-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cni-Latn/*.parquet""}]}, {""config_name"": ""cjv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cjv-Latn/*.parquet""}]}, {""config_name"": ""fuh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fuh-Latn/*.parquet""}]}, {""config_name"": ""prg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/prg-Latn/*.parquet""}]}, {""config_name"": ""suk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/suk-Latn/*.parquet""}]}, {""config_name"": ""hak-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hak-Latn/*.parquet""}]}, {""config_name"": ""tet-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tet-Latn/*.parquet""}]}, {""config_name"": ""ghs-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ghs-Latn/*.parquet""}]}, {""config_name"": ""gng-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gng-Latn/*.parquet""}]}, {""config_name"": ""crn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/crn-Latn/*.parquet""}]}, {""config_name"": ""mdy-Ethi"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mdy-Ethi/*.parquet""}]}, {""config_name"": ""hns-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hns-Latn/*.parquet""}]}, {""config_name"": ""skr-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/skr-Arab/*.parquet""}]}, {""config_name"": ""zas-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/zas-Latn/*.parquet""}]}, {""config_name"": ""arn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/arn-Latn/*.parquet""}]}, {""config_name"": ""ami-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ami-Latn/*.parquet""}]}, {""config_name"": ""gam-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gam-Latn/*.parquet""}]}, {""config_name"": ""bug-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bug-Latn/*.parquet""}]}, {""config_name"": ""cut-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cut-Latn/*.parquet""}]}, {""config_name"": ""crk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/crk-Latn/*.parquet""}]}, {""config_name"": ""fuv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fuv-Latn/*.parquet""}]}, {""config_name"": ""pag-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pag-Latn/*.parquet""}]}, {""config_name"": ""wrs-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wrs-Latn/*.parquet""}]}, {""config_name"": ""moh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/moh-Latn/*.parquet""}]}, {""config_name"": ""cao-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cao-Latn/*.parquet""}]}, {""config_name"": ""nlg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nlg-Latn/*.parquet""}]}, {""config_name"": ""hmo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hmo-Latn/*.parquet""}]}, {""config_name"": ""pwn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pwn-Latn/*.parquet""}]}, {""config_name"": ""vmy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/vmy-Latn/*.parquet""}]}, {""config_name"": ""heg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/heg-Latn/*.parquet""}]}, {""config_name"": ""spm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/spm-Latn/*.parquet""}]}, 
{""config_name"": ""gwi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gwi-Latn/*.parquet""}]}, {""config_name"": ""syb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/syb-Latn/*.parquet""}]}, {""config_name"": ""ian-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ian-Latn/*.parquet""}]}, {""config_name"": ""ppk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ppk-Latn/*.parquet""}]}, {""config_name"": ""akb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/akb-Latn/*.parquet""}]}, {""config_name"": ""raw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/raw-Latn/*.parquet""}]}, {""config_name"": ""nhr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nhr-Latn/*.parquet""}]}, {""config_name"": ""chw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/chw-Latn/*.parquet""}]}, {""config_name"": ""nas-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nas-Latn/*.parquet""}]}, {""config_name"": ""cot-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cot-Latn/*.parquet""}]}, {""config_name"": ""npi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/npi-Latn/*.parquet""}]}, {""config_name"": ""nhe-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nhe-Latn/*.parquet""}]}, {""config_name"": ""dik-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dik-Latn/*.parquet""}]}, {""config_name"": ""kwj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kwj-Latn/*.parquet""}]}, {""config_name"": ""bkd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bkd-Latn/*.parquet""}]}, {""config_name"": ""tku-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tku-Latn/*.parquet""}]}, {""config_name"": ""quh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/quh-Latn/*.parquet""}]}, {""config_name"": ""agr-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/agr-Latn/*.parquet""}]}, {""config_name"": ""pau-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pau-Latn/*.parquet""}]}, {""config_name"": ""lam-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lam-Latn/*.parquet""}]}, {""config_name"": ""kmu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kmu-Latn/*.parquet""}]}, {""config_name"": ""mad-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mad-Latn/*.parquet""}]}, {""config_name"": ""cpy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cpy-Latn/*.parquet""}]}, {""config_name"": ""snc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/snc-Latn/*.parquet""}]}, {""config_name"": ""avu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/avu-Latn/*.parquet""}]}, {""config_name"": ""nbu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nbu-Latn/*.parquet""}]}, {""config_name"": ""yut-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yut-Latn/*.parquet""}]}, {""config_name"": ""vun-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/vun-Latn/*.parquet""}]}, {""config_name"": ""zac-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zac-Latn/*.parquet""}]}, {""config_name"": ""gdg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gdg-Latn/*.parquet""}]}, {""config_name"": ""buk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/buk-Latn/*.parquet""}]}, {""config_name"": ""ota-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ota-Arab/*.parquet""}]}, {""config_name"": ""qub-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/qub-Latn/*.parquet""}]}, {""config_name"": ""dwr-Ethi"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dwr-Ethi/*.parquet""}]}, {""config_name"": ""sue-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sue-Latn/*.parquet""}]}, 
{""config_name"": ""ksd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ksd-Latn/*.parquet""}]}, {""config_name"": ""mak-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mak-Latn/*.parquet""}]}, {""config_name"": ""bmh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bmh-Latn/*.parquet""}]}, {""config_name"": ""kto-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kto-Latn/*.parquet""}]}, {""config_name"": ""azg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/azg-Latn/*.parquet""}]}, {""config_name"": ""kaa-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kaa-Latn/*.parquet""}]}, {""config_name"": ""amf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/amf-Latn/*.parquet""}]}, {""config_name"": ""nfa-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nfa-Latn/*.parquet""}]}, {""config_name"": ""abs-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/abs-Latn/*.parquet""}]}, {""config_name"": ""trq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/trq-Latn/*.parquet""}]}, {""config_name"": ""fub-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fub-Latn/*.parquet""}]}, {""config_name"": ""bhg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bhg-Latn/*.parquet""}]}, {""config_name"": ""uvh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/uvh-Latn/*.parquet""}]}, {""config_name"": ""qxl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/qxl-Latn/*.parquet""}]}, {""config_name"": ""nss-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nss-Latn/*.parquet""}]}, {""config_name"": ""ndc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ndc-Latn/*.parquet""}]}, {""config_name"": ""mup-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mup-Deva/*.parquet""}]}, {""config_name"": ""zia-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/zia-Latn/*.parquet""}]}, {""config_name"": ""bem-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bem-Latn/*.parquet""}]}, {""config_name"": ""bvr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bvr-Latn/*.parquet""}]}, {""config_name"": ""aoz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aoz-Latn/*.parquet""}]}, {""config_name"": ""yal-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yal-Latn/*.parquet""}]}, {""config_name"": ""ixl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ixl-Latn/*.parquet""}]}, {""config_name"": ""ach-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ach-Latn/*.parquet""}]}, {""config_name"": ""lbk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lbk-Latn/*.parquet""}]}, {""config_name"": ""bnp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bnp-Latn/*.parquet""}]}, {""config_name"": ""lbb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lbb-Latn/*.parquet""}]}, {""config_name"": ""cac-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cac-Latn/*.parquet""}]}, {""config_name"": ""otw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/otw-Latn/*.parquet""}]}, {""config_name"": ""tbo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tbo-Latn/*.parquet""}]}, {""config_name"": ""nhi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nhi-Latn/*.parquet""}]}, {""config_name"": ""naq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/naq-Latn/*.parquet""}]}, {""config_name"": ""tzj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tzj-Latn/*.parquet""}]}, {""config_name"": ""mmo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mmo-Latn/*.parquet""}]}, {""config_name"": ""dgz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dgz-Latn/*.parquet""}]}, 
{""config_name"": ""bbr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bbr-Latn/*.parquet""}]}, {""config_name"": ""nin-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nin-Latn/*.parquet""}]}, {""config_name"": ""taq-Tfng"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/taq-Tfng/*.parquet""}]}, {""config_name"": ""cgc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cgc-Latn/*.parquet""}]}, {""config_name"": ""dak-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dak-Latn/*.parquet""}]}, {""config_name"": ""okv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/okv-Latn/*.parquet""}]}, {""config_name"": ""mxv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mxv-Latn/*.parquet""}]}, {""config_name"": ""mos-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mos-Latn/*.parquet""}]}, {""config_name"": ""aom-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aom-Latn/*.parquet""}]}, {""config_name"": ""wlx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wlx-Latn/*.parquet""}]}, {""config_name"": ""car-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/car-Latn/*.parquet""}]}, {""config_name"": ""jvn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/jvn-Latn/*.parquet""}]}, {""config_name"": ""yrk-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yrk-Cyrl/*.parquet""}]}, {""config_name"": ""mvn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mvn-Latn/*.parquet""}]}, {""config_name"": ""tmd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tmd-Latn/*.parquet""}]}, {""config_name"": ""omw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/omw-Latn/*.parquet""}]}, {""config_name"": ""kyz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kyz-Latn/*.parquet""}]}, {""config_name"": ""suc-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/suc-Latn/*.parquet""}]}, {""config_name"": ""lgl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lgl-Latn/*.parquet""}]}, {""config_name"": ""zpt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zpt-Latn/*.parquet""}]}, {""config_name"": ""zao-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zao-Latn/*.parquet""}]}, {""config_name"": ""nyn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nyn-Latn/*.parquet""}]}, {""config_name"": ""rmn-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rmn-Cyrl/*.parquet""}]}, {""config_name"": ""kkc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kkc-Latn/*.parquet""}]}, {""config_name"": ""kpr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kpr-Latn/*.parquet""}]}, {""config_name"": ""wos-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wos-Latn/*.parquet""}]}, {""config_name"": ""mbb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mbb-Latn/*.parquet""}]}, {""config_name"": ""kbm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kbm-Latn/*.parquet""}]}, {""config_name"": ""ajp-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ajp-Arab/*.parquet""}]}, {""config_name"": ""caa-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/caa-Latn/*.parquet""}]}, {""config_name"": ""dgc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dgc-Latn/*.parquet""}]}, {""config_name"": ""leu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/leu-Latn/*.parquet""}]}, {""config_name"": ""upv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/upv-Latn/*.parquet""}]}, {""config_name"": ""qvc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/qvc-Latn/*.parquet""}]}, {""config_name"": ""njn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/njn-Latn/*.parquet""}]}, 
{""config_name"": ""tab-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tab-Cyrl/*.parquet""}]}, {""config_name"": ""faa-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/faa-Latn/*.parquet""}]}, {""config_name"": ""ape-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ape-Latn/*.parquet""}]}, {""config_name"": ""apt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/apt-Latn/*.parquet""}]}, {""config_name"": ""kmh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kmh-Latn/*.parquet""}]}, {""config_name"": ""lmk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lmk-Latn/*.parquet""}]}, {""config_name"": ""kne-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kne-Latn/*.parquet""}]}, {""config_name"": ""tay-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tay-Latn/*.parquet""}]}, {""config_name"": ""lua-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lua-Latn/*.parquet""}]}, {""config_name"": ""und-Shaw"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/und-Shaw/*.parquet""}]}, {""config_name"": ""dtp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dtp-Latn/*.parquet""}]}, {""config_name"": ""mas-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mas-Latn/*.parquet""}]}, {""config_name"": ""tbg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tbg-Latn/*.parquet""}]}, {""config_name"": ""ckt-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ckt-Cyrl/*.parquet""}]}, {""config_name"": ""wal-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wal-Latn/*.parquet""}]}, {""config_name"": ""gyr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gyr-Latn/*.parquet""}]}, {""config_name"": ""bdd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bdd-Latn/*.parquet""}]}, {""config_name"": ""sid-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/sid-Latn/*.parquet""}]}, {""config_name"": ""yka-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yka-Latn/*.parquet""}]}, {""config_name"": ""kan-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kan-Latn/*.parquet""}]}, {""config_name"": ""jiv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/jiv-Latn/*.parquet""}]}, {""config_name"": ""sil-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sil-Latn/*.parquet""}]}, {""config_name"": ""trv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/trv-Latn/*.parquet""}]}, {""config_name"": ""wls-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wls-Latn/*.parquet""}]}, {""config_name"": ""mlh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mlh-Latn/*.parquet""}]}, {""config_name"": ""suz-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/suz-Deva/*.parquet""}]}, {""config_name"": ""tca-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tca-Latn/*.parquet""}]}, {""config_name"": ""eve-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/eve-Cyrl/*.parquet""}]}, {""config_name"": ""dga-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dga-Latn/*.parquet""}]}, {""config_name"": ""kmg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kmg-Latn/*.parquet""}]}, {""config_name"": ""enl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/enl-Latn/*.parquet""}]}, {""config_name"": ""czt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/czt-Latn/*.parquet""}]}, {""config_name"": ""kew-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kew-Latn/*.parquet""}]}, {""config_name"": ""mpx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mpx-Latn/*.parquet""}]}, {""config_name"": ""pnt-Grek"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pnt-Grek/*.parquet""}]}, 
{""config_name"": ""med-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/med-Latn/*.parquet""}]}, {""config_name"": ""ory-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ory-Latn/*.parquet""}]}, {""config_name"": ""jae-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/jae-Latn/*.parquet""}]}, {""config_name"": ""pad-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pad-Latn/*.parquet""}]}, {""config_name"": ""ppo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ppo-Latn/*.parquet""}]}, {""config_name"": ""bus-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bus-Latn/*.parquet""}]}, {""config_name"": ""wuv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wuv-Latn/*.parquet""}]}, {""config_name"": ""bbb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bbb-Latn/*.parquet""}]}, {""config_name"": ""lex-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lex-Latn/*.parquet""}]}, {""config_name"": ""dah-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dah-Latn/*.parquet""}]}, {""config_name"": ""guj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/guj-Latn/*.parquet""}]}, {""config_name"": ""mog-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mog-Latn/*.parquet""}]}, {""config_name"": ""khz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/khz-Latn/*.parquet""}]}, {""config_name"": ""ncj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ncj-Latn/*.parquet""}]}, {""config_name"": ""uvl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/uvl-Latn/*.parquet""}]}, {""config_name"": ""adi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/adi-Latn/*.parquet""}]}, {""config_name"": ""msb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/msb-Latn/*.parquet""}]}, {""config_name"": ""pib-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/pib-Latn/*.parquet""}]}, {""config_name"": ""abt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/abt-Latn/*.parquet""}]}, {""config_name"": ""kdc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kdc-Latn/*.parquet""}]}, {""config_name"": ""sda-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sda-Latn/*.parquet""}]}, {""config_name"": ""nca-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nca-Latn/*.parquet""}]}, {""config_name"": ""csw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/csw-Latn/*.parquet""}]}, {""config_name"": ""tte-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tte-Latn/*.parquet""}]}, {""config_name"": ""mxt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mxt-Latn/*.parquet""}]}, {""config_name"": ""sag-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sag-Latn/*.parquet""}]}, {""config_name"": ""top-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/top-Latn/*.parquet""}]}, {""config_name"": ""zpo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zpo-Latn/*.parquet""}]}, {""config_name"": ""clu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/clu-Latn/*.parquet""}]}, {""config_name"": ""bgs-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bgs-Latn/*.parquet""}]}, {""config_name"": ""guc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/guc-Latn/*.parquet""}]}, {""config_name"": ""nak-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nak-Latn/*.parquet""}]}, {""config_name"": ""yom-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yom-Latn/*.parquet""}]}, {""config_name"": ""ada-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ada-Latn/*.parquet""}]}, {""config_name"": ""cub-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cub-Latn/*.parquet""}]}, 
{""config_name"": ""mph-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mph-Latn/*.parquet""}]}, {""config_name"": ""und-Gran"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/und-Gran/*.parquet""}]}, {""config_name"": ""sdc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sdc-Latn/*.parquet""}]}, {""config_name"": ""tap-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tap-Latn/*.parquet""}]}, {""config_name"": ""maa-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/maa-Latn/*.parquet""}]}, {""config_name"": ""bjr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bjr-Latn/*.parquet""}]}, {""config_name"": ""sus-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sus-Arab/*.parquet""}]}, {""config_name"": ""hch-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hch-Latn/*.parquet""}]}, {""config_name"": ""ino-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ino-Latn/*.parquet""}]}, {""config_name"": ""adz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/adz-Latn/*.parquet""}]}, {""config_name"": ""taj-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/taj-Deva/*.parquet""}]}, {""config_name"": ""zai-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zai-Latn/*.parquet""}]}, {""config_name"": ""rjs-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rjs-Deva/*.parquet""}]}, {""config_name"": ""aso-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aso-Latn/*.parquet""}]}, {""config_name"": ""dhm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dhm-Latn/*.parquet""}]}, {""config_name"": ""lki-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lki-Arab/*.parquet""}]}, {""config_name"": ""lun-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lun-Latn/*.parquet""}]}, {""config_name"": ""mvp-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/mvp-Latn/*.parquet""}]}, {""config_name"": ""dop-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dop-Latn/*.parquet""}]}, {""config_name"": ""snf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/snf-Latn/*.parquet""}]}, {""config_name"": ""ojb-Cans"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ojb-Cans/*.parquet""}]}, {""config_name"": ""mwv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mwv-Latn/*.parquet""}]}, {""config_name"": ""ttc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ttc-Latn/*.parquet""}]}, {""config_name"": ""emp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/emp-Latn/*.parquet""}]}, {""config_name"": ""lfn-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lfn-Cyrl/*.parquet""}]}, {""config_name"": ""cab-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cab-Latn/*.parquet""}]}, {""config_name"": ""pwg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pwg-Latn/*.parquet""}]}, {""config_name"": ""kmr-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kmr-Cyrl/*.parquet""}]}, {""config_name"": ""mek-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mek-Latn/*.parquet""}]}, {""config_name"": ""mbh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mbh-Latn/*.parquet""}]}, {""config_name"": ""ttq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ttq-Latn/*.parquet""}]}, {""config_name"": ""swc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/swc-Latn/*.parquet""}]}, {""config_name"": ""nhw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nhw-Latn/*.parquet""}]}, {""config_name"": ""tsg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tsg-Latn/*.parquet""}]}, {""config_name"": ""aim-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aim-Latn/*.parquet""}]}, 
{""config_name"": ""bzi-Thai"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bzi-Thai/*.parquet""}]}, {""config_name"": ""ote-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ote-Latn/*.parquet""}]}, {""config_name"": ""djk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/djk-Latn/*.parquet""}]}, {""config_name"": ""ots-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ots-Latn/*.parquet""}]}, {""config_name"": ""tuf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tuf-Latn/*.parquet""}]}, {""config_name"": ""crl-Cans"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/crl-Cans/*.parquet""}]}, {""config_name"": ""kek-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kek-Latn/*.parquet""}]}, {""config_name"": ""shp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/shp-Latn/*.parquet""}]}, {""config_name"": ""npy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/npy-Latn/*.parquet""}]}, {""config_name"": ""kwn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kwn-Latn/*.parquet""}]}, {""config_name"": ""kzj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kzj-Latn/*.parquet""}]}, {""config_name"": ""mza-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mza-Latn/*.parquet""}]}, {""config_name"": ""ngu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ngu-Latn/*.parquet""}]}, {""config_name"": ""ssx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ssx-Latn/*.parquet""}]}, {""config_name"": ""cta-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cta-Latn/*.parquet""}]}, {""config_name"": ""msk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/msk-Latn/*.parquet""}]}, {""config_name"": ""sab-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sab-Latn/*.parquet""}]}, {""config_name"": ""klt-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/klt-Latn/*.parquet""}]}, {""config_name"": ""tuk-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tuk-Cyrl/*.parquet""}]}, {""config_name"": ""mjc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mjc-Latn/*.parquet""}]}, {""config_name"": ""qxn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/qxn-Latn/*.parquet""}]}, {""config_name"": ""wbp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wbp-Latn/*.parquet""}]}, {""config_name"": ""kjb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kjb-Latn/*.parquet""}]}, {""config_name"": ""laj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/laj-Latn/*.parquet""}]}, {""config_name"": ""tll-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tll-Latn/*.parquet""}]}, {""config_name"": ""ded-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ded-Latn/*.parquet""}]}, {""config_name"": ""msc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/msc-Latn/*.parquet""}]}, {""config_name"": ""nif-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nif-Latn/*.parquet""}]}, {""config_name"": ""hvn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hvn-Latn/*.parquet""}]}, {""config_name"": ""bhl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bhl-Latn/*.parquet""}]}, {""config_name"": ""gvn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gvn-Latn/*.parquet""}]}, {""config_name"": ""knc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/knc-Latn/*.parquet""}]}, {""config_name"": ""kpx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kpx-Latn/*.parquet""}]}, {""config_name"": ""nho-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nho-Latn/*.parquet""}]}, {""config_name"": ""rmq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rmq-Latn/*.parquet""}]}, 
{""config_name"": ""crx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/crx-Latn/*.parquet""}]}, {""config_name"": ""sml-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sml-Latn/*.parquet""}]}, {""config_name"": ""xtn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xtn-Latn/*.parquet""}]}, {""config_name"": ""sxb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sxb-Latn/*.parquet""}]}, {""config_name"": ""adj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/adj-Latn/*.parquet""}]}, {""config_name"": ""sop-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sop-Latn/*.parquet""}]}, {""config_name"": ""kup-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kup-Latn/*.parquet""}]}, {""config_name"": ""tod-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tod-Latn/*.parquet""}]}, {""config_name"": ""apr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/apr-Latn/*.parquet""}]}, {""config_name"": ""akh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/akh-Latn/*.parquet""}]}, {""config_name"": ""zyp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zyp-Latn/*.parquet""}]}, {""config_name"": ""sxn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sxn-Latn/*.parquet""}]}, {""config_name"": ""lbe-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lbe-Cyrl/*.parquet""}]}, {""config_name"": ""acf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/acf-Latn/*.parquet""}]}, {""config_name"": ""big-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/big-Latn/*.parquet""}]}, {""config_name"": ""kzf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kzf-Latn/*.parquet""}]}, {""config_name"": ""cbr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cbr-Latn/*.parquet""}]}, {""config_name"": ""esk-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/esk-Latn/*.parquet""}]}, {""config_name"": ""kpf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kpf-Latn/*.parquet""}]}, {""config_name"": ""blz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/blz-Latn/*.parquet""}]}, {""config_name"": ""naf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/naf-Latn/*.parquet""}]}, {""config_name"": ""mif-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mif-Latn/*.parquet""}]}, {""config_name"": ""alp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/alp-Latn/*.parquet""}]}, {""config_name"": ""ish-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ish-Latn/*.parquet""}]}, {""config_name"": ""ibg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ibg-Latn/*.parquet""}]}, {""config_name"": ""cax-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cax-Latn/*.parquet""}]}, {""config_name"": ""sim-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sim-Latn/*.parquet""}]}, {""config_name"": ""kam-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kam-Latn/*.parquet""}]}, {""config_name"": ""zdj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zdj-Latn/*.parquet""}]}, {""config_name"": ""fai-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fai-Latn/*.parquet""}]}, {""config_name"": ""kqf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kqf-Latn/*.parquet""}]}, {""config_name"": ""awx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/awx-Latn/*.parquet""}]}, {""config_name"": ""rtm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rtm-Latn/*.parquet""}]}, {""config_name"": ""taq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/taq-Latn/*.parquet""}]}, {""config_name"": ""syl-Beng"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/syl-Beng/*.parquet""}]}, 
{""config_name"": ""yva-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yva-Latn/*.parquet""}]}, {""config_name"": ""tkr-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tkr-Cyrl/*.parquet""}]}, {""config_name"": ""maz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/maz-Latn/*.parquet""}]}, {""config_name"": ""nus-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nus-Latn/*.parquet""}]}, {""config_name"": ""nii-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nii-Latn/*.parquet""}]}, {""config_name"": ""bjn-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bjn-Arab/*.parquet""}]}, {""config_name"": ""swp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/swp-Latn/*.parquet""}]}, {""config_name"": ""atb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/atb-Latn/*.parquet""}]}, {""config_name"": ""esu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/esu-Latn/*.parquet""}]}, {""config_name"": ""gjn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gjn-Latn/*.parquet""}]}, {""config_name"": ""qvh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/qvh-Latn/*.parquet""}]}, {""config_name"": ""mip-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mip-Latn/*.parquet""}]}, {""config_name"": ""kpe-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kpe-Latn/*.parquet""}]}, {""config_name"": ""hus-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hus-Latn/*.parquet""}]}, {""config_name"": ""amu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/amu-Latn/*.parquet""}]}, {""config_name"": ""mfq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mfq-Latn/*.parquet""}]}, {""config_name"": ""sgc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sgc-Latn/*.parquet""}]}, {""config_name"": ""abx-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/abx-Latn/*.parquet""}]}, {""config_name"": ""yli-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yli-Latn/*.parquet""}]}, {""config_name"": ""isd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/isd-Latn/*.parquet""}]}, {""config_name"": ""eri-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/eri-Latn/*.parquet""}]}, {""config_name"": ""bin-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bin-Latn/*.parquet""}]}, {""config_name"": ""gmv-Ethi"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gmv-Ethi/*.parquet""}]}, {""config_name"": ""snw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/snw-Latn/*.parquet""}]}, {""config_name"": ""acr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/acr-Latn/*.parquet""}]}, {""config_name"": ""dhg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dhg-Latn/*.parquet""}]}, {""config_name"": ""taw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/taw-Latn/*.parquet""}]}, {""config_name"": ""eko-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/eko-Latn/*.parquet""}]}, {""config_name"": ""qvn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/qvn-Latn/*.parquet""}]}, {""config_name"": ""mxx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mxx-Latn/*.parquet""}]}, {""config_name"": ""mva-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mva-Latn/*.parquet""}]}, {""config_name"": ""mge-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mge-Latn/*.parquet""}]}, {""config_name"": ""kyg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kyg-Latn/*.parquet""}]}, {""config_name"": ""arq-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/arq-Arab/*.parquet""}]}, {""config_name"": ""kms-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kms-Latn/*.parquet""}]}, 
{""config_name"": ""gum-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gum-Latn/*.parquet""}]}, {""config_name"": ""wnc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wnc-Latn/*.parquet""}]}, {""config_name"": ""qvw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/qvw-Latn/*.parquet""}]}, {""config_name"": ""kbo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kbo-Latn/*.parquet""}]}, {""config_name"": ""kmk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kmk-Latn/*.parquet""}]}, {""config_name"": ""tar-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tar-Latn/*.parquet""}]}, {""config_name"": ""zpm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zpm-Latn/*.parquet""}]}, {""config_name"": ""kbq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kbq-Latn/*.parquet""}]}, {""config_name"": ""ptu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ptu-Latn/*.parquet""}]}, {""config_name"": ""bno-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bno-Latn/*.parquet""}]}, {""config_name"": ""kmb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kmb-Latn/*.parquet""}]}, {""config_name"": ""not-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/not-Latn/*.parquet""}]}, {""config_name"": ""mfy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mfy-Latn/*.parquet""}]}, {""config_name"": ""ntu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ntu-Latn/*.parquet""}]}, {""config_name"": ""cbt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cbt-Latn/*.parquet""}]}, {""config_name"": ""men-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/men-Latn/*.parquet""}]}, {""config_name"": ""xmm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xmm-Latn/*.parquet""}]}, {""config_name"": ""tby-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/tby-Latn/*.parquet""}]}, {""config_name"": ""mpm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mpm-Latn/*.parquet""}]}, {""config_name"": ""bgr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bgr-Latn/*.parquet""}]}, {""config_name"": ""cho-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cho-Latn/*.parquet""}]}, {""config_name"": ""dru-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dru-Latn/*.parquet""}]}, {""config_name"": ""btd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/btd-Latn/*.parquet""}]}, {""config_name"": ""sgw-Ethi"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sgw-Ethi/*.parquet""}]}, {""config_name"": ""nuj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nuj-Latn/*.parquet""}]}, {""config_name"": ""dje-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dje-Latn/*.parquet""}]}, {""config_name"": ""tlf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tlf-Latn/*.parquet""}]}, {""config_name"": ""yuw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yuw-Latn/*.parquet""}]}, {""config_name"": ""kas-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kas-Latn/*.parquet""}]}, {""config_name"": ""lzh-Hani"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lzh-Hani/*.parquet""}]}, {""config_name"": ""miq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/miq-Latn/*.parquet""}]}, {""config_name"": ""lrc-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lrc-Arab/*.parquet""}]}, {""config_name"": ""mhx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mhx-Latn/*.parquet""}]}, {""config_name"": ""xog-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xog-Latn/*.parquet""}]}, {""config_name"": ""myw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/myw-Latn/*.parquet""}]}, 
{""config_name"": ""ktu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ktu-Latn/*.parquet""}]}, {""config_name"": ""shu-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/shu-Arab/*.parquet""}]}, {""config_name"": ""syl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/syl-Latn/*.parquet""}]}, {""config_name"": ""sxw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sxw-Latn/*.parquet""}]}, {""config_name"": ""shk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/shk-Latn/*.parquet""}]}, {""config_name"": ""dyo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dyo-Latn/*.parquet""}]}, {""config_name"": ""tiv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tiv-Latn/*.parquet""}]}, {""config_name"": ""mbt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mbt-Latn/*.parquet""}]}, {""config_name"": ""zam-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zam-Latn/*.parquet""}]}, {""config_name"": ""bzj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bzj-Latn/*.parquet""}]}, {""config_name"": ""lia-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lia-Latn/*.parquet""}]}, {""config_name"": ""kqn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kqn-Latn/*.parquet""}]}, {""config_name"": ""csy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/csy-Latn/*.parquet""}]}, {""config_name"": ""tif-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tif-Latn/*.parquet""}]}, {""config_name"": ""twu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/twu-Latn/*.parquet""}]}, {""config_name"": ""ojb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ojb-Latn/*.parquet""}]}, {""config_name"": ""pmx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pmx-Latn/*.parquet""}]}, {""config_name"": ""mqy-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/mqy-Latn/*.parquet""}]}, {""config_name"": ""srr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/srr-Latn/*.parquet""}]}, {""config_name"": ""gbi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gbi-Latn/*.parquet""}]}, {""config_name"": ""rkb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rkb-Latn/*.parquet""}]}, {""config_name"": ""xsr-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xsr-Deva/*.parquet""}]}, {""config_name"": ""ktm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ktm-Latn/*.parquet""}]}, {""config_name"": ""mej-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mej-Latn/*.parquet""}]}, {""config_name"": ""txu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/txu-Latn/*.parquet""}]}, {""config_name"": ""tcs-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tcs-Latn/*.parquet""}]}, {""config_name"": ""blw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/blw-Latn/*.parquet""}]}, {""config_name"": ""pmf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pmf-Latn/*.parquet""}]}, {""config_name"": ""aui-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aui-Latn/*.parquet""}]}, {""config_name"": ""kix-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kix-Latn/*.parquet""}]}, {""config_name"": ""sps-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sps-Latn/*.parquet""}]}, {""config_name"": ""kru-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kru-Deva/*.parquet""}]}, {""config_name"": ""kqw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kqw-Latn/*.parquet""}]}, {""config_name"": ""nbc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nbc-Latn/*.parquet""}]}, {""config_name"": ""lsi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lsi-Latn/*.parquet""}]}, 
{""config_name"": ""fud-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fud-Latn/*.parquet""}]}, {""config_name"": ""ewo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ewo-Latn/*.parquet""}]}, {""config_name"": ""ain-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ain-Latn/*.parquet""}]}, {""config_name"": ""gai-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gai-Latn/*.parquet""}]}, {""config_name"": ""bdq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bdq-Latn/*.parquet""}]}, {""config_name"": ""snd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/snd-Latn/*.parquet""}]}, {""config_name"": ""bsp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bsp-Latn/*.parquet""}]}, {""config_name"": ""snd-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/snd-Deva/*.parquet""}]}, {""config_name"": ""sas-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sas-Latn/*.parquet""}]}, {""config_name"": ""boj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/boj-Latn/*.parquet""}]}, {""config_name"": ""xbi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xbi-Latn/*.parquet""}]}, {""config_name"": ""lud-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lud-Latn/*.parquet""}]}, {""config_name"": ""auy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/auy-Latn/*.parquet""}]}, {""config_name"": ""nzm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nzm-Latn/*.parquet""}]}, {""config_name"": ""kbc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kbc-Latn/*.parquet""}]}, {""config_name"": ""gnb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gnb-Latn/*.parquet""}]}, {""config_name"": ""tzh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tzh-Latn/*.parquet""}]}, {""config_name"": ""lew-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/lew-Latn/*.parquet""}]}, {""config_name"": ""icr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/icr-Latn/*.parquet""}]}, {""config_name"": ""yby-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yby-Latn/*.parquet""}]}, {""config_name"": ""mtg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mtg-Latn/*.parquet""}]}, {""config_name"": ""gwr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gwr-Latn/*.parquet""}]}, {""config_name"": ""agt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/agt-Latn/*.parquet""}]}, {""config_name"": ""hla-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hla-Latn/*.parquet""}]}, {""config_name"": ""kwy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kwy-Latn/*.parquet""}]}, {""config_name"": ""bkl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bkl-Latn/*.parquet""}]}, {""config_name"": ""quw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/quw-Latn/*.parquet""}]}, {""config_name"": ""cpb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cpb-Latn/*.parquet""}]}, {""config_name"": ""too-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/too-Latn/*.parquet""}]}, {""config_name"": ""tig-Ethi"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tig-Ethi/*.parquet""}]}, {""config_name"": ""kgr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kgr-Latn/*.parquet""}]}, {""config_name"": ""agn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/agn-Latn/*.parquet""}]}, {""config_name"": ""cwt-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cwt-Latn/*.parquet""}]}, {""config_name"": ""obo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/obo-Latn/*.parquet""}]}, {""config_name"": ""tos-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tos-Latn/*.parquet""}]}, 
{""config_name"": ""yrl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yrl-Latn/*.parquet""}]}, {""config_name"": ""rgu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rgu-Latn/*.parquet""}]}, {""config_name"": ""ctu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ctu-Latn/*.parquet""}]}, {""config_name"": ""tih-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tih-Latn/*.parquet""}]}, {""config_name"": ""bum-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bum-Latn/*.parquet""}]}, {""config_name"": ""pot-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pot-Latn/*.parquet""}]}, {""config_name"": ""bdh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bdh-Latn/*.parquet""}]}, {""config_name"": ""mbf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mbf-Latn/*.parquet""}]}, {""config_name"": ""wat-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wat-Latn/*.parquet""}]}, {""config_name"": ""tdx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tdx-Latn/*.parquet""}]}, {""config_name"": ""poh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/poh-Latn/*.parquet""}]}, {""config_name"": ""ruf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ruf-Latn/*.parquet""}]}, {""config_name"": ""lhi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lhi-Latn/*.parquet""}]}, {""config_name"": ""qul-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/qul-Latn/*.parquet""}]}, {""config_name"": ""muh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/muh-Latn/*.parquet""}]}, {""config_name"": ""sll-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sll-Latn/*.parquet""}]}, {""config_name"": ""ntp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ntp-Latn/*.parquet""}]}, {""config_name"": ""nsu-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/nsu-Latn/*.parquet""}]}, {""config_name"": ""cya-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cya-Latn/*.parquet""}]}, {""config_name"": ""kas-Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kas-Deva/*.parquet""}]}, {""config_name"": ""nbe-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nbe-Latn/*.parquet""}]}, {""config_name"": ""mwp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mwp-Latn/*.parquet""}]}, {""config_name"": ""dhv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dhv-Latn/*.parquet""}]}, {""config_name"": ""gux-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gux-Latn/*.parquet""}]}, {""config_name"": ""kck-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kck-Latn/*.parquet""}]}, {""config_name"": ""aey-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aey-Latn/*.parquet""}]}, {""config_name"": ""mcd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mcd-Latn/*.parquet""}]}, {""config_name"": ""lnd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lnd-Latn/*.parquet""}]}, {""config_name"": ""fat-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/fat-Latn/*.parquet""}]}, {""config_name"": ""spp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/spp-Latn/*.parquet""}]}, {""config_name"": ""tlb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tlb-Latn/*.parquet""}]}, {""config_name"": ""bim-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bim-Latn/*.parquet""}]}, {""config_name"": ""sat-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sat-Latn/*.parquet""}]}, {""config_name"": ""xmv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xmv-Latn/*.parquet""}]}, {""config_name"": ""way-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/way-Latn/*.parquet""}]}, 
{""config_name"": ""ljp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ljp-Latn/*.parquet""}]}, {""config_name"": ""moc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/moc-Latn/*.parquet""}]}, {""config_name"": ""gym-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gym-Latn/*.parquet""}]}, {""config_name"": ""zos-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zos-Latn/*.parquet""}]}, {""config_name"": ""hto-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hto-Latn/*.parquet""}]}, {""config_name"": ""kby-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kby-Latn/*.parquet""}]}, {""config_name"": ""bef-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bef-Latn/*.parquet""}]}, {""config_name"": ""ikw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ikw-Latn/*.parquet""}]}, {""config_name"": ""ria-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ria-Latn/*.parquet""}]}, {""config_name"": ""bzd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bzd-Latn/*.parquet""}]}, {""config_name"": ""lid-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lid-Latn/*.parquet""}]}, {""config_name"": ""yaa-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yaa-Latn/*.parquet""}]}, {""config_name"": ""kdi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kdi-Latn/*.parquet""}]}, {""config_name"": ""myu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/myu-Latn/*.parquet""}]}, {""config_name"": ""tsz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tsz-Latn/*.parquet""}]}, {""config_name"": ""skg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/skg-Latn/*.parquet""}]}, {""config_name"": ""nmz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nmz-Latn/*.parquet""}]}, {""config_name"": ""ptp-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/ptp-Latn/*.parquet""}]}, {""config_name"": ""njz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/njz-Latn/*.parquet""}]}, {""config_name"": ""poe-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/poe-Latn/*.parquet""}]}, {""config_name"": ""njm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/njm-Latn/*.parquet""}]}, {""config_name"": ""ivb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ivb-Latn/*.parquet""}]}, {""config_name"": ""mwc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mwc-Latn/*.parquet""}]}, {""config_name"": ""dis-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dis-Latn/*.parquet""}]}, {""config_name"": ""myb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/myb-Latn/*.parquet""}]}, {""config_name"": ""waj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/waj-Latn/*.parquet""}]}, {""config_name"": ""bps-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bps-Latn/*.parquet""}]}, {""config_name"": ""dua-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dua-Latn/*.parquet""}]}, {""config_name"": ""bas-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bas-Latn/*.parquet""}]}, {""config_name"": ""nyu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nyu-Latn/*.parquet""}]}, {""config_name"": ""nmf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nmf-Latn/*.parquet""}]}, {""config_name"": ""tfr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tfr-Latn/*.parquet""}]}, {""config_name"": ""bqc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bqc-Latn/*.parquet""}]}, {""config_name"": ""ajz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ajz-Latn/*.parquet""}]}, {""config_name"": ""got-Goth"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/got-Goth/*.parquet""}]}, 
{""config_name"": ""tgo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tgo-Latn/*.parquet""}]}, {""config_name"": ""wbm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wbm-Latn/*.parquet""}]}, {""config_name"": ""kyq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kyq-Latn/*.parquet""}]}, {""config_name"": ""aby-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aby-Latn/*.parquet""}]}, {""config_name"": ""rej-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rej-Latn/*.parquet""}]}, {""config_name"": ""amm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/amm-Latn/*.parquet""}]}, {""config_name"": ""nnw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nnw-Latn/*.parquet""}]}, {""config_name"": ""crk-Cans"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/crk-Cans/*.parquet""}]}, {""config_name"": ""cmr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cmr-Latn/*.parquet""}]}, {""config_name"": ""hub-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hub-Latn/*.parquet""}]}, {""config_name"": ""nij-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nij-Latn/*.parquet""}]}, {""config_name"": ""srm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/srm-Latn/*.parquet""}]}, {""config_name"": ""tnc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tnc-Latn/*.parquet""}]}, {""config_name"": ""plu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/plu-Latn/*.parquet""}]}, {""config_name"": ""atq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/atq-Latn/*.parquet""}]}, {""config_name"": ""xla-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xla-Latn/*.parquet""}]}, {""config_name"": ""mjw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mjw-Latn/*.parquet""}]}, {""config_name"": ""qug-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/qug-Latn/*.parquet""}]}, {""config_name"": ""vid-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/vid-Latn/*.parquet""}]}, {""config_name"": ""did-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/did-Latn/*.parquet""}]}, {""config_name"": ""mxb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mxb-Latn/*.parquet""}]}, {""config_name"": ""mwn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mwn-Latn/*.parquet""}]}, {""config_name"": ""lis-Lisu"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lis-Lisu/*.parquet""}]}, {""config_name"": ""cha-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cha-Latn/*.parquet""}]}, {""config_name"": ""idu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/idu-Latn/*.parquet""}]}, {""config_name"": ""mqj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mqj-Latn/*.parquet""}]}, {""config_name"": ""trc-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/trc-Latn/*.parquet""}]}, {""config_name"": ""sgb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sgb-Latn/*.parquet""}]}, {""config_name"": ""xtd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xtd-Latn/*.parquet""}]}, {""config_name"": ""zgh-Tfng"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zgh-Tfng/*.parquet""}]}, {""config_name"": ""rif-Tfng"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rif-Tfng/*.parquet""}]}, {""config_name"": ""rif-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rif-Latn/*.parquet""}]}, {""config_name"": ""mnb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mnb-Latn/*.parquet""}]}, {""config_name"": ""wsg-Telu"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wsg-Telu/*.parquet""}]}, {""config_name"": ""ncx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ncx-Latn/*.parquet""}]}, 
{""config_name"": ""ccp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ccp-Latn/*.parquet""}]}, {""config_name"": ""nuy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nuy-Latn/*.parquet""}]}, {""config_name"": ""usa-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/usa-Latn/*.parquet""}]}, {""config_name"": ""wlv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wlv-Latn/*.parquet""}]}, {""config_name"": ""qvs-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/qvs-Latn/*.parquet""}]}, {""config_name"": ""mhi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mhi-Latn/*.parquet""}]}, {""config_name"": ""atd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/atd-Latn/*.parquet""}]}, {""config_name"": ""sgh-Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sgh-Cyrl/*.parquet""}]}, {""config_name"": ""tbw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tbw-Latn/*.parquet""}]}, {""config_name"": ""plw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/plw-Latn/*.parquet""}]}, {""config_name"": ""tnr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tnr-Latn/*.parquet""}]}, {""config_name"": ""bwd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bwd-Latn/*.parquet""}]}, {""config_name"": ""nhx-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nhx-Latn/*.parquet""}]}, {""config_name"": ""aer-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/aer-Latn/*.parquet""}]}, {""config_name"": ""gub-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gub-Latn/*.parquet""}]}, {""config_name"": ""quf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/quf-Latn/*.parquet""}]}, {""config_name"": ""dyi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dyi-Latn/*.parquet""}]}, {""config_name"": ""tob-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/tob-Latn/*.parquet""}]}, {""config_name"": ""kwf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kwf-Latn/*.parquet""}]}, {""config_name"": ""xtm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xtm-Latn/*.parquet""}]}, {""config_name"": ""dww-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dww-Latn/*.parquet""}]}, {""config_name"": ""mwf-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mwf-Latn/*.parquet""}]}, {""config_name"": ""kak-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kak-Latn/*.parquet""}]}, {""config_name"": ""omb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/omb-Latn/*.parquet""}]}, {""config_name"": ""mbs-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mbs-Latn/*.parquet""}]}, {""config_name"": ""mwm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mwm-Latn/*.parquet""}]}, {""config_name"": ""bpr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bpr-Latn/*.parquet""}]}, {""config_name"": ""gnw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gnw-Latn/*.parquet""}]}, {""config_name"": ""cjk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cjk-Latn/*.parquet""}]}, {""config_name"": ""nnp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nnp-Latn/*.parquet""}]}, {""config_name"": ""agw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/agw-Latn/*.parquet""}]}, {""config_name"": ""rml-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rml-Latn/*.parquet""}]}, {""config_name"": ""kkl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kkl-Latn/*.parquet""}]}, {""config_name"": ""ksr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ksr-Latn/*.parquet""}]}, {""config_name"": ""kpj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kpj-Latn/*.parquet""}]}, 
{""config_name"": ""zpu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zpu-Latn/*.parquet""}]}, {""config_name"": ""itv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/itv-Latn/*.parquet""}]}, {""config_name"": ""cbs-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cbs-Latn/*.parquet""}]}, {""config_name"": ""con-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/con-Latn/*.parquet""}]}, {""config_name"": ""eza-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/eza-Latn/*.parquet""}]}, {""config_name"": ""vap-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/vap-Latn/*.parquet""}]}, {""config_name"": ""bom-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bom-Latn/*.parquet""}]}, {""config_name"": ""biu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/biu-Latn/*.parquet""}]}, {""config_name"": ""iqw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/iqw-Latn/*.parquet""}]}, {""config_name"": ""hra-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hra-Latn/*.parquet""}]}, {""config_name"": ""nst-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nst-Latn/*.parquet""}]}, {""config_name"": ""bgz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bgz-Latn/*.parquet""}]}, {""config_name"": ""bhw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bhw-Latn/*.parquet""}]}, {""config_name"": ""tui-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tui-Latn/*.parquet""}]}, {""config_name"": ""nyk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nyk-Latn/*.parquet""}]}, {""config_name"": ""cbi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cbi-Latn/*.parquet""}]}, {""config_name"": ""mck-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mck-Latn/*.parquet""}]}, {""config_name"": ""cwe-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/cwe-Latn/*.parquet""}]}, {""config_name"": ""mzh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mzh-Latn/*.parquet""}]}, {""config_name"": ""yon-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/yon-Latn/*.parquet""}]}, {""config_name"": ""agu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/agu-Latn/*.parquet""}]}, {""config_name"": ""qvi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/qvi-Latn/*.parquet""}]}, {""config_name"": ""cnw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cnw-Latn/*.parquet""}]}, {""config_name"": ""mau-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mau-Latn/*.parquet""}]}, {""config_name"": ""gur-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/gur-Latn/*.parquet""}]}, {""config_name"": ""sbd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/sbd-Latn/*.parquet""}]}, {""config_name"": ""ikk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ikk-Latn/*.parquet""}]}, {""config_name"": ""tbz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tbz-Latn/*.parquet""}]}, {""config_name"": ""rnl-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rnl-Latn/*.parquet""}]}, {""config_name"": ""bqj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/bqj-Latn/*.parquet""}]}, {""config_name"": ""inb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/inb-Latn/*.parquet""}]}, {""config_name"": ""maw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/maw-Latn/*.parquet""}]}, {""config_name"": ""guj-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/guj-Arab/*.parquet""}]}, {""config_name"": ""hag-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/hag-Latn/*.parquet""}]}, {""config_name"": ""acn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/acn-Latn/*.parquet""}]}, 
{""config_name"": ""nph-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nph-Latn/*.parquet""}]}, {""config_name"": ""lwg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lwg-Latn/*.parquet""}]}, {""config_name"": ""kog-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kog-Latn/*.parquet""}]}, {""config_name"": ""djr-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/djr-Latn/*.parquet""}]}, {""config_name"": ""urh-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/urh-Latn/*.parquet""}]}, {""config_name"": ""cag-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/cag-Latn/*.parquet""}]}, {""config_name"": ""kcg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kcg-Latn/*.parquet""}]}, {""config_name"": ""dts-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/dts-Latn/*.parquet""}]}, {""config_name"": ""nsa-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/nsa-Latn/*.parquet""}]}, {""config_name"": ""xon-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xon-Latn/*.parquet""}]}, {""config_name"": ""lif-Limb"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lif-Limb/*.parquet""}]}, {""config_name"": ""tuk-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/tuk-Arab/*.parquet""}]}, {""config_name"": ""kmm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kmm-Latn/*.parquet""}]}, {""config_name"": ""plg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/plg-Latn/*.parquet""}]}, {""config_name"": ""kmy-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kmy-Latn/*.parquet""}]}, {""config_name"": ""rmo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rmo-Latn/*.parquet""}]}, {""config_name"": ""lcm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lcm-Latn/*.parquet""}]}, {""config_name"": ""nnb-Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""v1.0/nnb-Latn/*.parquet""}]}, {""config_name"": ""kgk-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kgk-Latn/*.parquet""}]}, {""config_name"": ""vmw-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/vmw-Latn/*.parquet""}]}, {""config_name"": ""kjs-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kjs-Latn/*.parquet""}]}, {""config_name"": ""met-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/met-Latn/*.parquet""}]}, {""config_name"": ""trn-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/trn-Latn/*.parquet""}]}, {""config_name"": ""ivv-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ivv-Latn/*.parquet""}]}, {""config_name"": ""ktz-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ktz-Latn/*.parquet""}]}, {""config_name"": ""kpo-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kpo-Latn/*.parquet""}]}, {""config_name"": ""pbb-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pbb-Latn/*.parquet""}]}, {""config_name"": ""wed-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/wed-Latn/*.parquet""}]}, {""config_name"": ""zsm-Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/zsm-Arab/*.parquet""}]}, {""config_name"": ""alq-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/alq-Latn/*.parquet""}]}, {""config_name"": ""ssg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ssg-Latn/*.parquet""}]}, {""config_name"": ""mie-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mie-Latn/*.parquet""}]}, {""config_name"": ""ddg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ddg-Latn/*.parquet""}]}, {""config_name"": ""ses-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/ses-Latn/*.parquet""}]}, {""config_name"": ""toj-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/toj-Latn/*.parquet""}]}, 
{""config_name"": ""pls-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pls-Latn/*.parquet""}]}, {""config_name"": ""kyu-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kyu-Latn/*.parquet""}]}, {""config_name"": ""otd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/otd-Latn/*.parquet""}]}, {""config_name"": ""mfi-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/mfi-Latn/*.parquet""}]}, {""config_name"": ""kmd-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kmd-Latn/*.parquet""}]}, {""config_name"": ""rap-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rap-Latn/*.parquet""}]}, {""config_name"": ""kde-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/kde-Latn/*.parquet""}]}, {""config_name"": ""any-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/any-Latn/*.parquet""}]}, {""config_name"": ""pps-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/pps-Latn/*.parquet""}]}, {""config_name"": ""rhg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/rhg-Latn/*.parquet""}]}, {""config_name"": ""lgg-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/lgg-Latn/*.parquet""}]}, {""config_name"": ""xsm-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/xsm-Latn/*.parquet""}]}, {""config_name"": ""usp-Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""v1.0/usp-Latn/*.parquet""}]}], ""multilinguality"": [""multilingual""], ""pinned"": true, ""tags"": [""multilingual""], ""language"": [""aau"", ""aaz"", ""ab"", ""abk"", ""abq"", ""abs"", ""abt"", ""abx"", ""aby"", ""ace"", ""acf"", ""ach"", ""acm"", ""acn"", ""acr"", ""acu"", ""ada"", ""adi"", ""adj"", ""ady"", ""adz"", ""aeb"", ""aer"", ""aey"", ""af"", ""afr"", ""agd"", ""agm"", ""agn"", ""agr"", ""agt"", ""agu"", ""agw"", ""aii"", ""aim"", ""ain"", ""ajp"", ""ajz"", ""aka"", ""akb"", ""akh"", ""aln"", ""alp"", ""alq"", ""als"", ""alt"", 
""aly"", ""am"", ""amf"", ""amh"", ""ami"", ""amm"", ""amu"", ""an"", ""ang"", ""ann"", ""anp"", ""any"", ""aoi"", ""aoj"", ""aom"", ""aoz"", ""apb"", ""apc"", ""ape"", ""apr"", ""apt"", ""apz"", ""ara"", ""arb"", ""arg"", ""arl"", ""arn"", ""arq"", ""ars"", ""ary"", ""arz"", ""as"", ""asm"", ""aso"", ""ast"", ""ata"", ""atb"", ""atd"", ""atj"", ""atq"", ""att"", ""aui"", ""auy"", ""av"", ""ava"", ""avk"", ""avt"", ""avu"", ""awa"", ""awx"", ""aym"", ""ayr"", ""azb"", ""aze"", ""azg"", ""azj"", ""ba"", ""bak"", ""bal"", ""bam"", ""ban"", ""bar"", ""bas"", ""bba"", ""bbb"", ""bbc"", ""bbr"", ""bcc"", ""bch"", ""bci"", ""bcl"", ""bdd"", ""bdh"", ""bdq"", ""be"", ""bef"", ""bel"", ""bem"", ""ben"", ""bew"", ""bg"", ""bgr"", ""bgs"", ""bgz"", ""bhg"", ""bhl"", ""bho"", ""bhw"", ""bi"", ""big"", ""bik"", ""bim"", ""bin"", ""bis"", ""biu"", ""bjn"", ""bjp"", ""bjr"", ""bkd"", ""bkl"", ""blk"", ""blw"", ""blz"", ""bm"", ""bmh"", ""bmu"", ""bn"", ""bnc"", ""bno"", ""bnp"", ""bo"", ""boa"", ""bod"", ""boj"", ""bom"", ""bon"", ""bos"", ""bpr"", ""bps"", ""bpy"", ""bqc"", ""bqj"", ""br"", ""bre"", ""brh"", ""bru"", ""brx"", ""bs"", ""bsp"", ""btd"", ""bts"", ""btx"", ""bua"", ""bug"", ""buk"", ""bul"", ""bum"", ""bus"", ""bvr"", ""bwd"", ""bxh"", ""bxr"", ""byr"", ""bzd"", ""bzi"", ""bzj"", ""ca"", ""caa"", ""cab"", ""cac"", ""cag"", ""cak"", ""cao"", ""car"", ""cat"", ""cax"", ""cbi"", ""cbk"", ""cbr"", ""cbs"", ""cbt"", ""cbu"", ""ccp"", ""ce"", ""ceb"", ""ces"", ""cfm"", ""cgc"", ""ch"", ""cha"", ""chd"", ""che"", ""chk"", ""chm"", ""cho"", ""chu"", ""chv"", ""chw"", ""cjk"", ""cjv"", ""ckb"", ""ckm"", ""ckt"", ""clu"", ""cmn"", ""cmo"", ""cmr"", ""cnh"", ""cni"", ""cnk"", ""cnr"", ""cnw"", ""co"", ""con"", ""cop"", ""cor"", ""cos"", ""cot"", ""cpb"", ""cpc"", ""cpy"", ""cre"", ""crh"", ""crk"", ""crl"", ""crn"", ""crs"", ""crx"", ""cs"", ""csb"", ""csw"", ""csy"", ""cta"", ""ctd"", ""ctu"", ""cu"", ""cub"", ""cuk"", ""cut"", ""cv"", ""cwe"", ""cwt"", ""cy"", ""cya"", 
""cym"", ""czt"", ""da"", ""dad"", ""dag"", ""dah"", ""dak"", ""dan"", ""dao"", ""dar"", ""ddg"", ""de"", ""ded"", ""deu"", ""dga"", ""dgc"", ""dgz"", ""dhg"", ""dhm"", ""dhv"", ""did"", ""dik"", ""din"", ""diq"", ""dis"", ""div"", ""dje"", ""djk"", ""djr"", ""dng"", ""dob"", ""doi"", ""dop"", ""dru"", ""dsb"", ""dtp"", ""dts"", ""dua"", ""dv"", ""dwr"", ""dww"", ""dyi"", ""dyo"", ""dyu"", ""dz"", ""dzo"", ""ee"", ""efi"", ""ekk"", ""eko"", ""el"", ""ell"", ""emi"", ""eml"", ""emp"", ""en"", ""eng"", ""enl"", ""enm"", ""enq"", ""eo"", ""epo"", ""eri"", ""es"", ""esk"", ""est"", ""esu"", ""etr"", ""eu"", ""eus"", ""eve"", ""ewe"", ""ewo"", ""ext"", ""eza"", ""fa"", ""faa"", ""fai"", ""fao"", ""fas"", ""fat"", ""ffm"", ""fi"", ""fij"", ""fil"", ""fin"", ""fit"", ""fj"", ""fkv"", ""fo"", ""fon"", ""for"", ""fr"", ""fra"", ""fro"", ""frp"", ""frr"", ""fry"", ""fub"", ""fud"", ""fue"", ""fuf"", ""fuh"", ""ful"", ""fur"", ""fuv"", ""fy"", ""ga"", ""gaa"", ""gag"", ""gah"", ""gai"", ""gam"", ""gaw"", ""gaz"", ""gbi"", ""gcf"", ""gcr"", ""gd"", ""gdg"", ""gdn"", ""geb"", ""gfk"", ""ghs"", ""gil"", ""gjn"", ""gl"", ""gla"", ""gle"", ""glg"", ""glk"", ""glv"", ""gmh"", ""gmv"", ""gnb"", ""gng"", ""gnn"", ""gnw"", ""goh"", ""gom"", ""gon"", ""gor"", ""gos"", ""got"", ""grc"", ""grn"", ""gsw"", ""gu"", ""gub"", ""guc"", ""gug"", ""guj"", ""gul"", ""gum"", ""gur"", ""guw"", ""gux"", ""guz"", ""gv"", ""gvn"", ""gwi"", ""gwr"", ""gym"", ""gyr"", ""ha"", ""hac"", ""hag"", ""hak"", ""hat"", ""hau"", ""haw"", ""hbo"", ""hbs"", ""hch"", ""he"", ""heb"", ""heg"", ""her"", ""hi"", ""hif"", ""hil"", ""hin"", ""hla"", ""hmn"", ""hmo"", ""hmr"", ""hne"", ""hnj"", ""hns"", ""ho"", ""hot"", ""hr"", ""hra"", ""hrv"", ""hsb"", ""ht"", ""hto"", ""hu"", ""hub"", ""hui"", ""hun"", ""hus"", ""huu"", ""hvn"", ""hwc"", ""hy"", ""hye"", ""hyw"", ""hz"", ""ia"", ""ian"", ""iba"", ""ibg"", ""ibo"", ""icr"", ""id"", ""ido"", ""idu"", ""ie"", ""ify"", ""ig"", ""ike"", ""ikk"", ""ikt"", ""iku"", ""ikw"", 
""ile"", ""ilo"", ""ina"", ""inb"", ""ind"", ""inh"", ""ino"", ""io"", ""iou"", ""ipi"", ""ipk"", ""iqw"", ""is"", ""isd"", ""ish"", ""isl"", ""iso"", ""it"", ""ita"", ""itv"", ""ium"", ""ivb"", ""ivv"", ""iws"", ""ixl"", ""ja"", ""jac"", ""jae"", ""jam"", ""jav"", ""jbo"", ""jiv"", ""jpn"", ""jra"", ""jrb"", ""jv"", ""jvn"", ""ka"", ""kaa"", ""kab"", ""kac"", ""kak"", ""kal"", ""kam"", ""kan"", ""kaq"", ""kas"", ""kat"", ""kau"", ""kaz"", ""kbc"", ""kbd"", ""kbh"", ""kbm"", ""kbo"", ""kbp"", ""kbq"", ""kby"", ""kca"", ""kcg"", ""kck"", ""kdc"", ""kde"", ""kdi"", ""kdl"", ""kea"", ""kek"", ""kew"", ""kgk"", ""kgr"", ""kha"", ""khk"", ""khm"", ""khz"", ""ki"", ""kij"", ""kik"", ""kin"", ""kir"", ""kiu"", ""kix"", ""kj"", ""kjb"", ""kjh"", ""kjs"", ""kk"", ""kkc"", ""kkl"", ""kl"", ""kln"", ""klt"", ""klv"", ""km"", ""kmb"", ""kmd"", ""kmg"", ""kmh"", ""kmk"", ""kmm"", ""kmr"", ""kms"", ""kmu"", ""kmy"", ""kn"", ""knc"", ""kne"", ""kng"", ""knv"", ""ko"", ""kog"", ""koi"", ""kok"", ""kom"", ""kon"", ""kor"", ""kos"", ""kpe"", ""kpf"", ""kpg"", ""kpj"", ""kpo"", ""kpr"", ""kpv"", ""kpw"", ""kpx"", ""kqc"", ""kqf"", ""kqn"", ""kqw"", ""krc"", ""kri"", ""krl"", ""kru"", ""ks"", ""ksd"", ""ksh"", ""ksr"", ""ksw"", ""ktm"", ""kto"", ""ktu"", ""ktz"", ""kua"", ""kum"", ""kup"", ""kur"", ""kvg"", ""kw"", ""kwf"", ""kwj"", ""kwn"", ""kwy"", ""ky"", ""kyc"", ""kyg"", ""kyq"", ""kyu"", ""kyz"", ""kze"", ""kzf"", ""kzj"", ""la"", ""lad"", ""lah"", ""laj"", ""lam"", ""lao"", ""lat"", ""lav"", ""lb"", ""lbb"", ""lbe"", ""lbj"", ""lbk"", ""lcm"", ""leu"", ""lew"", ""lex"", ""lez"", ""lfn"", ""lg"", ""lgg"", ""lgl"", ""lhi"", ""li"", ""lia"", ""lid"", ""lif"", ""lij"", ""lim"", ""lin"", ""lis"", ""lit"", ""liv"", ""ljp"", ""lki"", ""lld"", ""llg"", ""lmk"", ""lmo"", ""ln"", ""lnd"", ""lo"", ""loz"", ""lrc"", ""lsi"", ""lt"", ""ltg"", ""ltz"", ""lu"", ""lua"", ""lub"", ""lud"", ""lue"", ""lug"", ""lun"", ""luo"", ""lus"", ""luy"", ""lvs"", ""lwg"", ""lww"", ""lzh"", ""maa"", 
""mad"", ""mag"", ""mah"", ""mai"", ""mak"", ""mal"", ""mam"", ""man"", ""mar"", ""mas"", ""mau"", ""maw"", ""maz"", ""mbb"", ""mbf"", ""mbh"", ""mbs"", ""mbt"", ""mcd"", ""mck"", ""mcq"", ""mdf"", ""mdy"", ""med"", ""mej"", ""mek"", ""men"", ""met"", ""meu"", ""mfe"", ""mfi"", ""mfq"", ""mfy"", ""mge"", ""mgh"", ""mh"", ""mhi"", ""mhr"", ""mhw"", ""mhx"", ""mi"", ""mic"", ""mie"", ""mif"", ""min"", ""mip"", ""miq"", ""mjc"", ""mjw"", ""mk"", ""mkd"", ""mkn"", ""ml"", ""mlg"", ""mlh"", ""mlt"", ""mmo"", ""mmx"", ""mna"", ""mnb"", ""mni"", ""mnk"", ""mns"", ""mnw"", ""moc"", ""mog"", ""moh"", ""mon"", ""mos"", ""mph"", ""mpm"", ""mpp"", ""mps"", ""mpx"", ""mqj"", ""mqy"", ""mr"", ""mri"", ""mrj"", ""mrw"", ""msa"", ""msb"", ""msc"", ""msk"", ""msy"", ""mt"", ""mtg"", ""mti"", ""muh"", ""mui"", ""mup"", ""mur"", ""mux"", ""mva"", ""mvn"", ""mvp"", ""mwc"", ""mwf"", ""mwl"", ""mwm"", ""mwn"", ""mwp"", ""mwv"", ""mww"", ""mxb"", ""mxt"", ""mxv"", ""mxx"", ""my"", ""mya"", ""myb"", ""myk"", ""myu"", ""myv"", ""myw"", ""myy"", ""mza"", ""mzh"", ""mzn"", ""naf"", ""nah"", ""nak"", ""nan"", ""nap"", ""naq"", ""nas"", ""nav"", ""nb"", ""nbc"", ""nbe"", ""nbl"", ""nbu"", ""nca"", ""nch"", ""ncj"", ""ncx"", ""nd"", ""ndc"", ""nde"", ""ndo"", ""nds"", ""nep"", ""new"", ""nfa"", ""ng"", ""ngl"", ""ngu"", ""nhe"", ""nhg"", ""nhi"", ""nho"", ""nhr"", ""nhw"", ""nhx"", ""nia"", ""nif"", ""nii"", ""nij"", ""nin"", ""niu"", ""njm"", ""njn"", ""njz"", ""nl"", ""nld"", ""nlg"", ""nmf"", ""nmz"", ""nn"", ""nnb"", ""nnh"", ""nno"", ""nnp"", ""nnw"", ""nob"", ""nog"", ""non"", ""nop"", ""nor"", ""not"", ""nov"", ""nph"", ""npi"", ""npy"", ""nqo"", ""nr"", ""nrm"", ""nsa"", ""nsn"", ""nso"", ""nss"", ""nst"", ""nsu"", ""ntp"", ""ntu"", ""nuj"", ""nus"", ""nuy"", ""nv"", ""nvm"", ""nwi"", ""ny"", ""nya"", ""nyk"", ""nyn"", ""nyu"", ""nyy"", ""nzi"", ""nzm"", ""obo"", ""oc"", ""oci"", ""ojb"", ""oji"", ""okv"", ""olo"", ""omb"", ""omw"", ""ong"", ""opm"", ""ori"", ""orm"", ""orv"", ""ory"", 
""os"", ""oss"", ""ota"", ""otd"", ""ote"", ""ots"", ""otw"", ""pa"", ""pad"", ""pag"", ""pam"", ""pan"", ""pao"", ""pap"", ""pau"", ""pbb"", ""pbt"", ""pcd"", ""pck"", ""pcm"", ""pdc"", ""pdt"", ""pes"", ""pfl"", ""pib"", ""pis"", ""pjt"", ""pl"", ""plg"", ""pls"", ""plt"", ""plu"", ""plw"", ""pma"", ""pmf"", ""pms"", ""pmx"", ""pnb"", ""pnt"", ""poe"", ""poh"", ""poi"", ""pol"", ""pon"", ""por"", ""pot"", ""ppk"", ""ppo"", ""pps"", ""prg"", ""prs"", ""pt"", ""ptp"", ""ptu"", ""pui"", ""pus"", ""pwg"", ""pwn"", ""qub"", ""quc"", ""que"", ""quf"", ""qug"", ""quh"", ""qul"", ""qup"", ""quw"", ""quy"", ""quz"", ""qvc"", ""qvh"", ""qvi"", ""qvn"", ""qvs"", ""qvw"", ""qxl"", ""qxn"", ""qxo"", ""rad"", ""raj"", ""rap"", ""rar"", ""raw"", ""rcf"", ""rej"", ""rgu"", ""rhg"", ""ria"", ""rif"", ""rjs"", ""rkb"", ""rm"", ""rmc"", ""rml"", ""rmn"", ""rmo"", ""rmq"", ""rmy"", ""rn"", ""rnl"", ""ro"", ""roh"", ""rom"", ""ron"", ""roo"", ""rop"", ""row"", ""rro"", ""rtm"", ""ru"", ""rue"", ""ruf"", ""rug"", ""run"", ""rup"", ""rus"", ""rw"", ""rwo"", ""sa"", ""sab"", ""sag"", ""sah"", ""san"", ""sas"", ""sat"", ""sbd"", ""sbe"", ""sc"", ""scn"", ""sco"", ""sd"", ""sda"", ""sdc"", ""sdh"", ""se"", ""seh"", ""ses"", ""sg"", ""sgb"", ""sgc"", ""sgh"", ""sgs"", ""sgw"", ""shi"", ""shk"", ""shn"", ""shp"", ""shu"", ""si"", ""sid"", ""sil"", ""sim"", ""sin"", ""sk"", ""skg"", ""skr"", ""sl"", ""slk"", ""sll"", ""slv"", ""sm"", ""sma"", ""sme"", ""smj"", ""smk"", ""sml"", ""smn"", ""smo"", ""sms"", ""sn"", ""sna"", ""snc"", ""snd"", ""snf"", ""snn"", ""snp"", ""snw"", ""sny"", ""so"", ""som"", ""sop"", ""soq"", ""sot"", ""spa"", ""spl"", ""spm"", ""spp"", ""sps"", ""sqi"", ""sr"", ""srd"", ""srm"", ""srn"", ""srp"", ""srr"", ""ss"", ""ssd"", ""ssg"", ""ssw"", ""ssx"", ""st"", ""stq"", ""su"", ""suc"", ""sue"", ""suk"", ""sun"", ""sus"", ""suz"", ""sv"", ""swa"", ""swb"", ""swc"", ""swe"", ""swg"", ""swh"", ""swp"", ""sxb"", ""sxn"", ""sxw"", ""syb"", ""syc"", ""syl"", ""syr"", ""szl"", 
""szy"", ""ta"", ""tab"", ""tah"", ""taj"", ""tam"", ""tap"", ""taq"", ""tar"", ""tat"", ""taw"", ""tay"", ""tbg"", ""tbo"", ""tbw"", ""tby"", ""tbz"", ""tca"", ""tcs"", ""tcy"", ""tcz"", ""tdt"", ""tdx"", ""te"", ""tel"", ""teo"", ""tet"", ""tfr"", ""tg"", ""tgk"", ""tgo"", ""tgp"", ""th"", ""tha"", ""thl"", ""ti"", ""tif"", ""tig"", ""tih"", ""tir"", ""tiv"", ""tk"", ""tke"", ""tkl"", ""tkr"", ""tku"", ""tlb"", ""tlf"", ""tlh"", ""tll"", ""tly"", ""tmd"", ""tmh"", ""tn"", ""tnc"", ""tnk"", ""tnn"", ""tnp"", ""tnr"", ""to"", ""tob"", ""tod"", ""tog"", ""toi"", ""toj"", ""tok"", ""ton"", ""too"", ""top"", ""tos"", ""tpi"", ""tr"", ""trc"", ""trn"", ""trp"", ""trq"", ""trv"", ""ts"", ""tsg"", ""tsn"", ""tso"", ""tsz"", ""tt"", ""ttc"", ""tte"", ""ttq"", ""tuc"", ""tuf"", ""tui"", ""tuk"", ""tum"", ""tur"", ""tvk"", ""tvl"", ""tw"", ""twi"", ""twu"", ""txu"", ""ty"", ""tyv"", ""tzh"", ""tzj"", ""tzm"", ""tzo"", ""ubr"", ""ubu"", ""udm"", ""ug"", ""uig"", ""uk"", ""ukr"", ""umb"", ""und"", ""upv"", ""ur"", ""urd"", ""urh"", ""usa"", ""usp"", ""uvh"", ""uvl"", ""uzb"", ""uzn"", ""uzs"", ""vap"", ""ve"", ""vec"", ""ven"", ""vep"", ""vi"", ""vid"", ""vie"", ""vls"", ""vmw"", ""vmy"", ""vo"", ""vol"", ""vro"", ""vun"", ""wa"", ""waj"", ""wal"", ""war"", ""wat"", ""way"", ""wbm"", ""wbp"", ""wed"", ""wes"", ""wln"", ""wls"", ""wlv"", ""wlx"", ""wmw"", ""wnc"", ""wnu"", ""wo"", ""wol"", ""wos"", ""wrs"", ""wsg"", ""wsk"", ""wuu"", ""wuv"", ""xal"", ""xbi"", ""xh"", ""xho"", ""xla"", ""xmf"", ""xmm"", ""xmv"", ""xog"", ""xon"", ""xsm"", ""xsr"", ""xtd"", ""xtm"", ""xtn"", ""yaa"", ""yal"", ""yao"", ""yap"", ""yby"", ""ydd"", ""yid"", ""yka"", ""yle"", ""yli"", ""yml"", ""yo"", ""yom"", ""yon"", ""yor"", ""yrk"", ""yrl"", ""yss"", ""yua"", ""yue"", ""yuj"", ""yut"", ""yuw"", ""yva"", ""zac"", ""zai"", ""zam"", ""zao"", ""zap"", ""zas"", ""zat"", ""zdj"", ""zea"", ""zgh"", ""zha"", ""zho"", ""zia"", ""zom"", ""zos"", ""zpm"", ""zpo"", ""zpt"", ""zpu"", ""zsm"", ""zu"", 
""zul"", ""zyb"", ""zyp"", ""zza""]}","## Dataset Description
- **Repository:** [https://github.com/cisnlp/GlotCC](https://github.com/cisnlp/GlotCC)
- **Paper:** [https://arxiv.org/abs/2410.23825](https://arxiv.org/abs/2410.23825)
- **Point of Contact:** [amir@cis.lmu.de](mailto:amir@cis.lmu.de)
### Dataset Summary
**GlotCC-V1.0** is a document-level, general domain dataset derived from CommonCrawl, covering more than **1000** languages.
It is built from CommonCrawl using the [GlotLID](https://github.com/cisnlp/GlotLID) language identification model and the [Ungoliant](https://github.com/kargaranamir/ungoliant) pipeline.
We release our pipeline as open-source at [https://github.com/cisnlp/GlotCC](https://github.com/cisnlp/GlotCC).
**List of Languages:** See [https://datasets-server.huggingface.co/splits?dataset=cis-lmu/GlotCC-V1](https://datasets-server.huggingface.co/splits?dataset=cis-lmu/GlotCC-V1) for the list of available splits.
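The same list can also be pulled programmatically from the datasets-server API. A minimal sketch, assuming the standard `/splits` endpoint linked above (the helper name `splits_url` is ours, and the actual network request is left commented so the snippet works offline):

```python
import urllib.parse

# Hypothetical helper: build the datasets-server query URL for this dataset.
def splits_url(dataset='cis-lmu/GlotCC-V1'):
    query = urllib.parse.urlencode({'dataset': dataset})
    return f'https://datasets-server.huggingface.co/splits?{query}'

print(splits_url())
# → https://datasets-server.huggingface.co/splits?dataset=cis-lmu%2FGlotCC-V1

# To actually fetch the split names (network required; response shape is an
# assumption based on the datasets-server API):
# import json, urllib.request
# with urllib.request.urlopen(splits_url()) as r:
#     configs = sorted({s['config'] for s in json.load(r)['splits']})
```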
### Usage (Huggingface Hub -- Recommended)
Replace `bal-Arab` with your specific language.
```python
from huggingface_hub import snapshot_download
folder = snapshot_download(
    ""cis-lmu/glotcc-v1"",
    repo_type=""dataset"",
    local_dir=""./path/to/glotcc-v1/"",
    # Replace ""v1.0/bal-Arab/*"" with the path for any other language available in the dataset
    allow_patterns=""v1.0/bal-Arab/*""
)
```
For faster downloads, make sure to `pip install huggingface_hub[hf_transfer]` and set the environment variable `HF_HUB_ENABLE_HF_TRANSFER=1`.
Then you can load it with any library that supports Parquet files, such as Pandas:
```python
import pandas as pd
# Load the dataset from a Parquet file
# Replace the file path with the path to the desired language's Parquet file
dataset = pd.read_parquet('./path/to/glotcc-v1/v1.0/bal-Arab/bal-Arab_0.parquet')
# Print the first 5 rows of the dataset
print(dataset.head())
```
### Usage (Huggingface datasets)
```python
from datasets import load_dataset
# Replace ""bal-Arab"" with the name of any other language available in the dataset
dataset = load_dataset(""cis-lmu/glotcc-v1"", name=""bal-Arab"", split=""train"")
# Print the first row of data
print(dataset[0])
```
### Usage (Huggingface datasets -- streaming=True)
```python
from datasets import load_dataset
# Replace ""bal-Arab"" with the name of any other language available in the dataset
fw = load_dataset(""cis-lmu/glotcc-v1"", name=""bal-Arab"", split=""train"", streaming=True)
# Create an iterator from the streaming dataset
iterator = iter(fw)
# Print the next item from the iterator
print(next(iterator))
```
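With streaming, you usually want only the first few records rather than a full pass over the split; `itertools.islice` handles this without downloading everything. A small self-contained sketch (the stand-in generator below replaces the real streaming dataset, so the snippet runs offline):

```python
from itertools import islice

def take(stream, n):
    # Pull at most n examples from a (possibly very large) iterable.
    return list(islice(stream, n))

# Stand-in for the streaming dataset from the snippet above.
demo = ({'id': i} for i in range(1000))
print(take(demo, 3))  # → [{'id': 0}, {'id': 1}, {'id': 2}]
```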
### Usage (direct download)
If you prefer not to use the Hugging Face `datasets` library or the Hub client, you can download files directly. For example, to download the first file of `bal-Arab`:
```bash
wget https://huggingface.co/datasets/cis-lmu/GlotCC-V1/resolve/main/v1.0/bal-Arab/bal-Arab_0.parquet
```
## Additional Information
The dataset is currently under active audit and may change accordingly.
### Licensing Information
GlotCC data is released under the following licensing scheme: We do not own any of the text from which this data has been extracted. The data is licensed under the terms of the CommonCrawl [Terms of Use](https://commoncrawl.org/terms-of-use). We license the actual packaging, metadata, and annotations of this data under the Creative Commons [CC0 license](https://github.com/cisnlp/GlotCC/blob/main/LICENSE).
### Citation Information
If you find our data useful for your research, please cite:
```
@article{kargaran2024glotcc,
title = {Glot{CC}: An Open Broad-Coverage CommonCrawl Corpus and Pipeline for Minority Languages},
author = {Kargaran, Amir Hossein and Yvon, Fran{\c{c}}ois and Sch{\""u}tze, Hinrich},
journal = {Advances in Neural Information Processing Systems},
year = {2024},
url = {https://arxiv.org/abs/2410.23825}
}
```"
simon3000/genshin-voice,"{""language"": [""zh"", ""en"", ""ja"", ""ko""], ""task_categories"": [""audio-classification"", ""automatic-speech-recognition"", ""text-to-speech""], ""pretty_name"": ""Genshin Voice"", ""dataset_info"": {""features"": [{""name"": ""audio"", ""dtype"": ""audio""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""language"", ""dtype"": ""string""}, {""name"": ""speaker"", ""dtype"": ""string""}, {""name"": ""speaker_type"", ""dtype"": ""string""}, {""name"": ""type"", ""dtype"": ""string""}, {""name"": ""inGameFilename"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 264598217401.752, ""num_examples"": 463383}], ""download_size"": 227704444125, ""dataset_size"": 264598217401.752}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}]}","# Genshin Voice
Genshin Voice is a dataset of voice lines from the popular game [Genshin Impact](https://genshin.hoyoverse.com/).
Hugging Face 🤗 [Genshin-Voice](https://huggingface.co/datasets/simon3000/genshin-voice)
Last update at `2024-08-30`
`463383` wavs
`20231` without speaker (4%)
`24819` without transcription (5%)
`602` without inGameFilename (0%)
## Dataset Details
### Dataset Description
The dataset contains voice lines from the game's characters in multiple languages, including Chinese, English, Japanese, and Korean.
The voice lines are spoken by the characters in the game and cover a wide range of topics, including greetings, combat, and story dialogue.
- **Language(s) (NLP):** Chinese, English, Japanese, Korean
## Uses
To install Hugging Face's datasets library, follow the instructions from [this link](https://huggingface.co/docs/datasets/installation#audio).
### Example: Load the dataset and filter for Chinese voices of Ganyu with transcriptions
```python
from datasets import load_dataset
import soundfile as sf
import os
# Load the dataset
dataset = load_dataset('simon3000/genshin-voice', split='train', streaming=True)
# Filter the dataset for Chinese voices of Ganyu with transcriptions
chinese_ganyu = dataset.filter(lambda voice: voice['language'] == 'Chinese' and voice['speaker'] == 'Ganyu' and voice['transcription'] != '')
# Create a folder to store the audio and transcription files
ganyu_folder = 'ganyu'
os.makedirs(ganyu_folder, exist_ok=True)
# Process each voice in the filtered dataset
for i, voice in enumerate(chinese_ganyu):
    audio_path = os.path.join(ganyu_folder, f'{i}_audio.wav')  # Path to save the audio file
    transcription_path = os.path.join(ganyu_folder, f'{i}_transcription.txt')  # Path to save the transcription file
    # Save the audio file
    sf.write(audio_path, voice['audio']['array'], voice['audio']['sampling_rate'])
    # Save the transcription file
    with open(transcription_path, 'w') as transcription_file:
        transcription_file.write(voice['transcription'])
    print(f'{i} done')  # Print the progress
```
### You unpacked the game and just want to know what the wavs are about
result.json format: (subject to change)
```json
{
  ""9b5502fb1b83cb97.wav"": {
    ""inGameFilename"": ""VO_friendship\\VO_raidenShogun\\vo_raidenEi_dialog_pendant.wem"",
    ""filename"": ""9b5502fb1b83cb97.wav"",
    ""language"": ""English(US)"",
    ""transcription"": ""Really? So in all this time, no new Electro Visions have appeared in the outside world? Well, what I can say on this topic is subject to certain constraints, but... it is not by my will that Visions are granted or denied. The key is people's desire, and... well, there's another side to it too."",
    ""speaker"": ""Raiden Shogun"",
    ""talkRoleType"": """",
    ""talkRoleID"": """",
    ""guid"": ""f8e72b65-6c0a-4df1-a2f0-2bb08dbeab75"",
    ""voiceConfigs"": [
      {
        ""gameTrigger"": ""Fetter"",
        ""gameTriggerArgs"": 3001,
        ""avatarName"": ""Switch_raidenShogun""
      }
    ]
  }
}
```
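Since this format maps wav filenames to metadata, it is straightforward to group the mapping by speaker. A minimal sketch with one entry inlined (in practice you would `json.load` the real result.json; the field names follow the example above, and the format is noted as subject to change):

```python
# Inlined stand-in for the parsed result.json shown above.
result = {
    '9b5502fb1b83cb97.wav': {
        'speaker': 'Raiden Shogun',
        'language': 'English(US)',
    },
}

# Group wav filenames by speaker name.
by_speaker = {}
for wav, meta in result.items():
    by_speaker.setdefault(meta.get('speaker', ''), []).append(wav)

print(by_speaker['Raiden Shogun'])  # → ['9b5502fb1b83cb97.wav']
```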
## Dataset Creation
### Source Data
The data was obtained by unpacking the [Genshin Impact](https://genshin.hoyoverse.com/) game.
#### Data Collection and Processing
Please refer to [Genshin-Voice](https://github.com/simon300000/genshin-voice) and [w4123/GenshinVoice](https://github.com/w4123/GenshinVoice) for more information on how the data was processed.
#### Who are the source data producers?
The source data producers are the developers of the game, miHoYo.
### Annotations
The dataset contains official annotations from the game, including language, speaker name, and transcription.
## Bias, Risks, and Limitations
Annotations are incomplete. Some voice lines are missing speaker names and transcriptions.
Speaker names and transcriptions may contain markup and placeholders, for example: `#パイモン:{NICKNAME}、すごく怖い悪夢を見たことってあるか?\\n{NICKNAME}:...`
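A minimal sketch of normalizing such lines. The `{NICKNAME}` placeholder, the leading `#` marker, and the literal backslash-n turn separator are assumptions taken from the example above; the helper name and the default nickname are ours:

```python
# Sketch: normalize a transcription carrying markup and placeholders.
def clean_transcription(text, nickname='Traveler'):
    text = text.replace('{NICKNAME}', nickname)  # fill the player-name placeholder
    text = text.lstrip('#')                      # drop the leading markup marker
    # a literal backslash-n sequence separates dialogue turns (assumption)
    return [turn for turn in text.split('\\n') if turn]

sample = '#パイモン:{NICKNAME}、すごく怖い悪夢を見たことってあるか?\\n{NICKNAME}:...'
for turn in clean_transcription(sample):
    print(turn)
```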
### Recommendations
Users should be made aware of the risks, biases, and limitations of the dataset.
Speaker names can be partially inferred from the in-game filenames.
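As a sketch of that inference: the `inGameFilename` paths in the example above follow a `VO_<category>\VO_<speaker>\<cue>.wem` shape, so the parent folder usually names the speaker. The helper name is ours, and the pattern is an observation from the example, not an official schema:

```python
def speaker_hint(in_game_filename):
    # Split a path like 'VO_friendship\VO_raidenShogun\vo_..._pendant.wem'
    # on backslashes and strip the 'VO_' prefix from the parent folder.
    parts = in_game_filename.split('\\')
    if len(parts) >= 2 and parts[-2].startswith('VO_'):
        return parts[-2][len('VO_'):]
    return None  # no speaker folder to infer from

print(speaker_hint('VO_friendship\\VO_raidenShogun\\vo_raidenEi_dialog_pendant.wem'))
# → raidenShogun
```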
## Licensing Information
Copyright © COGNOSPHERE. All Rights Reserved.
## More Information
I can upload wav files on demand."
WueNLP/sib-fleurs,"{""license"": ""cc-by-sa-4.0"", ""language"": [""ace"", ""acm"", ""acq"", ""aeb"", ""af"", ""ajp"", ""ak"", ""als"", ""am"", ""apc"", ""ar"", ""ars"", ""ary"", ""arz"", ""as"", ""ast"", ""awa"", ""ayr"", ""azb"", ""azj"", ""ba"", ""bm"", ""ban"", ""be"", ""bem"", ""bn"", ""bho"", ""bjn"", ""bo"", ""bs"", ""bug"", ""bg"", ""ca"", ""ceb"", ""cs"", ""cjk"", ""ckb"", ""crh"", ""cy"", ""da"", ""de"", ""dik"", ""dyu"", ""dz"", ""el"", ""en"", ""eo"", ""et"", ""eu"", ""ee"", ""fo"", ""fj"", ""fi"", ""fon"", ""fr"", ""fur"", ""fuv"", ""gaz"", ""gd"", ""ga"", ""gl"", ""gn"", ""gu"", ""ht"", ""ha"", ""he"", ""hi"", ""hne"", ""hr"", ""hu"", ""hy"", ""ig"", ""ilo"", ""id"", ""is"", ""it"", ""jv"", ""ja"", ""kab"", ""kac"", ""kam"", ""kn"", ""ks"", ""ka"", ""kk"", ""kbp"", ""kea"", ""khk"", ""km"", ""ki"", ""rw"", ""ky"", ""kmb"", ""kmr"", ""knc"", ""kg"", ""ko"", ""lo"", ""lij"", ""li"", ""ln"", ""lt"", ""lmo"", ""ltg"", ""lb"", ""lua"", ""lg"", ""luo"", ""lus"", ""lvs"", ""mag"", ""mai"", ""ml"", ""mar"", ""min"", ""mk"", ""mt"", ""mni"", ""mos"", ""mi"", ""my"", ""nl"", ""nn"", ""nb"", ""npi"", ""nqo"", ""nso"", ""nus"", ""ny"", ""oc"", ""ory"", ""pag"", ""pa"", ""pap"", ""pbt"", ""pes"", ""plt"", ""pl"", ""pt"", ""prs"", ""quy"", ""ro"", ""rn"", ""ru"", ""sg"", ""sa"", ""sat"", ""scn"", ""shn"", ""si"", ""sk"", ""sl"", ""sm"", ""sn"", ""sd"", ""so"", ""st"", ""es"", ""sc"", ""sr"", ""ss"", ""su"", ""sv"", ""swh"", ""szl"", ""ta"", ""taq"", ""tt"", ""te"", ""tg"", ""tl"", ""th"", ""ti"", ""tpi"", ""tn"", ""ts"", ""tk"", ""tum"", ""tr"", ""tw"", ""tzm"", ""ug"", ""uk"", ""umb"", ""ur"", ""uzn"", ""vec"", ""vi"", ""war"", ""wo"", ""xh"", ""ydd"", ""yo"", ""yue"", ""zh"", ""zsm"", ""zu"", ""multilingual""], ""annotations_creators"": [""found""], ""language_creators"": [""expert-generated""], ""multilinguality"": [""multilingual""], ""task_categories"": [""audio-classification"", ""automatic-speech-recognition"", ""audio-text-to-text"", ""text-to-speech"", 
""question-answering"", ""document-question-answering""], ""pretty_name"": ""SIB-Fleurs"", ""dataset_info"": [{""config_name"": ""afr_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 524232877.0, ""num_examples"": 406}, {""name"": ""validation"", ""num_bytes"": 76384271.0, ""num_examples"": 86}, {""name"": ""test"", ""num_bytes"": 84400076.0, ""num_examples"": 95}], 
""download_size"": 673661100, ""dataset_size"": 685017224.0}, {""config_name"": ""amh_Ethi"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1289823377.0, ""num_examples"": 752}, {""name"": ""validation"", ""num_bytes"": 65389982.0, ""num_examples"": 54}, {""name"": ""test"", ""num_bytes"": 185857834.0, ""num_examples"": 149}], ""download_size"": 1525564166, ""dataset_size"": 
1541071193.0}, {""config_name"": ""arb_Arab"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 646819902.0, ""num_examples"": 579}, {""name"": ""validation"", ""num_bytes"": 95091075.0, ""num_examples"": 64}, {""name"": ""test"", ""num_bytes"": 144786307.0, ""num_examples"": 133}], ""download_size"": 878867591, ""dataset_size"": 886697284.0}, {""config_name"": ""asm_Beng"", 
""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1235366957.0, ""num_examples"": 730}, {""name"": ""validation"", ""num_bytes"": 158536549.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 400145792.0, ""num_examples"": 176}], ""download_size"": 1782426273, ""dataset_size"": 1794049298.0}, {""config_name"": ""ast_Latn"", ""features"": [{""name"": ""sentence"", 
""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 866679990.0, ""num_examples"": 701}, {""name"": ""validation"", ""num_bytes"": 102384453.0, ""num_examples"": 69}, {""name"": ""test"", ""num_bytes"": 282753773.0, ""num_examples"": 177}], ""download_size"": 1245085728, ""dataset_size"": 1251818216.0}, {""config_name"": ""azj_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", 
""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1090899299.0, ""num_examples"": 712}, {""name"": ""validation"", ""num_bytes"": 147617247.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 379234055.0, ""num_examples"": 174}], ""download_size"": 1602247163, ""dataset_size"": 1617750601.0}, {""config_name"": ""bel_Cyrl"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", 
""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1105817781.0, ""num_examples"": 690}, {""name"": ""validation"", ""num_bytes"": 186825266.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 486320479.0, ""num_examples"": 177}], ""download_size"": 1753989008, ""dataset_size"": 1778963526.0}, {""config_name"": ""ben_Beng"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", 
""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1232070743.0, ""num_examples"": 742}, {""name"": ""validation"", ""num_bytes"": 157285034.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 397951833.0, ""num_examples"": 176}], ""download_size"": 1782546384, ""dataset_size"": 1787307610.0}, {""config_name"": ""bos_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", 
""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1173791520.0, ""num_examples"": 746}, {""name"": ""validation"", ""num_bytes"": 149405247.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 369790849.0, ""num_examples"": 177}], ""download_size"": 1654694782, ""dataset_size"": 1692987616.0}, {""config_name"": ""bul_Cyrl"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": 
""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1101248058.0, ""num_examples"": 749}, {""name"": ""validation"", ""num_bytes"": 117353674.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 221557279.0, ""num_examples"": 176}], ""download_size"": 1421883953, ""dataset_size"": 1440159011.0}, {""config_name"": ""cat_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": 
""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 863830240.0, ""num_examples"": 683}, {""name"": ""validation"", ""num_bytes"": 147554660.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 353869370.0, ""num_examples"": 177}], ""download_size"": 1340643723, ""dataset_size"": 1365254270.0}, {""config_name"": ""ceb_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, 
{""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1398384311.0, ""num_examples"": 741}, {""name"": ""validation"", ""num_bytes"": 95970795.0, ""num_examples"": 61}, {""name"": ""test"", ""num_bytes"": 240259442.0, ""num_examples"": 149}], ""download_size"": 1718325671, ""dataset_size"": 1734614548.0}, {""config_name"": ""ces_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, 
{""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 970924211.0, ""num_examples"": 732}, {""name"": ""validation"", ""num_bytes"": 112601348.0, ""num_examples"": 68}, {""name"": ""test"", ""num_bytes"": 277229156.0, ""num_examples"": 172}], ""download_size"": 1333906872, ""dataset_size"": 1360754715.0}, {""config_name"": ""ckb_Arab"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": 
""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1178357835.0, ""num_examples"": 738}, {""name"": ""validation"", ""num_bytes"": 134860481.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 342458168.0, ""num_examples"": 176}], ""download_size"": 1613748924, ""dataset_size"": 1655676484.0}, {""config_name"": ""cym_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", 
""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1385174116.0, ""num_examples"": 739}, {""name"": ""validation"", ""num_bytes"": 200018352.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 486565088.0, ""num_examples"": 177}], ""download_size"": 2038201423, ""dataset_size"": 2071757556.0}, {""config_name"": ""dan_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": 
""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 877728248.0, ""num_examples"": 696}, {""name"": ""validation"", ""num_bytes"": 130348707.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 340140011.0, ""num_examples"": 177}], ""download_size"": 1319500991, ""dataset_size"": 1348216966.0}, {""config_name"": ""deu_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, 
{""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1059347230.0, ""num_examples"": 736}, {""name"": ""validation"", ""num_bytes"": 136254869.0, ""num_examples"": 69}, {""name"": ""test"", ""num_bytes"": 364325435.0, ""num_examples"": 175}], ""download_size"": 1542935687, ""dataset_size"": 1559927534.0}, {""config_name"": ""ell_Grek"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": 
""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1169505435.0, ""num_examples"": 750}, {""name"": ""validation"", ""num_bytes"": 86533682.0, ""num_examples"": 67}, {""name"": ""test"", ""num_bytes"": 228840869.0, ""num_examples"": 168}], ""download_size"": 1470419073, ""dataset_size"": 1484879986.0}, {""config_name"": ""eng_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", 
""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 865407552.0, ""num_examples"": 738}, {""name"": ""validation"", ""num_bytes"": 113902786.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 197416856.0, ""num_examples"": 177}], ""download_size"": 1168283579, ""dataset_size"": 1176727194.0}, {""config_name"": ""est_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", 
""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 841849798.0, ""num_examples"": 700}, {""name"": ""validation"", ""num_bytes"": 136854050.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 352690669.0, ""num_examples"": 176}], ""download_size"": 1311922527, ""dataset_size"": 1331394517.0}, {""config_name"": ""fin_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": 
""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1013025006.0, ""num_examples"": 735}, {""name"": ""validation"", ""num_bytes"": 154629039.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 377490223.0, ""num_examples"": 175}], ""download_size"": 1514479202, ""dataset_size"": 1545144268.0}, {""config_name"": ""fra_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, 
{""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1200484942.0, ""num_examples"": 753}, {""name"": ""validation"", ""num_bytes"": 89645406.0, ""num_examples"": 65}, {""name"": ""test"", ""num_bytes"": 219759551.0, ""num_examples"": 164}], ""download_size"": 1473280670, ""dataset_size"": 1509889899.0}, {""config_name"": ""fuv_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": 
""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1625942463.0, ""num_examples"": 752}, {""name"": ""validation"", ""num_bytes"": 110111916.0, ""num_examples"": 68}, {""name"": ""test"", ""num_bytes"": 305080005.0, ""num_examples"": 166}], ""download_size"": 2031410049, ""dataset_size"": 2041134384.0}, {""config_name"": ""gaz_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", 
""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 766698042.0, ""num_examples"": 574}, {""name"": ""validation"", ""num_bytes"": 4996373.0, ""num_examples"": 6}, {""name"": ""test"", ""num_bytes"": 12726015.0, ""num_examples"": 17}], ""download_size"": 778314621, ""dataset_size"": 784420430.0}, {""config_name"": ""gle_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": 
""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1417633991.0, ""num_examples"": 731}, {""name"": ""validation"", ""num_bytes"": 168821631.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 409952057.0, ""num_examples"": 176}], ""download_size"": 1963143042, ""dataset_size"": 1996407679.0}, {""config_name"": ""glg_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": 
""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 776866102.0, ""num_examples"": 660}, {""name"": ""validation"", ""num_bytes"": 115361837.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 308185507.0, ""num_examples"": 174}], ""download_size"": 1195253363, ""dataset_size"": 1200413446.0}, {""config_name"": ""guj_Gujr"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", 
""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1049403304.0, ""num_examples"": 752}, {""name"": ""validation"", ""num_bytes"": 130727757.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 341271185.0, ""num_examples"": 177}], ""download_size"": 1519511715, ""dataset_size"": 1521402246.0}, {""config_name"": ""hau_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": 
""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1579344700.0, ""num_examples"": 753}, {""name"": ""validation"", ""num_bytes"": 175883597.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 374260636.0, ""num_examples"": 166}], ""download_size"": 2128392442, ""dataset_size"": 2129488933.0}, {""config_name"": ""heb_Hebr"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": 
""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1094855635.0, ""num_examples"": 754}, {""name"": ""validation"", ""num_bytes"": 91724842.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 238749489.0, ""num_examples"": 175}], ""download_size"": 1420931124, ""dataset_size"": 1425329966.0}, {""config_name"": ""hin_Deva"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", 
""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 784008338.0, ""num_examples"": 653}, {""name"": ""validation"", ""num_bytes"": 73025370.0, ""num_examples"": 60}, {""name"": ""test"", ""num_bytes"": 148402410.0, ""num_examples"": 132}], ""download_size"": 999448112, ""dataset_size"": 1005436118.0}, {""config_name"": ""hrv_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": 
""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1374835242.0, ""num_examples"": 756}, {""name"": ""validation"", ""num_bytes"": 116395175.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 300472197.0, ""num_examples"": 176}], ""download_size"": 1739639653, ""dataset_size"": 1791702614.0}, {""config_name"": ""hun_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": 
""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1070647343.0, ""num_examples"": 750}, {""name"": ""validation"", ""num_bytes"": 146140834.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 359201948.0, ""num_examples"": 177}], ""download_size"": 1560605445, ""dataset_size"": 1575990125.0}, {""config_name"": ""hye_Armn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": 
""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1199310259.0, ""num_examples"": 741}, {""name"": ""validation"", ""num_bytes"": 133092440.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 348410386.0, ""num_examples"": 177}], ""download_size"": 1641173951, ""dataset_size"": 1680813085.0}, {""config_name"": ""ibo_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": 
{""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1627430548.0, ""num_examples"": 737}, {""name"": ""validation"", ""num_bytes"": 215297933.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 554277405.0, ""num_examples"": 177}], ""download_size"": 2327164690, ""dataset_size"": 2397005886.0}, {""config_name"": ""ind_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": 
""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1070832404.0, ""num_examples"": 728}, {""name"": ""validation"", ""num_bytes"": 114893806.0, ""num_examples"": 69}, {""name"": ""test"", ""num_bytes"": 278118946.0, ""num_examples"": 167}], ""download_size"": 1457872159, ""dataset_size"": 1463845156.0}, {""config_name"": ""isl_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", 
""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 330951934.0, ""num_examples"": 381}, {""name"": ""validation"", ""num_bytes"": 14249666.0, ""num_examples"": 18}, {""name"": ""test"", ""num_bytes"": 20416835.0, ""num_examples"": 23}], ""download_size"": 363202271, ""dataset_size"": 365618435.0}, {""config_name"": ""ita_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": 
""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1038872172.0, ""num_examples"": 743}, {""name"": ""validation"", ""num_bytes"": 171701144.0, ""num_examples"": 69}, {""name"": ""test"", ""num_bytes"": 409588210.0, ""num_examples"": 175}], ""download_size"": 1597494540, ""dataset_size"": 1620161526.0}, {""config_name"": ""jav_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": 
""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1297339053.0, ""num_examples"": 740}, {""name"": ""validation"", ""num_bytes"": 119962974.0, ""num_examples"": 67}, {""name"": ""test"", ""num_bytes"": 326612734.0, ""num_examples"": 171}], ""download_size"": 1737637397, ""dataset_size"": 1743914761.0}, {""config_name"": ""jpn_Jpan"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", 
""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 868173246.0, ""num_examples"": 662}, {""name"": ""validation"", ""num_bytes"": 106866406.0, ""num_examples"": 62}, {""name"": ""test"", ""num_bytes"": 279227775.0, ""num_examples"": 164}], ""download_size"": 1239767618, ""dataset_size"": 1254267427.0}, {""config_name"": ""kam_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", 
""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1719637560.0, ""num_examples"": 752}, {""name"": ""validation"", ""num_bytes"": 164496223.0, ""num_examples"": 69}, {""name"": ""test"", ""num_bytes"": 459710265.0, ""num_examples"": 179}], ""download_size"": 2328603553, ""dataset_size"": 2343844048.0}, {""config_name"": ""kan_Knda"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 
16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 964875026.0, ""num_examples"": 660}, {""name"": ""validation"", ""num_bytes"": 145728311.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 362445901.0, ""num_examples"": 174}], ""download_size"": 1458922305, ""dataset_size"": 1473049238.0}, {""config_name"": ""kat_Geor"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", 
""num_bytes"": 602155489.0, ""num_examples"": 557}, {""name"": ""validation"", ""num_bytes"": 130581034.0, ""num_examples"": 69}, {""name"": ""test"", ""num_bytes"": 359417267.0, ""num_examples"": 177}], ""download_size"": 1079955726, ""dataset_size"": 1092153790.0}, {""config_name"": ""kaz_Cyrl"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1371437438.0, ""num_examples"": 
749}, {""name"": ""validation"", ""num_bytes"": 175322718.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 446125883.0, ""num_examples"": 176}], ""download_size"": 1943326254, ""dataset_size"": 1992886039.0}, {""config_name"": ""kea_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1233357045.0, ""num_examples"": 725}, {""name"": ""validation"", ""num_bytes"": 
143947103.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 373071809.0, ""num_examples"": 175}], ""download_size"": 1738909295, ""dataset_size"": 1750375957.0}, {""config_name"": ""khk_Cyrl"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1140695534.0, ""num_examples"": 743}, {""name"": ""validation"", ""num_bytes"": 128375508.0, ""num_examples"": 71}, {""name"": 
""test"", ""num_bytes"": 336105200.0, ""num_examples"": 177}], ""download_size"": 1560413545, ""dataset_size"": 1605176242.0}, {""config_name"": ""khm_Khmr"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 854736195.0, ""num_examples"": 588}, {""name"": ""validation"", ""num_bytes"": 150787699.0, ""num_examples"": 69}, {""name"": ""test"", ""num_bytes"": 358796063.0, 
""num_examples"": 168}], ""download_size"": 1336834917, ""dataset_size"": 1364319957.0}, {""config_name"": ""kir_Cyrl"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1085673831.0, ""num_examples"": 729}, {""name"": ""validation"", ""num_bytes"": 145080284.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 383793918.0, ""num_examples"": 177}], ""download_size"": 
1580489766, ""dataset_size"": 1614548033.0}, {""config_name"": ""kor_Hang"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 926056164.0, ""num_examples"": 669}, {""name"": ""validation"", ""num_bytes"": 92920021.0, ""num_examples"": 61}, {""name"": ""test"", ""num_bytes"": 163632188.0, ""num_examples"": 141}], ""download_size"": 1162697408, ""dataset_size"": 1182608373.0}, 
{""config_name"": ""lao_Laoo"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 848523584.0, ""num_examples"": 591}, {""name"": ""validation"", ""num_bytes"": 59517451.0, ""num_examples"": 54}, {""name"": ""test"", ""num_bytes"": 168578213.0, ""num_examples"": 132}], ""download_size"": 1075447131, ""dataset_size"": 1076619248.0}, {""config_name"": ""lin_Latn"", ""features"": 
[{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2110793488.0, ""num_examples"": 755}, {""name"": ""validation"", ""num_bytes"": 114332315.0, ""num_examples"": 59}, {""name"": ""test"", ""num_bytes"": 291255234.0, ""num_examples"": 139}], ""download_size"": 2505804321, ""dataset_size"": 2516381037.0}, {""config_name"": ""lit_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": 
""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1140098095.0, ""num_examples"": 730}, {""name"": ""validation"", ""num_bytes"": 134247036.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 346763275.0, ""num_examples"": 178}], ""download_size"": 1580279654, ""dataset_size"": 1621108406.0}, {""config_name"": ""ltz_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": 
""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 983863587.0, ""num_examples"": 703}, {""name"": ""validation"", ""num_bytes"": 122673699.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 321154716.0, ""num_examples"": 176}], ""download_size"": 1380540072, ""dataset_size"": 1427692002.0}, {""config_name"": ""lug_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": 
""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1439075546.0, ""num_examples"": 691}, {""name"": ""validation"", ""num_bytes"": 153280122.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 409606012.0, ""num_examples"": 173}], ""download_size"": 1972461167, ""dataset_size"": 2001961680.0}, {""config_name"": ""luo_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": 
""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1164176310.0, ""num_examples"": 698}, {""name"": ""validation"", ""num_bytes"": 42851382.0, ""num_examples"": 39}, {""name"": ""test"", ""num_bytes"": 118611386.0, ""num_examples"": 98}], ""download_size"": 1282391858, ""dataset_size"": 1325639078.0}, {""config_name"": ""lvs_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": 
""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 763295296.0, ""num_examples"": 634}, {""name"": ""validation"", ""num_bytes"": 119412393.0, ""num_examples"": 69}, {""name"": ""test"", ""num_bytes"": 321894301.0, ""num_examples"": 174}], ""download_size"": 1178718753, ""dataset_size"": 1204601990.0}, {""config_name"": ""mal_Mlym"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": 
""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1160280548.0, ""num_examples"": 723}, {""name"": ""validation"", ""num_bytes"": 180187126.0, ""num_examples"": 68}, {""name"": ""test"", ""num_bytes"": 453064428.0, ""num_examples"": 174}], ""download_size"": 1782291408, ""dataset_size"": 1793532102.0}, {""config_name"": ""mar_Deva"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", 
""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1393470704.0, ""num_examples"": 749}, {""name"": ""validation"", ""num_bytes"": 174902664.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 454714148.0, ""num_examples"": 177}], ""download_size"": 2008870076, ""dataset_size"": 2023087516.0}, {""config_name"": ""mkd_Cyrl"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", 
""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 792228116.0, ""num_examples"": 680}, {""name"": ""validation"", ""num_bytes"": 143667110.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 370880347.0, ""num_examples"": 177}], ""download_size"": 1293866922, ""dataset_size"": 1306775573.0}, {""config_name"": ""mlt_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", 
""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1148930224.0, ""num_examples"": 731}, {""name"": ""validation"", ""num_bytes"": 164475812.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 415254022.0, ""num_examples"": 176}], ""download_size"": 1702013186, ""dataset_size"": 1728660058.0}, {""config_name"": ""mri_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": 
""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2110842467.0, ""num_examples"": 749}, {""name"": ""validation"", ""num_bytes"": 251256822.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 674673277.0, ""num_examples"": 176}], ""download_size"": 3021722547, ""dataset_size"": 3036772566.0}, {""config_name"": ""mya_Mymr"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, 
{""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1401863756.0, ""num_examples"": 746}, {""name"": ""validation"", ""num_bytes"": 187724129.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 447605905.0, ""num_examples"": 175}], ""download_size"": 1976133845, ""dataset_size"": 2037193790.0}, {""config_name"": ""nld_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": 
""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 881930671.0, ""num_examples"": 729}, {""name"": ""validation"", ""num_bytes"": 54214254.0, ""num_examples"": 58}, {""name"": ""test"", ""num_bytes"": 110418567.0, ""num_examples"": 123}], ""download_size"": 1039307907, ""dataset_size"": 1046563492.0}, {""config_name"": ""nob_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", 
""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1020875353.0, ""num_examples"": 723}, {""name"": ""validation"", ""num_bytes"": 64869416.0, ""num_examples"": 51}, {""name"": ""test"", ""num_bytes"": 149463914.0, ""num_examples"": 127}], ""download_size"": 1224624229, ""dataset_size"": 1235208683.0}, {""config_name"": ""npi_Deva"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": 
""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1304469469.0, ""num_examples"": 754}, {""name"": ""validation"", ""num_bytes"": 98605223.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 263321688.0, ""num_examples"": 175}], ""download_size"": 1645679853, ""dataset_size"": 1666396380.0}, {""config_name"": ""nso_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, 
{""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1531120766.0, ""num_examples"": 633}, {""name"": ""validation"", ""num_bytes"": 203234215.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 489116622.0, ""num_examples"": 169}], ""download_size"": 2206857309, ""dataset_size"": 2223471603.0}, {""config_name"": ""nya_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": 
""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1242935371.0, ""num_examples"": 720}, {""name"": ""validation"", ""num_bytes"": 141588805.0, ""num_examples"": 68}, {""name"": ""test"", ""num_bytes"": 416888257.0, ""num_examples"": 169}], ""download_size"": 1794458304, ""dataset_size"": 1801412433.0}, {""config_name"": ""oci_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", 
""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1638068285.0, ""num_examples"": 756}, {""name"": ""validation"", ""num_bytes"": 196795145.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 522449568.0, ""num_examples"": 177}], ""download_size"": 2301324869, ""dataset_size"": 2357312998.0}, {""config_name"": ""ory_Orya"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": 
""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 413419271.0, ""num_examples"": 442}, {""name"": ""validation"", ""num_bytes"": 141272977.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 344554257.0, ""num_examples"": 168}], ""download_size"": 888825647, ""dataset_size"": 899246505.0}, {""config_name"": ""pan_Guru"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, 
{""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 704776871.0, ""num_examples"": 580}, {""name"": ""validation"", ""num_bytes"": 71033142.0, ""num_examples"": 56}, {""name"": ""test"", ""num_bytes"": 215863237.0, ""num_examples"": 143}], ""download_size"": 982824064, ""dataset_size"": 991673250.0}, {""config_name"": ""pbt_Arab"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": 
""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1031899644.0, ""num_examples"": 701}, {""name"": ""validation"", ""num_bytes"": 76541359.0, ""num_examples"": 55}, {""name"": ""test"", ""num_bytes"": 203202602.0, ""num_examples"": 144}], ""download_size"": 1291546195, ""dataset_size"": 1311643605.0}, {""config_name"": ""pes_Arab"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": 
""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1384720135.0, ""num_examples"": 692}, {""name"": ""validation"", ""num_bytes"": 167633472.0, ""num_examples"": 66}, {""name"": ""test"", ""num_bytes"": 425199297.0, ""num_examples"": 165}], ""download_size"": 1949938822, ""dataset_size"": 1977552904.0}, {""config_name"": ""pol_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": 
""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1062588014.0, ""num_examples"": 723}, {""name"": ""validation"", ""num_bytes"": 91888076.0, ""num_examples"": 68}, {""name"": ""test"", ""num_bytes"": 235374860.0, ""num_examples"": 165}], ""download_size"": 1365817507, ""dataset_size"": 1389850950.0}, {""config_name"": ""por_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", 
""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1193510057.0, ""num_examples"": 728}, {""name"": ""validation"", ""num_bytes"": 142827506.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 373148629.0, ""num_examples"": 177}], ""download_size"": 1691529909, ""dataset_size"": 1709486192.0}, {""config_name"": ""ron_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": 
""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1175359850.0, ""num_examples"": 734}, {""name"": ""validation"", ""num_bytes"": 120325985.0, ""num_examples"": 69}, {""name"": ""test"", ""num_bytes"": 299962993.0, ""num_examples"": 177}], ""download_size"": 1587074331, ""dataset_size"": 1595648828.0}, {""config_name"": ""rus_Cyrl"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": 
""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 937479552.0, ""num_examples"": 733}, {""name"": ""validation"", ""num_bytes"": 119059292.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 285964850.0, ""num_examples"": 173}], ""download_size"": 1320374803, ""dataset_size"": 1342503694.0}, {""config_name"": ""slk_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", 
""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 690086197.0, ""num_examples"": 628}, {""name"": ""validation"", ""num_bytes"": 120987120.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 299231287.0, ""num_examples"": 169}], ""download_size"": 1088431291, ""dataset_size"": 1110304604.0}, {""config_name"": ""slv_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": 
""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 910836873.0, ""num_examples"": 704}, {""name"": ""validation"", ""num_bytes"": 105291567.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 251449782.0, ""num_examples"": 174}], ""download_size"": 1243394087, ""dataset_size"": 1267578222.0}, {""config_name"": ""sna_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": 
""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1110133395.0, ""num_examples"": 689}, {""name"": ""test"", ""num_bytes"": 445118730.0, ""num_examples"": 176}, {""name"": ""validation"", ""num_bytes"": 170022414.0, ""num_examples"": 71}], ""download_size"": 1686849697, ""dataset_size"": 1725274539.0}, {""config_name"": ""snd_Arab"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": 
""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1291698615.0, ""num_examples"": 749}, {""name"": ""validation"", ""num_bytes"": 145074769.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 379409498.0, ""num_examples"": 177}], ""download_size"": 1814212764, ""dataset_size"": 1816182882.0}, {""config_name"": ""som_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": 
{""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1536544214.0, ""num_examples"": 746}, {""name"": ""validation"", ""num_bytes"": 166072722.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 450228170.0, ""num_examples"": 177}], ""download_size"": 2114947059, ""dataset_size"": 2152845106.0}, {""config_name"": ""spa_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": 
""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""validation"", ""num_bytes"": 148994823.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 365613179.0, ""num_examples"": 177}, {""name"": ""train"", ""num_bytes"": 827138506.0, ""num_examples"": 676}], ""download_size"": 1311850951, ""dataset_size"": 1341746508.0}, {""config_name"": ""srp_Cyrl"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": 
""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1227301685.0, ""num_examples"": 730}, {""name"": ""validation"", ""num_bytes"": 89523997.0, ""num_examples"": 63}, {""name"": ""test"", ""num_bytes"": 246805416.0, ""num_examples"": 164}], ""download_size"": 1555922233, ""dataset_size"": 1563631098.0}, {""config_name"": ""swe_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": 
""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 955198106.0, ""num_examples"": 686}, {""name"": ""validation"", ""num_bytes"": 111263173.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 265392664.0, ""num_examples"": 168}], ""download_size"": 1276165655, ""dataset_size"": 1331853943.0}, {""config_name"": ""swh_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": 
""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1575032236.0, ""num_examples"": 745}, {""name"": ""validation"", ""num_bytes"": 89978234.0, ""num_examples"": 65}, {""name"": ""test"", ""num_bytes"": 214908159.0, ""num_examples"": 154}], ""download_size"": 1871495254, ""dataset_size"": 1879918629.0}, {""config_name"": ""tam_Taml"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", 
""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1012440645.0, ""num_examples"": 693}, {""name"": ""validation"", ""num_bytes"": 143768511.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 245308874.0, ""num_examples"": 169}], ""download_size"": 1391470321, ""dataset_size"": 1401518030.0}, {""config_name"": ""tel_Telu"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", 
""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 930917723.0, ""num_examples"": 658}, {""name"": ""validation"", ""num_bytes"": 93943473.0, ""num_examples"": 66}, {""name"": ""test"", ""num_bytes"": 171925062.0, ""num_examples"": 153}], ""download_size"": 1184754231, ""dataset_size"": 1196786258.0}, {""config_name"": ""tgk_Cyrl"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 
16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1016396657.0, ""num_examples"": 680}, {""name"": ""validation"", ""num_bytes"": 104367919.0, ""num_examples"": 69}, {""name"": ""test"", ""num_bytes"": 291493556.0, ""num_examples"": 163}], ""download_size"": 1377337730, ""dataset_size"": 1412258132.0}, {""config_name"": ""tgl_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", 
""num_bytes"": 902994682.0, ""num_examples"": 604}, {""name"": ""validation"", ""num_bytes"": 219686509.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 551553192.0, ""num_examples"": 176}], ""download_size"": 1663149178, ""dataset_size"": 1674234383.0}, {""config_name"": ""tha_Thai"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 980135962.0, ""num_examples"": 
710}, {""name"": ""validation"", ""num_bytes"": 162467332.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 406974484.0, ""num_examples"": 176}], ""download_size"": 1542445400, ""dataset_size"": 1549577778.0}, {""config_name"": ""tur_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 944011748.0, ""num_examples"": 692}, {""name"": ""validation"", ""num_bytes"": 
124523218.0, ""num_examples"": 67}, {""name"": ""test"", ""num_bytes"": 297991664.0, ""num_examples"": 164}], ""download_size"": 1350130541, ""dataset_size"": 1366526630.0}, {""config_name"": ""ukr_Cyrl"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1048224078.0, ""num_examples"": 732}, {""name"": ""validation"", ""num_bytes"": 111149706.0, ""num_examples"": 67}, {""name"": 
""test"", ""num_bytes"": 259797654.0, ""num_examples"": 164}], ""download_size"": 1392703995, ""dataset_size"": 1419171438.0}, {""config_name"": ""umb_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1091341178.0, ""num_examples"": 473}, {""name"": ""validation"", ""num_bytes"": 85473293.0, ""num_examples"": 39}, {""name"": ""test"", ""num_bytes"": 270947610.0, 
""num_examples"": 108}], ""download_size"": 1437512568, ""dataset_size"": 1447762081.0}, {""config_name"": ""urd_Arab"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 826013912.0, ""num_examples"": 636}, {""name"": ""validation"", ""num_bytes"": 85561681.0, ""num_examples"": 65}, {""name"": ""test"", ""num_bytes"": 100881890.0, ""num_examples"": 120}], ""download_size"": 
994627663, ""dataset_size"": 1012457483.0}, {""config_name"": ""uzn_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1164650128.0, ""num_examples"": 734}, {""name"": ""validation"", ""num_bytes"": 129797222.0, ""num_examples"": 69}, {""name"": ""test"", ""num_bytes"": 329525580.0, ""num_examples"": 175}], ""download_size"": 1595253953, ""dataset_size"": 1623972930.0}, 
{""config_name"": ""vie_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1047994786.0, ""num_examples"": 737}, {""name"": ""validation"", ""num_bytes"": 129736494.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 350270337.0, ""num_examples"": 176}], ""download_size"": 1516592431, ""dataset_size"": 1528001617.0}, {""config_name"": ""wol_Latn"", ""features"": 
[{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 990326354.0, ""num_examples"": 643}, {""name"": ""validation"", ""num_bytes"": 78434391.0, ""num_examples"": 52}, {""name"": ""test"", ""num_bytes"": 210576385.0, ""num_examples"": 123}], ""download_size"": 1178479659, ""dataset_size"": 1279337130.0}, {""config_name"": ""xho_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, 
{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1551460398.0, ""num_examples"": 756}, {""name"": ""validation"", ""num_bytes"": 171791181.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 440037468.0, ""num_examples"": 177}], ""download_size"": 2117855982, ""dataset_size"": 2163289047.0}, {""config_name"": ""yor_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, 
{""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1150630919.0, ""num_examples"": 686}, {""name"": ""validation"", ""num_bytes"": 196405974.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 438420901.0, ""num_examples"": 172}], ""download_size"": 1783974678, ""dataset_size"": 1785457794.0}, {""config_name"": ""zho_Hans"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, 
{""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1142150085.0, ""num_examples"": 751}, {""name"": ""validation"", ""num_bytes"": 137384393.0, ""num_examples"": 71}, {""name"": ""test"", ""num_bytes"": 363570798.0, ""num_examples"": 176}], ""download_size"": 1620354318, ""dataset_size"": 1643105276.0}, {""config_name"": ""zho_Hant"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, 
{""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 854618903.0, ""num_examples"": 624}, {""name"": ""validation"", ""num_bytes"": 125089728.0, ""num_examples"": 70}, {""name"": ""test"", ""num_bytes"": 304504543.0, ""num_examples"": 172}], ""download_size"": 1280993945, ""dataset_size"": 1284213174.0}, {""config_name"": ""zsm_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, 
{""name"": ""has_image"", ""dtype"": ""int32""}, {""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1108735035.0, ""num_examples"": 713}, {""name"": ""validation"", ""num_bytes"": 101489819.0, ""num_examples"": 67}, {""name"": ""test"", ""num_bytes"": 267098586.0, ""num_examples"": 171}], ""download_size"": 1468618966, ""dataset_size"": 1477323440.0}, {""config_name"": ""zul_Latn"", ""features"": [{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int32""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""has_image"", ""dtype"": ""int32""}, 
{""name"": ""has_hyperlink"", ""dtype"": ""int32""}, {""name"": ""fleurs_id"", ""dtype"": ""int32""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""index_id"", ""dtype"": ""int64""}, {""name"": ""category"", ""dtype"": {""class_label"": {""names"": {""0"": ""science/technology"", ""1"": ""travel"", ""2"": ""politics"", ""3"": ""sports"", ""4"": ""health"", ""5"": ""entertainment"", ""6"": ""geography""}}}}, {""name"": ""text"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1645680235.0, ""num_examples"": 739}, {""name"": ""validation"", ""num_bytes"": 165746159.0, ""num_examples"": 69}, {""name"": ""test"", ""num_bytes"": 449851598.0, ""num_examples"": 175}], ""download_size"": 2219566462, ""dataset_size"": 2261277992.0}], ""configs"": [{""config_name"": ""afr_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/afr_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/afr_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/afr_Latn/test-*""}]}, {""config_name"": ""amh_Ethi"", ""data_files"": [{""split"": ""train"", ""path"": ""data/amh_Ethi/train-*""}, 
{""split"": ""validation"", ""path"": ""data/amh_Ethi/validation-*""}, {""split"": ""test"", ""path"": ""data/amh_Ethi/test-*""}]}, {""config_name"": ""arb_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""data/arb_Arab/train-*""}, {""split"": ""validation"", ""path"": ""data/arb_Arab/validation-*""}, {""split"": ""test"", ""path"": ""data/arb_Arab/test-*""}]}, {""config_name"": ""asm_Beng"", ""data_files"": [{""split"": ""train"", ""path"": ""data/asm_Beng/train-*""}, {""split"": ""validation"", ""path"": ""data/asm_Beng/validation-*""}, {""split"": ""test"", ""path"": ""data/asm_Beng/test-*""}]}, {""config_name"": ""ast_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ast_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/ast_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/ast_Latn/test-*""}]}, {""config_name"": ""azj_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/azj_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/azj_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/azj_Latn/test-*""}]}, {""config_name"": ""bel_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bel_Cyrl/train-*""}, {""split"": ""validation"", ""path"": ""data/bel_Cyrl/validation-*""}, {""split"": ""test"", ""path"": ""data/bel_Cyrl/test-*""}]}, {""config_name"": ""ben_Beng"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ben_Beng/train-*""}, {""split"": ""validation"", ""path"": ""data/ben_Beng/validation-*""}, {""split"": ""test"", ""path"": ""data/ben_Beng/test-*""}]}, {""config_name"": ""bos_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bos_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/bos_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/bos_Latn/test-*""}]}, {""config_name"": ""bul_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bul_Cyrl/train-*""}, {""split"": ""validation"", ""path"": 
""data/bul_Cyrl/validation-*""}, {""split"": ""test"", ""path"": ""data/bul_Cyrl/test-*""}]}, {""config_name"": ""cat_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/cat_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/cat_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/cat_Latn/test-*""}]}, {""config_name"": ""ceb_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ceb_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/ceb_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/ceb_Latn/test-*""}]}, {""config_name"": ""ces_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ces_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/ces_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/ces_Latn/test-*""}]}, {""config_name"": ""ckb_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ckb_Arab/train-*""}, {""split"": ""validation"", ""path"": ""data/ckb_Arab/validation-*""}, {""split"": ""test"", ""path"": ""data/ckb_Arab/test-*""}]}, {""config_name"": ""cym_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/cym_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/cym_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/cym_Latn/test-*""}]}, {""config_name"": ""dan_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/dan_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/dan_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/dan_Latn/test-*""}]}, {""config_name"": ""deu_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/deu_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/deu_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/deu_Latn/test-*""}]}, {""config_name"": ""ell_Grek"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ell_Grek/train-*""}, {""split"": ""validation"", ""path"": ""data/ell_Grek/validation-*""}, {""split"": ""test"", 
""path"": ""data/ell_Grek/test-*""}]}, {""config_name"": ""eng_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/eng_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/eng_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/eng_Latn/test-*""}]}, {""config_name"": ""est_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/est_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/est_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/est_Latn/test-*""}]}, {""config_name"": ""fin_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/fin_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/fin_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/fin_Latn/test-*""}]}, {""config_name"": ""fra_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/fra_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/fra_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/fra_Latn/test-*""}]}, {""config_name"": ""fuv_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/fuv_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/fuv_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/fuv_Latn/test-*""}]}, {""config_name"": ""gaz_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/gaz_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/gaz_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/gaz_Latn/test-*""}]}, {""config_name"": ""gle_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/gle_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/gle_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/gle_Latn/test-*""}]}, {""config_name"": ""glg_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/glg_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/glg_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/glg_Latn/test-*""}]}, {""config_name"": 
""guj_Gujr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/guj_Gujr/train-*""}, {""split"": ""validation"", ""path"": ""data/guj_Gujr/validation-*""}, {""split"": ""test"", ""path"": ""data/guj_Gujr/test-*""}]}, {""config_name"": ""hau_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/hau_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/hau_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/hau_Latn/test-*""}]}, {""config_name"": ""heb_Hebr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/heb_Hebr/train-*""}, {""split"": ""validation"", ""path"": ""data/heb_Hebr/validation-*""}, {""split"": ""test"", ""path"": ""data/heb_Hebr/test-*""}]}, {""config_name"": ""hin_Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""data/hin_Deva/train-*""}, {""split"": ""validation"", ""path"": ""data/hin_Deva/validation-*""}, {""split"": ""test"", ""path"": ""data/hin_Deva/test-*""}]}, {""config_name"": ""hrv_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/hrv_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/hrv_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/hrv_Latn/test-*""}]}, {""config_name"": ""hun_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/hun_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/hun_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/hun_Latn/test-*""}]}, {""config_name"": ""hye_Armn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/hye_Armn/train-*""}, {""split"": ""validation"", ""path"": ""data/hye_Armn/validation-*""}, {""split"": ""test"", ""path"": ""data/hye_Armn/test-*""}]}, {""config_name"": ""ibo_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ibo_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/ibo_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/ibo_Latn/test-*""}]}, {""config_name"": ""ind_Latn"", ""data_files"": [{""split"": ""train"", ""path"": 
""data/ind_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/ind_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/ind_Latn/test-*""}]}, {""config_name"": ""isl_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/isl_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/isl_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/isl_Latn/test-*""}]}, {""config_name"": ""ita_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ita_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/ita_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/ita_Latn/test-*""}]}, {""config_name"": ""jav_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/jav_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/jav_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/jav_Latn/test-*""}]}, {""config_name"": ""jpn_Jpan"", ""data_files"": [{""split"": ""train"", ""path"": ""data/jpn_Jpan/train-*""}, {""split"": ""validation"", ""path"": ""data/jpn_Jpan/validation-*""}, {""split"": ""test"", ""path"": ""data/jpn_Jpan/test-*""}]}, {""config_name"": ""kam_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kam_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/kam_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/kam_Latn/test-*""}]}, {""config_name"": ""kan_Knda"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kan_Knda/train-*""}, {""split"": ""validation"", ""path"": ""data/kan_Knda/validation-*""}, {""split"": ""test"", ""path"": ""data/kan_Knda/test-*""}]}, {""config_name"": ""kat_Geor"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kat_Geor/train-*""}, {""split"": ""validation"", ""path"": ""data/kat_Geor/validation-*""}, {""split"": ""test"", ""path"": ""data/kat_Geor/test-*""}]}, {""config_name"": ""kaz_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kaz_Cyrl/train-*""}, {""split"": ""validation"", 
""path"": ""data/kaz_Cyrl/validation-*""}, {""split"": ""test"", ""path"": ""data/kaz_Cyrl/test-*""}]}, {""config_name"": ""kea_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kea_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/kea_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/kea_Latn/test-*""}]}, {""config_name"": ""khk_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/khk_Cyrl/train-*""}, {""split"": ""validation"", ""path"": ""data/khk_Cyrl/validation-*""}, {""split"": ""test"", ""path"": ""data/khk_Cyrl/test-*""}]}, {""config_name"": ""khm_Khmr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/khm_Khmr/train-*""}, {""split"": ""validation"", ""path"": ""data/khm_Khmr/validation-*""}, {""split"": ""test"", ""path"": ""data/khm_Khmr/test-*""}]}, {""config_name"": ""kir_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kir_Cyrl/train-*""}, {""split"": ""validation"", ""path"": ""data/kir_Cyrl/validation-*""}, {""split"": ""test"", ""path"": ""data/kir_Cyrl/test-*""}]}, {""config_name"": ""kor_Hang"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kor_Hang/train-*""}, {""split"": ""validation"", ""path"": ""data/kor_Hang/validation-*""}, {""split"": ""test"", ""path"": ""data/kor_Hang/test-*""}]}, {""config_name"": ""lao_Laoo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lao_Laoo/train-*""}, {""split"": ""validation"", ""path"": ""data/lao_Laoo/validation-*""}, {""split"": ""test"", ""path"": ""data/lao_Laoo/test-*""}]}, {""config_name"": ""lin_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lin_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/lin_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/lin_Latn/test-*""}]}, {""config_name"": ""lit_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lit_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/lit_Latn/validation-*""}, {""split"": 
""test"", ""path"": ""data/lit_Latn/test-*""}]}, {""config_name"": ""ltz_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ltz_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/ltz_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/ltz_Latn/test-*""}]}, {""config_name"": ""lug_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lug_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/lug_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/lug_Latn/test-*""}]}, {""config_name"": ""luo_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/luo_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/luo_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/luo_Latn/test-*""}]}, {""config_name"": ""lvs_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lvs_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/lvs_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/lvs_Latn/test-*""}]}, {""config_name"": ""mal_Mlym"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mal_Mlym/train-*""}, {""split"": ""validation"", ""path"": ""data/mal_Mlym/validation-*""}, {""split"": ""test"", ""path"": ""data/mal_Mlym/test-*""}]}, {""config_name"": ""mar_Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mar_Deva/train-*""}, {""split"": ""validation"", ""path"": ""data/mar_Deva/validation-*""}, {""split"": ""test"", ""path"": ""data/mar_Deva/test-*""}]}, {""config_name"": ""mkd_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mkd_Cyrl/train-*""}, {""split"": ""validation"", ""path"": ""data/mkd_Cyrl/validation-*""}, {""split"": ""test"", ""path"": ""data/mkd_Cyrl/test-*""}]}, {""config_name"": ""mlt_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mlt_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/mlt_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/mlt_Latn/test-*""}]}, 
{""config_name"": ""mri_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mri_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/mri_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/mri_Latn/test-*""}]}, {""config_name"": ""mya_Mymr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mya_Mymr/train-*""}, {""split"": ""validation"", ""path"": ""data/mya_Mymr/validation-*""}, {""split"": ""test"", ""path"": ""data/mya_Mymr/test-*""}]}, {""config_name"": ""nld_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nld_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/nld_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/nld_Latn/test-*""}]}, {""config_name"": ""nob_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nob_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/nob_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/nob_Latn/test-*""}]}, {""config_name"": ""npi_Deva"", ""data_files"": [{""split"": ""train"", ""path"": ""data/npi_Deva/train-*""}, {""split"": ""validation"", ""path"": ""data/npi_Deva/validation-*""}, {""split"": ""test"", ""path"": ""data/npi_Deva/test-*""}]}, {""config_name"": ""nso_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nso_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/nso_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/nso_Latn/test-*""}]}, {""config_name"": ""nya_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nya_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/nya_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/nya_Latn/test-*""}]}, {""config_name"": ""oci_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/oci_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/oci_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/oci_Latn/test-*""}]}, {""config_name"": ""ory_Orya"", ""data_files"": [{""split"": 
""train"", ""path"": ""data/ory_Orya/train-*""}, {""split"": ""validation"", ""path"": ""data/ory_Orya/validation-*""}, {""split"": ""test"", ""path"": ""data/ory_Orya/test-*""}]}, {""config_name"": ""pan_Guru"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pan_Guru/train-*""}, {""split"": ""validation"", ""path"": ""data/pan_Guru/validation-*""}, {""split"": ""test"", ""path"": ""data/pan_Guru/test-*""}]}, {""config_name"": ""pbt_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pbt_Arab/train-*""}, {""split"": ""validation"", ""path"": ""data/pbt_Arab/validation-*""}, {""split"": ""test"", ""path"": ""data/pbt_Arab/test-*""}]}, {""config_name"": ""pes_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pes_Arab/train-*""}, {""split"": ""validation"", ""path"": ""data/pes_Arab/validation-*""}, {""split"": ""test"", ""path"": ""data/pes_Arab/test-*""}]}, {""config_name"": ""pol_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pol_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/pol_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/pol_Latn/test-*""}]}, {""config_name"": ""por_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/por_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/por_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/por_Latn/test-*""}]}, {""config_name"": ""ron_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ron_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/ron_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/ron_Latn/test-*""}]}, {""config_name"": ""rus_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/rus_Cyrl/train-*""}, {""split"": ""validation"", ""path"": ""data/rus_Cyrl/validation-*""}, {""split"": ""test"", ""path"": ""data/rus_Cyrl/test-*""}]}, {""config_name"": ""slk_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/slk_Latn/train-*""}, {""split"": 
""validation"", ""path"": ""data/slk_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/slk_Latn/test-*""}]}, {""config_name"": ""slv_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/slv_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/slv_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/slv_Latn/test-*""}]}, {""config_name"": ""sna_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sna_Latn/train-*""}, {""split"": ""test"", ""path"": ""data/sna_Latn/test-*""}, {""split"": ""validation"", ""path"": ""data/sna_Latn/validation-*""}]}, {""config_name"": ""snd_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""data/snd_Arab/train-*""}, {""split"": ""validation"", ""path"": ""data/snd_Arab/validation-*""}, {""split"": ""test"", ""path"": ""data/snd_Arab/test-*""}]}, {""config_name"": ""som_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/som_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/som_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/som_Latn/test-*""}]}, {""config_name"": ""spa_Latn"", ""data_files"": [{""split"": ""validation"", ""path"": ""data/spa_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/spa_Latn/test-*""}, {""split"": ""train"", ""path"": ""data/spa_Latn/train-*""}]}, {""config_name"": ""srp_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/srp_Cyrl/train-*""}, {""split"": ""validation"", ""path"": ""data/srp_Cyrl/validation-*""}, {""split"": ""test"", ""path"": ""data/srp_Cyrl/test-*""}]}, {""config_name"": ""swe_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/swe_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/swe_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/swe_Latn/test-*""}]}, {""config_name"": ""swh_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/swh_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/swh_Latn/validation-*""}, 
{""split"": ""test"", ""path"": ""data/swh_Latn/test-*""}]}, {""config_name"": ""tam_Taml"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tam_Taml/train-*""}, {""split"": ""validation"", ""path"": ""data/tam_Taml/validation-*""}, {""split"": ""test"", ""path"": ""data/tam_Taml/test-*""}]}, {""config_name"": ""tel_Telu"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tel_Telu/train-*""}, {""split"": ""validation"", ""path"": ""data/tel_Telu/validation-*""}, {""split"": ""test"", ""path"": ""data/tel_Telu/test-*""}]}, {""config_name"": ""tgk_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tgk_Cyrl/train-*""}, {""split"": ""validation"", ""path"": ""data/tgk_Cyrl/validation-*""}, {""split"": ""test"", ""path"": ""data/tgk_Cyrl/test-*""}]}, {""config_name"": ""tgl_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tgl_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/tgl_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/tgl_Latn/test-*""}]}, {""config_name"": ""tha_Thai"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tha_Thai/train-*""}, {""split"": ""validation"", ""path"": ""data/tha_Thai/validation-*""}, {""split"": ""test"", ""path"": ""data/tha_Thai/test-*""}]}, {""config_name"": ""tur_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tur_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/tur_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/tur_Latn/test-*""}]}, {""config_name"": ""ukr_Cyrl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ukr_Cyrl/train-*""}, {""split"": ""validation"", ""path"": ""data/ukr_Cyrl/validation-*""}, {""split"": ""test"", ""path"": ""data/ukr_Cyrl/test-*""}]}, {""config_name"": ""umb_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/umb_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/umb_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/umb_Latn/test-*""}]}, 
{""config_name"": ""urd_Arab"", ""data_files"": [{""split"": ""train"", ""path"": ""data/urd_Arab/train-*""}, {""split"": ""validation"", ""path"": ""data/urd_Arab/validation-*""}, {""split"": ""test"", ""path"": ""data/urd_Arab/test-*""}]}, {""config_name"": ""uzn_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/uzn_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/uzn_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/uzn_Latn/test-*""}]}, {""config_name"": ""vie_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/vie_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/vie_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/vie_Latn/test-*""}]}, {""config_name"": ""wol_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/wol_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/wol_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/wol_Latn/test-*""}]}, {""config_name"": ""xho_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/xho_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/xho_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/xho_Latn/test-*""}]}, {""config_name"": ""yor_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/yor_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/yor_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/yor_Latn/test-*""}]}, {""config_name"": ""zho_Hans"", ""data_files"": [{""split"": ""train"", ""path"": ""data/zho_Hans/train-*""}, {""split"": ""validation"", ""path"": ""data/zho_Hans/validation-*""}, {""split"": ""test"", ""path"": ""data/zho_Hans/test-*""}]}, {""config_name"": ""zho_Hant"", ""data_files"": [{""split"": ""train"", ""path"": ""data/zho_Hant/train-*""}, {""split"": ""validation"", ""path"": ""data/zho_Hant/validation-*""}, {""split"": ""test"", ""path"": ""data/zho_Hant/test-*""}]}, {""config_name"": ""zsm_Latn"", ""data_files"": [{""split"": 
""train"", ""path"": ""data/zsm_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/zsm_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/zsm_Latn/test-*""}]}, {""config_name"": ""zul_Latn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/zul_Latn/train-*""}, {""split"": ""validation"", ""path"": ""data/zul_Latn/validation-*""}, {""split"": ""test"", ""path"": ""data/zul_Latn/test-*""}]}]}","# SIB-Fleurs
SIB-Fleurs is a dataset for evaluating multilingual spoken language understanding (SLU). For each utterance in Fleurs, the task is to determine the topic the utterance belongs to.
The topics are:
- Science/Technology
- Travel
- Politics
- Sports
- Health
- Entertainment
- Geography
**Preliminary evaluations can be found at the bottom of this README. The preliminary results in full detail are available in `./results.csv`.**
## Dataset creation
This dataset processes and merges all available multilingual data from the Fleurs, Flores, and [SIB-200](https://huggingface.co/datasets/Davlan/sib200) datasets.
It then aligns the SIB data to the available instances in the merged Fleurs-Flores data.
The processing pipeline involves the following steps:
1. Remove all silent and noisy files from Fleurs.
2. Match the remaining Fleurs utterances to their Flores source sentences.
3. Merge SIB into available Fleurs-Flores sentences.
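As a rough sketch of step 3 (field names are illustrative, not the actual pipeline; the real scripts live in the linked repository), merging SIB topic labels into aligned Fleurs-Flores records by a shared sentence `id` might look like:

```python
# Illustrative sketch only: merge SIB topic labels into Fleurs-Flores
# records via a shared sentence id. Field names are hypothetical.
def merge_sib(fleurs_flores: list[dict], sib: list[dict]) -> list[dict]:
    labels = {row['id']: row['category'] for row in sib}
    merged = []
    for rec in fleurs_flores:
        if rec['id'] in labels:  # keep only sentences that SIB covers
            merged.append({**rec, 'category': labels[rec['id']]})
    return merged

fleurs_flores = [
    {'id': 596, 'sentence': 'As knowledge of Greek declined, ...', 'filename': ['a.wav', 'b.wav']},
    {'id': 597, 'sentence': 'Another sentence.', 'filename': ['c.wav']},
]
sib = [{'id': 596, 'category': 0}]
print(merge_sib(fleurs_flores, sib))
```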
**This dataset retains the training, validation, and test splits of Fleurs and not SIB.**
Full details and scripts to compile this dataset are available at: [https://github.com/fdschmidt93/fleurs-slu](https://github.com/fdschmidt93/fleurs-slu)
## Usage
### Example
Each sentence in Flores is covered by ~2.3 Fleurs utterances on average. That is why each instance bundles the aligned Fleurs data as parallel lists (`list[str]`, `list[audio]`, ...). We track all available metadata (gender, speaker ID) and further provide the ASR transcripts, ASR translations, CER, and WER for [SeamlessM4Tv2-Large](https://huggingface.co/facebook/seamless-m4t-v2-large) and [WhisperV3-Large](https://huggingface.co/openai/whisper-large-v3).
```python
from datasets import load_dataset
eng_Latn = load_dataset(""wuenlp/sib-fleurs"", ""eng_Latn"", split=""test"")
eng_Latn[0]
# {
# 'sentence': 'As knowledge of Greek declined, the West found itself cut off from its Greek philosophical and scientific roots.',
# 'URL': 'https://en.wikibooks.org/wiki/Animal_Behavior/History',
# 'id': 596,
# 'domain': 'wikibooks',
# 'topic': 'Science/Animal Behavior',
# 'has_image': 0,
# 'has_hyperlink': 0,
# 'fleurs_id': 1895,
# 'filename': ['5358875111503056320.wav', '11200231708585274851.wav'],
# 'raw_transcription': 'As knowledge of Greek declined, the West found itself cut off from its Greek philosophical and scientific roots.',
# 'transcription': 'as knowledge of greek declined the west found itself cut off from its greek philosophical and scientific roots',
# 'num_samples': [120960, 162880],
# 'speaker_id': [5, 1],
# 'gender': ['FEMALE', 'MALE'],
# 'whisper_asr': ['As knowledge of Greek declined, the West found itself cut off from its Greek philosophical and scientific roots.',
# 'As knowledge of Greek declined, the West found itself cut off from its Greek philosophical and scientific roots.'],
# 'whisper_asr_cer': [0.0, 0.0],
# 'whisper_asr_wer': [0.0, 0.0],
# 'whisper_asr_translation': ['As knowledge of Greek declined, the West found itself cut off from its Greek philosophical and scientific roots.',
# 'As knowledge of Greek declined, the West found itself cut off from its Greek philosophical and scientific roots.'],
# 'seamlessm4t_asr': ['As knowledge of Greek declined, the West found itself cut off from its Greek philosophical and scientific roots.',
# 'As knowledge of Greek declined, the West found itself cut off from its Greek philosophical and scientific roots.'],
# 'seamlessm4t_asr_cer': [0.0, 0.0],
# 'seamlessm4t_asr_wer': [0.0, 0.0],
# 'seamlessm4t_asr_translation': ['As knowledge of Greek declined, the West found itself cut off from its Greek philosophical and scientific roots.',
# 'As knowledge of Greek declined, the West found itself cut off from its Greek philosophical and scientific roots.'],
# 'index_id': 1592,
# 'category': 0,
# 'text': 'As knowledge of Greek declined, the West found itself cut off from its Greek philosophical and scientific roots.',
# 'audio': [{'path': '5358875111503056320.wav', 'array': array([0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ..., 5.72204590e-06, 7.56978989e-06, 5.42402267e-06]), 'sampling_rate': 16000},
# {'path': '11200231708585274851.wav', 'array': array([0. , 0. , 0. , ..., 0.00011402, 0.00011003, 0.00012642]), 'sampling_rate': 16000}]
# }
```
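Because the utterance-level fields are parallel lists, iterating over the individual recordings of one instance is a matter of zipping them. A minimal sketch over a toy instance shaped like the example above (audio arrays omitted):

```python
# Toy instance shaped like the example above; values are illustrative.
instance = {
    'sentence': 'As knowledge of Greek declined, ...',
    'filename': ['5358875111503056320.wav', '11200231708585274851.wav'],
    'speaker_id': [5, 1],
    'gender': ['FEMALE', 'MALE'],
    'whisper_asr_cer': [0.0, 0.02],
}

# One row per utterance of the sentence.
utterances = [
    {'filename': f, 'speaker_id': s, 'gender': g, 'cer': c}
    for f, s, g, c in zip(
        instance['filename'],
        instance['speaker_id'],
        instance['gender'],
        instance['whisper_asr_cer'],
    )
]
for utt in utterances:
    print(utt['filename'], utt['gender'], utt['cer'])
```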
### Preprocessing
Below is an example of how to use the provided functions for selecting utterances from the SIB-Fleurs dataset according to different criteria (e.g., minimizing or maximizing CER, or random selection). Adjust the selection strategy (`strategy`) as needed.
The mapping unpacks the columns below according to the selection criterion provided by `strategy`:
- 'filename'
- 'speaker_id'
- 'gender'
- 'whisper_asr'
- 'whisper_asr_cer'
- 'whisper_asr_wer'
- 'whisper_asr_translation'
- 'seamlessm4t_asr'
- 'seamlessm4t_asr_cer'
- 'seamlessm4t_asr_wer'
- 'seamlessm4t_asr_translation'
**Note:** The selection logic takes into account which models are supported for a given language. If a language is unsupported by one of the models, the function automatically adjusts to only consider CERs from the supported models.
#### Selection strategy
You can choose how utterances are selected:
- `strategy=""best""`: Selects utterances with the minimal Character Error Rate (CER).
- `strategy=""worst""`: Selects utterances with the maximal CER.
- `strategy=""random""`: Selects utterances at random.
```python
import random
from typing import Any, Callable

from datasets import load_dataset


def collect_cer(
    examples: dict[str, list[list[float]]], models: list[str]
) -> list[list[float]]:
    """"""
    Calculate the average CER (Character Error Rate) for each index of each example across specified models.
    Args:
        examples (dict[str, list[list[float]]]): Dictionary containing CER lists for different models.
        models (list[str]): List of models to include in the calculation.
    Returns:
        list[list[float]]: A list where each sublist contains the average CERs for each index of an example.
    Raises:
        ValueError: If models have inconsistent numbers of examples or mismatched CER lengths.
    """"""
    model_cer_lists = [examples[model] for model in models if model in examples]
    if not model_cer_lists or not all(
        len(cer_list) == len(model_cer_lists[0]) for cer_list in model_cer_lists
    ):
        raise ValueError(""All models must have the same number of examples."")
    averaged_cer = []
    for example_group in zip(*model_cer_lists):
        if not all(
            len(cer_list) == len(example_group[0]) for cer_list in example_group
        ):
            raise ValueError(""All CER lists for an example must have the same length."")
        averaged_cer.append(
            [sum(values) / len(values) for values in zip(*example_group)]
        )
    return averaged_cer


def select_audio_mapper(
    language: str,
    strategy: str = ""best"",
) -> Callable[[dict[str, list[Any]]], dict[str, list[Any]]]:
    """"""
    Create a mapping function for selecting audio data based on CER.
    Args:
        language (str): Language code for filtering unsupported models.
        strategy (str, optional): Selection strategy ('best', 'worst', or 'random'). Defaults to 'best'.
    Returns:
        Callable[[dict[str, list[Any]]], dict[str, list[Any]]]: A function for mapping dataset examples.
    Raises:
        ValueError: If an invalid selection strategy is provided.
    """"""
    # Utterance-level columns that are reduced to a single selected entry.
    keys = {
        ""audio"",
        ""filename"",
        ""gender"",
        ""num_samples"",
        ""seamlessm4t_asr"",
        ""seamlessm4t_asr_cer"",
        ""seamlessm4t_asr_translation"",
        ""seamlessm4t_asr_wer"",
        ""speaker_id"",
        ""split"",
        ""whisper_asr"",
        ""whisper_asr_cer"",
        ""whisper_asr_translation"",
        ""whisper_asr_wer"",
    }
    # Languages unsupported by each model
    seamless_unsupported = {
        ""ast_Latn"",
        ""hau_Latn"",
        ""kam_Latn"",
        ""kea_Latn"",
        ""lin_Latn"",
        ""mri_Latn"",
        ""nso_Latn"",
        ""oci_Latn"",
        ""tgl_Latn"",
        ""umb_Latn"",
        ""wol_Latn"",
        ""xho_Latn"",
    }
    whisper_unsupported = {
        ""ast_Latn"",
        ""ceb_Latn"",
        ""ckb_Arab"",
        ""fuv_Latn"",
        ""gle_Latn"",
        ""ibo_Latn"",
        ""kam_Latn"",
        ""kea_Latn"",
        ""kir_Cyrl"",
        ""lug_Latn"",
        ""luo_Latn"",
        ""nso_Latn"",
        ""tgl_Latn"",
        ""umb_Latn"",
        ""wol_Latn"",
        ""xho_Latn"",
        ""zul_Latn"",
    }
    # Define selection strategy
    if strategy == ""best"":
        select_func = lambda scores: min(range(len(scores)), key=lambda i: scores[i])
    elif strategy == ""worst"":
        select_func = lambda scores: max(range(len(scores)), key=lambda i: scores[i])
    elif strategy == ""random"":
        select_func = lambda scores: random.randint(0, len(scores) - 1)
    else:
        raise ValueError(""Invalid 'strategy'. Must be one of 'best', 'worst', or 'random'."")
    # Only average CERs from models that support the given language.
    models = []
    if language not in whisper_unsupported:
        models.append(""whisper_asr_cer"")
    if language not in seamless_unsupported:
        models.append(""seamlessm4t_asr_cer"")
    if not models:
        # Languages unsupported by both models fall back to SeamlessM4T CERs.
        models = [""seamlessm4t_asr_cer""]

    def map_fn(examples: dict[str, list[Any]]) -> dict[str, list[Any]]:
        """"""
        Map function to process dataset examples by selecting CER-based audio data.
        Args:
            examples (dict[str, list[Any]]): Dataset examples.
        Returns:
            dict[str, list[Any]]: Processed dataset examples.
        """"""
        cers = collect_cer(examples, models)
        indices = [select_func(cer) for cer in cers]
        for key, values in examples.items():
            if key in keys:
                examples[key] = [vals[idx] for idx, vals in zip(indices, values)]
        return examples

    return map_fn


eng_Latn = load_dataset(""wuenlp/sib-fleurs"", ""eng_Latn"", split=""test"")
mapper = select_audio_mapper(""eng_Latn"", strategy=""best"")
dataset = eng_Latn.map(mapper, batched=True, batch_size=50)
```
## Preliminary results
We evaluate both speech encoders (fine-tuned directly on audio) and language models in a cascaded pipeline. In the cascaded setup, we first run automatic speech recognition (ASR) with WhisperV3-Large and SeamlessM4Tv2-Large, then classify the transcribed (or translated) text with a language model (currently `roberta-large`).
We select the best checkpoint by maximizing performance on the English validation set and then evaluate zero-shot cross-lingual transfer across all available languages. The `avg` column reports the average accuracy across all languages. For detailed per-language results, please refer to `./results.csv`.
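The cascaded setup reduces to ASR followed by text classification. A schematic sketch with stand-in components (the `transcribe` and `classify` functions below are placeholders, not the actual WhisperV3/SeamlessM4T or roberta-large calls):

```python
# Schematic cascade: speech -> transcript -> topic label.
# Both components below are hypothetical stand-ins for the real models.
TOPICS = ['science/technology', 'travel', 'politics', 'sports',
          'health', 'entertainment', 'geography']

def transcribe(audio: list[float]) -> str:
    # Placeholder for an ASR model (WhisperV3-Large or SeamlessM4Tv2-Large).
    return 'the match ended three to one after extra time'

def classify(text: str) -> str:
    # Placeholder for a fine-tuned text classifier (e.g., roberta-large).
    return 'sports' if 'match' in text else 'science/technology'

def cascaded_topic(audio: list[float]) -> str:
    return classify(transcribe(audio))

print(cascaded_topic([0.0, 0.1]))  # prints 'sports'
```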
| Model | Input | ASR Quality | Seed | LR | Batch Size | eng_Latn | avg |
|:-------------------------------------|:--------------------------------|:--------------|-------:|------:|-------------:|:-----------|:------|
| roberta-large | SeamlessM4Tv2 ASR Translation | best | 42 | 3e-5 | 32 | 92.7% | 81.5% |
| roberta-large | SeamlessM4Tv2 ASR Translation | best | 43 | 3e-5 | 32 | 91.0% | 80.4% |
| roberta-large | SeamlessM4Tv2 ASR Translation | worst | 44 | 2e-5 | 32 | 89.8% | 79.8% |
| roberta-large | SeamlessM4Tv2 ASR Translation | best | 43 | 2e-5 | 32 | 87.6% | 79.3% |
| roberta-large | SeamlessM4Tv2 ASR Translation | best | 42 | 2e-5 | 32 | 89.3% | 79.0% |
| roberta-large | SeamlessM4Tv2 ASR Translation | worst | 43 | 3e-5 | 32 | 89.8% | 78.5% |
| roberta-large | SeamlessM4Tv2 ASR Translation | best | 44 | 3e-5 | 32 | 88.1% | 78.5% |
| roberta-large | SeamlessM4Tv2 ASR Translation | worst | 42 | 2e-5 | 32 | 89.3% | 78.4% |
| roberta-large | SeamlessM4Tv2 ASR Translation | best | 44 | 2e-5 | 32 | 87.6% | 78.2% |
| roberta-large | SeamlessM4Tv2 ASR Translation | worst | 43 | 2e-5 | 32 | 85.3% | 77.9% |
| roberta-large | SeamlessM4Tv2 ASR Translation | worst | 44 | 3e-5 | 32 | 88.1% | 77.5% |
| roberta-large | SeamlessM4Tv2 ASR Translation | worst | 42 | 3e-5 | 32 | 87.6% | 76.3% |
| seamless-m4t-v2-large-speech-encoder | Speech | best | 42 | 3e-5 | 32 | 85.9% | 70.0% |
| roberta-large | WhisperV3-Large ASR Translation | best | 43 | 3e-5 | 32 | 90.4% | 69.1% |
| roberta-large | WhisperV3-Large ASR Translation | best | 42 | 3e-5 | 32 | 91.5% | 68.9% |
| roberta-large | WhisperV3-Large ASR Translation | best | 43 | 2e-5 | 32 | 88.7% | 68.5% |
| roberta-large | WhisperV3-Large ASR Translation | worst | 43 | 2e-5 | 32 | 91.0% | 68.1% |
| seamless-m4t-v2-large-speech-encoder | Speech | worst | 42 | 3e-5 | 32 | 85.9% | 67.8% |
| roberta-large | WhisperV3-Large ASR Translation | worst | 43 | 3e-5 | 32 | 90.4% | 67.6% |
| roberta-large | WhisperV3-Large ASR Translation | best | 44 | 3e-5 | 32 | 89.3% | 67.3% |
| roberta-large | WhisperV3-Large ASR Translation | best | 44 | 2e-5 | 32 | 86.4% | 67.1% |
| roberta-large | WhisperV3-Large ASR Translation | worst | 44 | 2e-5 | 32 | 90.4% | 66.8% |
| roberta-large | WhisperV3-Large ASR Translation | worst | 44 | 3e-5 | 32 | 89.3% | 66.8% |
| roberta-large | WhisperV3-Large ASR Translation | best | 42 | 2e-5 | 32 | 87.6% | 66.8% |
| roberta-large | WhisperV3-Large ASR Translation | worst | 42 | 3e-5 | 32 | 89.8% | 66.0% |
| roberta-large | WhisperV3-Large ASR Translation | worst | 42 | 2e-5 | 32 | 89.3% | 65.9% |
| roberta-large | SeamlessM4Tv2 ASR Translation | best | 42 | 1e-5 | 32 | 67.8% | 66.0% |
| roberta-large | SeamlessM4Tv2 ASR Translation | best | 43 | 1e-5 | 32 | 66.7% | 64.7% |
| roberta-large | SeamlessM4Tv2 ASR Translation | worst | 42 | 1e-5 | 32 | 65.0% | 64.5% |
| roberta-large | SeamlessM4Tv2 ASR Translation | best | 44 | 1e-5 | 32 | 66.7% | 64.0% |
| roberta-large | SeamlessM4Tv2 ASR Translation | worst | 43 | 1e-5 | 32 | 66.1% | 63.7% |
| roberta-large | WhisperV3-Large ASR Translation | best | 42 | 1e-5 | 32 | 80.2% | 62.6% |
| roberta-large | SeamlessM4Tv2 ASR Translation | worst | 44 | 1e-5 | 32 | 63.8% | 61.7% |
| roberta-large | WhisperV3-Large ASR Translation | best | 44 | 1e-5 | 32 | 76.3% | 60.8% |
| roberta-large | WhisperV3-Large ASR Translation | worst | 43 | 1e-5 | 32 | 78.0% | 60.7% |
| roberta-large | WhisperV3-Large ASR Translation | worst | 42 | 1e-5 | 32 | 76.3% | 59.5% |
| roberta-large | WhisperV3-Large ASR Translation | worst | 44 | 1e-5 | 32 | 74.0% | 58.2% |
| seamless-m4t-v2-large-speech-encoder | Speech | worst | 43 | 3e-5 | 32 | 83.1% | 57.4% |
| seamless-m4t-v2-large-speech-encoder | Speech | best | 43 | 3e-5 | 32 | 81.9% | 56.2% |
| seamless-m4t-v2-large-speech-encoder | Speech | best | 44 | 3e-5 | 32 | 83.6% | 55.6% |
| seamless-m4t-v2-large-speech-encoder | Speech | worst | 44 | 3e-5 | 32 | 81.4% | 55.5% |
| seamless-m4t-v2-large-speech-encoder | Speech | best | 42 | 2e-5 | 32 | 74.6% | 50.8% |
| whisper-large-v3-turbo | Speech | worst | 42 | 2e-5 | 32 | 81.4% | 50.4% |
| whisper-large-v3-turbo | Speech | best | 42 | 1e-5 | 32 | 80.2% | 48.7% |
| whisper-large-v3-turbo | Speech | worst | 42 | 1e-5 | 32 | 79.7% | 47.4% |
| whisper-large-v3-turbo | Speech | best | 44 | 2e-5 | 32 | 83.6% | 46.9% |
| whisper-large-v3-turbo | Speech | best | 42 | 2e-5 | 32 | 77.4% | 45.8% |
| whisper-large-v3-turbo | Speech | best | 43 | 1e-5 | 32 | 75.7% | 45.3% |
| seamless-m4t-v2-large-speech-encoder | Speech | best | 44 | 2e-5 | 32 | 78.5% | 44.0% |
| seamless-m4t-v2-large-speech-encoder | Speech | worst | 42 | 2e-5 | 32 | 66.1% | 43.5% |
| seamless-m4t-v2-large-speech-encoder | Speech | worst | 44 | 2e-5 | 32 | 74.0% | 43.1% |
| whisper-large-v3-turbo | Speech | worst | 42 | 3e-5 | 32 | 76.8% | 42.4% |
| seamless-m4t-v2-large-speech-encoder | Speech | best | 43 | 2e-5 | 32 | 76.3% | 41.9% |
| whisper-large-v3-turbo | Speech | worst | 43 | 3e-5 | 32 | 78.0% | 41.8% |
| whisper-large-v3-turbo | Speech | best | 43 | 2e-5 | 32 | 74.0% | 41.2% |
| seamless-m4t-v2-large-speech-encoder | Speech | worst | 43 | 2e-5 | 32 | 76.3% | 41.0% |
| whisper-large-v3-turbo | Speech | best | 42 | 3e-5 | 32 | 76.3% | 40.6% |
| whisper-large-v3-turbo | Speech | best | 43 | 3e-5 | 32 | 78.5% | 39.3% |
| whisper-large-v3-turbo | Speech | worst | 44 | 2e-5 | 32 | 80.8% | 39.3% |
| whisper-large-v3-turbo | Speech | worst | 43 | 2e-5 | 32 | 76.3% | 39.2% |
| whisper-large-v3-turbo | Speech | worst | 44 | 1e-5 | 32 | 75.7% | 38.8% |
| whisper-large-v3-turbo | Speech | best | 44 | 3e-5 | 32 | 76.8% | 37.1% |
| whisper-large-v3-turbo | Speech | worst | 44 | 3e-5 | 32 | 75.1% | 37.0% |
| whisper-large-v3-turbo | Speech | worst | 43 | 1e-5 | 32 | 73.4% | 35.8% |
| whisper-large-v3-turbo | Speech | best | 44 | 1e-5 | 32 | 76.8% | 34.5% |
| seamless-m4t-v2-large-speech-encoder | Speech | worst | 42 | 1e-5 | 32 | 33.9% | 26.5% |
| seamless-m4t-v2-large-speech-encoder | Speech | best | 42 | 1e-5 | 32 | 28.8% | 24.7% |
| seamless-m4t-v2-large-speech-encoder | Speech | best | 43 | 1e-5 | 32 | 18.6% | 18.4% |
| seamless-m4t-v2-large-speech-encoder | Speech | worst | 43 | 1e-5 | 32 | 18.6% | 18.1% |
| seamless-m4t-v2-large-speech-encoder | Speech | worst | 44 | 1e-5 | 32 | 16.9% | 13.0% |
| seamless-m4t-v2-large-speech-encoder | Speech | best | 44 | 1e-5 | 32 | 18.6% | 12.7% |
## Statistics
The table below lists the number of available examples per split and language. The original SIB splits have been realigned to match the Fleurs splits.
| Language | Train | Validation | Test |
|:---------|--------:|-------------:|-------:|
| `afr_Latn` | 406 | 86 | 95 |
| `amh_Ethi` | 752 | 54 | 149 |
| `arb_Arab` | 579 | 64 | 133 |
| `asm_Beng` | 730 | 71 | 176 |
| `ast_Latn` | 701 | 69 | 177 |
| `azj_Latn` | 712 | 71 | 174 |
| `bel_Cyrl` | 690 | 71 | 177 |
| `bul_Cyrl` | 749 | 70 | 176 |
| `ben_Beng` | 742 | 71 | 176 |
| `bos_Latn` | 746 | 71 | 177 |
| `cat_Latn` | 683 | 71 | 177 |
| `ceb_Latn` | 741 | 61 | 149 |
| `ckb_Arab` | 738 | 70 | 176 |
| `zho_Hans` | 751 | 71 | 176 |
| `ces_Latn` | 732 | 68 | 172 |
| `cym_Latn` | 739 | 71 | 177 |
| `dan_Latn` | 696 | 70 | 177 |
| `deu_Latn` | 736 | 69 | 175 |
| `ell_Grek` | 750 | 67 | 168 |
| `eng_Latn` | 738 | 71 | 177 |
| `spa_Latn` | 676 | 71 | 177 |
| `est_Latn` | 700 | 71 | 176 |
| `pes_Arab` | 692 | 66 | 165 |
| `fin_Latn` | 735 | 71 | 175 |
| `tgl_Latn` | 604 | 71 | 176 |
| `fra_Latn` | 753 | 65 | 164 |
| `gle_Latn` | 731 | 71 | 176 |
| `glg_Latn` | 660 | 71 | 174 |
| `guj_Gujr` | 752 | 71 | 177 |
| `hau_Latn` | 753 | 70 | 166 |
| `heb_Hebr` | 754 | 70 | 175 |
| `hin_Deva` | 653 | 60 | 132 |
| `hrv_Latn` | 756 | 71 | 176 |
| `hun_Latn` | 750 | 71 | 177 |
| `hye_Armn` | 741 | 71 | 177 |
| `ind_Latn` | 728 | 69 | 167 |
| `ibo_Latn` | 737 | 71 | 177 |
| `isl_Latn` | 381 | 18 | 23 |
| `ita_Latn` | 743 | 69 | 175 |
| `jpn_Jpan` | 662 | 62 | 164 |
| `jav_Latn` | 740 | 67 | 171 |
| `kat_Geor` | 557 | 69 | 177 |
| `kam_Latn` | 752 | 69 | 179 |
| `kea_Latn` | 725 | 71 | 175 |
| `kaz_Cyrl` | 749 | 70 | 176 |
| `khm_Khmr` | 588 | 69 | 168 |
| `kan_Knda` | 660 | 70 | 174 |
| `kor_Hang` | 669 | 61 | 141 |
| `kir_Cyrl` | 729 | 71 | 177 |
| `ltz_Latn` | 703 | 71 | 176 |
| `lug_Latn` | 691 | 70 | 173 |
| `lin_Latn` | 755 | 59 | 139 |
| `lao_Laoo` | 591 | 54 | 132 |
| `lit_Latn` | 730 | 71 | 178 |
| `luo_Latn` | 698 | 39 | 98 |
| `lvs_Latn` | 634 | 69 | 174 |
| `mri_Latn` | 749 | 71 | 176 |
| `mkd_Cyrl` | 680 | 71 | 177 |
| `mal_Mlym` | 723 | 68 | 174 |
| `khk_Cyrl` | 743 | 71 | 177 |
| `mar_Deva` | 749 | 71 | 177 |
| `zsm_Latn` | 713 | 67 | 171 |
| `mlt_Latn` | 731 | 71 | 176 |
| `mya_Mymr` | 746 | 71 | 175 |
| `nob_Latn` | 723 | 51 | 127 |
| `npi_Deva` | 754 | 70 | 175 |
| `nld_Latn` | 729 | 58 | 123 |
| `nso_Latn` | 633 | 70 | 169 |
| `nya_Latn` | 720 | 68 | 169 |
| `oci_Latn` | 756 | 71 | 177 |
| `ory_Orya` | 442 | 71 | 168 |
| `pan_Guru` | 580 | 56 | 143 |
| `pol_Latn` | 723 | 68 | 165 |
| `pbt_Arab` | 701 | 55 | 144 |
| `por_Latn` | 728 | 70 | 177 |
| `ron_Latn` | 734 | 69 | 177 |
| `rus_Cyrl` | 733 | 71 | 173 |
| `snd_Arab` | 749 | 71 | 177 |
| `slk_Latn` | 628 | 71 | 169 |
| `slv_Latn` | 704 | 71 | 174 |
| `sna_Latn` | 689 | 71 | 176 |
| `som_Latn` | 746 | 70 | 177 |
| `srp_Cyrl` | 730 | 63 | 164 |
| `swe_Latn` | 686 | 71 | 168 |
| `swh_Latn` | 745 | 65 | 154 |
| `tam_Taml` | 693 | 71 | 169 |
| `tel_Telu` | 658 | 66 | 153 |
| `tgk_Cyrl` | 680 | 69 | 163 |
| `tha_Thai` | 710 | 71 | 176 |
| `tur_Latn` | 692 | 67 | 164 |
| `ukr_Cyrl` | 732 | 67 | 164 |
| `umb_Latn` | 473 | 39 | 108 |
| `urd_Arab` | 636 | 65 | 120 |
| `uzn_Latn` | 734 | 69 | 175 |
| `vie_Latn` | 737 | 70 | 176 |
| `wol_Latn` | 643 | 52 | 123 |
| `xho_Latn` | 756 | 71 | 177 |
| `yor_Latn` | 686 | 71 | 172 |
| `zho_Hant` | 624 | 70 | 172 |
| `zul_Latn` | 739 | 69 | 175 |
| `fuv_Latn` | 752 | 68 | 166 |
| `gaz_Latn` | 574 | 6 | 17 |
## Citations
If you use this dataset, please cite the following paper. Our own paper is forthcoming and will be added as soon as possible.
```
@misc{adelani2023sib200,
title={SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects},
author={David Ifeoluwa Adelani and Hannah Liu and Xiaoyu Shen and Nikita Vassilyev and Jesujoba O. Alabi and Yanke Mao and Haonan Gao and Annie En-Shiun Lee},
year={2023},
eprint={2309.07445},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```"
ontocord/CulturaY,"{""configs"": [{""config_name"": ""af"", ""data_files"": ""af/*.jsonl.zst""}, {""config_name"": ""ar"", ""data_files"": ""ar/*.jsonl.zst""}, {""config_name"": ""az"", ""data_files"": ""az/*.jsonl.zst""}, {""config_name"": ""be"", ""data_files"": ""be/*.jsonl.zst""}, {""config_name"": ""bg"", ""data_files"": ""bg/*.jsonl.zst""}, {""config_name"": ""bn"", ""data_files"": ""bn/*.jsonl.zst""}, {""config_name"": ""ca"", ""data_files"": ""ca/*.jsonl.zst""}, {""config_name"": ""cs"", ""data_files"": ""cs/*.jsonl.zst""}, {""config_name"": ""cy"", ""data_files"": ""cy/*.jsonl.zst""}, {""config_name"": ""da"", ""data_files"": ""da/*.jsonl.zst""}, {""config_name"": ""de"", ""data_files"": ""de/*.jsonl.zst""}, {""config_name"": ""el"", ""data_files"": ""el/*.jsonl.zst""}, {""config_name"": ""en"", ""data_files"": ""en/*.jsonl.zst""}, {""config_name"": ""eo"", ""data_files"": ""eo/*.jsonl.zst""}, {""config_name"": ""es"", ""data_files"": ""es/*.jsonl.zst""}, {""config_name"": ""et"", ""data_files"": ""et/*.jsonl.zst""}, {""config_name"": ""eu"", ""data_files"": ""eu/*.jsonl.zst""}, {""config_name"": ""fa"", ""data_files"": ""fa/*.jsonl.zst""}, {""config_name"": ""fi"", ""data_files"": ""fi/*.jsonl.zst""}, {""config_name"": ""fr"", ""data_files"": ""fr/*.jsonl.zst""}, {""config_name"": ""ga"", ""data_files"": ""ga/*.jsonl.zst""}, {""config_name"": ""gl"", ""data_files"": ""gl/*.jsonl.zst""}, {""config_name"": ""gu"", ""data_files"": ""gu/*.jsonl.zst""}, {""config_name"": ""hbs"", ""data_files"": ""hbs/*.jsonl.zst""}, {""config_name"": ""he"", ""data_files"": ""he/*.jsonl.zst""}, {""config_name"": ""hi"", ""data_files"": ""hi/*.jsonl.zst""}, {""config_name"": ""hu"", ""data_files"": ""hu/*.jsonl.zst""}, {""config_name"": ""hy"", ""data_files"": ""hy/*.jsonl.zst""}, {""config_name"": ""id"", ""data_files"": ""id/*.jsonl.zst""}, {""config_name"": ""is"", ""data_files"": ""is/*.jsonl.zst""}, {""config_name"": ""it"", ""data_files"": ""it/*.jsonl.zst""}, 
{""config_name"": ""ja"", ""data_files"": ""ja/*.jsonl.zst""}, {""config_name"": ""ka"", ""data_files"": ""ka/*.jsonl.zst""}, {""config_name"": ""kk"", ""data_files"": ""kk/*.jsonl.zst""}, {""config_name"": ""kn"", ""data_files"": ""kn/*.jsonl.zst""}, {""config_name"": ""ko"", ""data_files"": ""ko/*.jsonl.zst""}, {""config_name"": ""ky"", ""data_files"": ""ky/*.jsonl.zst""}, {""config_name"": ""la"", ""data_files"": ""la/*.jsonl.zst""}, {""config_name"": ""lt"", ""data_files"": ""lt/*.jsonl.zst""}, {""config_name"": ""lv"", ""data_files"": ""lv/*.jsonl.zst""}, {""config_name"": ""mk"", ""data_files"": ""mk/*.jsonl.zst""}, {""config_name"": ""ml"", ""data_files"": ""ml/*.jsonl.zst""}, {""config_name"": ""mn"", ""data_files"": ""mn/*.jsonl.zst""}, {""config_name"": ""mr"", ""data_files"": ""mr/*.jsonl.zst""}, {""config_name"": ""ms"", ""data_files"": ""ms/*.jsonl.zst""}, {""config_name"": ""mt"", ""data_files"": ""mt/*.jsonl.zst""}, {""config_name"": ""my"", ""data_files"": ""my/*.jsonl.zst""}, {""config_name"": ""nb"", ""data_files"": ""nb/*.jsonl.zst""}, {""config_name"": ""ne"", ""data_files"": ""ne/*.jsonl.zst""}, {""config_name"": ""nl"", ""data_files"": ""nl/*.jsonl.zst""}, {""config_name"": ""nn"", ""data_files"": ""nn/*.jsonl.zst""}, {""config_name"": ""pa"", ""data_files"": ""pa/*.jsonl.zst""}, {""config_name"": ""pl"", ""data_files"": ""pl/*.jsonl.zst""}, {""config_name"": ""ps"", ""data_files"": ""ps/*.jsonl.zst""}, {""config_name"": ""pt"", ""data_files"": ""pt/*.jsonl.zst""}, {""config_name"": ""ro"", ""data_files"": ""ro/*.jsonl.zst""}, {""config_name"": ""ru"", ""data_files"": ""ru/*.jsonl.zst""}, {""config_name"": ""si"", ""data_files"": ""si/*.jsonl.zst""}, {""config_name"": ""sk"", ""data_files"": ""sk/*.jsonl.zst""}, {""config_name"": ""sl"", ""data_files"": ""sl/*.jsonl.zst""}, {""config_name"": ""so"", ""data_files"": ""so/*.jsonl.zst""}, {""config_name"": ""sq"", ""data_files"": ""sq/*.jsonl.zst""}, {""config_name"": ""sv"", ""data_files"": 
""sv/*.jsonl.zst""}, {""config_name"": ""sw"", ""data_files"": ""sw/*.jsonl.zst""}, {""config_name"": ""ta"", ""data_files"": ""ta/*.jsonl.zst""}, {""config_name"": ""te"", ""data_files"": ""te/*.jsonl.zst""}, {""config_name"": ""th"", ""data_files"": ""th/*.jsonl.zst""}, {""config_name"": ""tl"", ""data_files"": ""tl/*.jsonl.zst""}, {""config_name"": ""tr"", ""data_files"": ""tr/*.jsonl.zst""}, {""config_name"": ""tt"", ""data_files"": ""tt/*.jsonl.zst""}, {""config_name"": ""uk"", ""data_files"": ""uk/*.jsonl.zst""}, {""config_name"": ""ur"", ""data_files"": ""ur/*.jsonl.zst""}, {""config_name"": ""uz"", ""data_files"": ""uz/*.jsonl.zst""}, {""config_name"": ""vi"", ""data_files"": ""vi/*.jsonl.zst""}, {""config_name"": ""zh"", ""data_files"": ""zh/*.jsonl.zst""}], ""pretty_name"": ""CulturaY"", ""annotations_creators"": [""no-annotation""], ""language_creators"": [""found""], ""language"": [""af"", ""ar"", ""az"", ""be"", ""bg"", ""bn"", ""ca"", ""cs"", ""cy"", ""da"", ""de"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fr"", ""ga"", ""gl"", ""gu"", ""hbs"", ""he"", ""hi"", ""hu"", ""hy"", ""id"", ""is"", ""it"", ""ja"", ""ka"", ""kk"", ""kn"", ""ko"", ""ky"", ""la"", ""lt"", ""lv"", ""mk"", ""ml"", ""mn"", ""mr"", ""ms"", ""mt"", ""my"", ""nb"", ""ne"", ""nl"", ""nn"", ""pa"", ""pl"", ""ps"", ""pt"", ""ro"", ""ru"", ""si"", ""sk"", ""sl"", ""so"", ""sq"", ""sv"", ""sw"", ""ta"", ""te"", ""th"", ""tl"", ""tr"", ""tt"", ""uk"", ""ur"", ""uz"", ""vi"", ""zh""], ""multilinguality"": [""multilingual""], ""size_categories"": [""n<1K"", ""1K>> from datasets import load_dataset
>>> ds = load_dataset(""nlpai-lab/kullm-v2"", split=""train"")
>>> ds
Dataset({
    features: ['id', 'instruction', 'input', 'output'],
    num_rows: 152630
})
```
```python
>>> ds[0]
{'id': 'alpaca_{idx}',
'instruction': '3원색이란 무엇인가요?',
'input': '',
'output': '세 가지 기본 색은 빨강, 파랑, 노랑입니다. 이 색은 다른 색을 혼합하여 만들 수 없고 다른 모든 색은 다양한 비율로 조합하여 만들 수 있기 때문에 원색이라고 부릅니다. 빛에 사용되는 첨가제 색상 시스템에서 원색은 빨강, 녹색, 파랑(RGB)입니다.'}
```"
adithya7/xlel_wd_dictionary,"{""annotations_creators"": [""found""], ""language_creators"": [""found""], ""language"": [""af"", ""ar"", ""be"", ""bg"", ""bn"", ""ca"", ""cs"", ""da"", ""de"", ""el"", ""en"", ""es"", ""fa"", ""fi"", ""fr"", ""he"", ""hi"", ""hu"", ""id"", ""it"", ""ja"", ""ko"", ""ml"", ""mr"", ""ms"", ""nl"", ""no"", ""pl"", ""pt"", ""ro"", ""ru"", ""si"", ""sk"", ""sl"", ""sr"", ""sv"", ""sw"", ""ta"", ""te"", ""th"", ""tr"", ""uk"", ""vi"", ""zh""], ""license"": [""cc-by-4.0""], ""multilinguality"": [""multilingual""], ""pretty_name"": ""XLEL-WD is a multilingual event linking dataset. This supplementary dataset contains a dictionary of event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding multilingual Wikipedia articles."", ""size_categories"": [""10K
- **Repository:**
- **Paper:**
- **Leaderboard:** N/A
- **Point of Contact:** Adithya Pratapa
### Dataset Summary
XLEL-WD is a multilingual event linking dataset. This supplementary dataset contains a dictionary of event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding multilingual Wikipedia articles.
### Supported Tasks and Leaderboards
This dictionary can be used as a part of the event linking task.
### Languages
This dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper.
| Language | Code | Language | Code | Language | Code | Language | Code |
| -------- | ---- | -------- | ---- | -------- | ---- | -------- | ---- |
| Afrikaans | af | Arabic | ar | Belarusian | be | Bulgarian | bg |
| Bengali | bn | Catalan | ca | Czech | cs | Danish | da |
| German | de | Greek | el | English | en | Spanish | es |
| Persian | fa | Finnish | fi | French | fr | Hebrew | he |
| Hindi | hi | Hungarian | hu | Indonesian | id | Italian | it |
| Japanese | ja | Korean | ko | Malayalam | ml | Marathi | mr |
| Malay | ms | Dutch | nl | Norwegian | no | Polish | pl |
| Portuguese | pt | Romanian | ro | Russian | ru | Sinhala | si |
| Slovak | sk | Slovene | sl | Serbian | sr | Swedish | sv |
| Swahili | sw | Tamil | ta | Telugu | te | Thai | th |
| Turkish | tr | Ukrainian | uk | Vietnamese | vi | Chinese | zh |
## Dataset Structure
### Data Instances
Each instance in the `label_dict.jsonl` file follows the template below:
```json
{
""label_id"": ""830917"",
""label_title"": ""2010 European Aquatics Championships"",
""label_desc"": ""The 2010 European Aquatics Championships were held from 4–15 August 2010 in Budapest and Balatonfüred, Hungary. It was the fourth time that the city of Budapest hosts this event after 1926, 1958 and 2006. Events in swimming, diving, synchronised swimming (synchro) and open water swimming were scheduled."",
""label_lang"": ""en""
}
```
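A minimal sketch of reading the dictionary with the standard library (the function name and path handling are illustrative, not part of the released code):

```python
import json

def load_label_dict(path):
    """Read the XLEL-WD dictionary: one JSON event description per line."""
    events = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                events.append(json.loads(line))
    return events
```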
### Data Fields
| Field | Meaning |
| ----- | ------- |
| `label_id` | Wikidata ID |
| `label_title` | Title for the event, as collected from the corresponding Wikipedia article |
| `label_desc` | Description for the event, as collected from the corresponding Wikipedia article |
| `label_lang` | language used for the title and description |
### Data Splits
This dictionary has a single split, `dictionary`. It contains 10,947 event items from Wikidata and a total of 114,834 text descriptions collected from multilingual Wikipedia articles.
## Dataset Creation
### Curation Rationale
This dataset helps address the task of event linking. KB linking is extensively studied for entities, but it is unclear whether the same methodologies can be extended to linking mentions to events in a KB. Event items are collected from Wikidata.
### Source Data
#### Initial Data Collection and Normalization
A Wikidata item is considered a potential event if it has spatial and temporal properties. The final event set is collected after post-processing for quality control.
#### Who are the source language producers?
The titles and descriptions for the events are written by Wikipedia contributors.
### Annotations
#### Annotation process
This dataset was automatically compiled from Wikidata. It was post-processed to improve data quality.
#### Who are the annotators?
Wikidata and Wikipedia contributors.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
This dictionary primarily contains eventive nouns from Wikidata. It does not include other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676), and war (Q198).
## Additional Information
### Dataset Curators
The dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at [Github:xlel-wd](https://github.com/adithya7/xlel-wd).
### Licensing Information
XLEL-WD dataset is released under [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```bib
@article{pratapa-etal-2022-multilingual,
title = {Multilingual Event Linking to Wikidata},
author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2204.06535},
}
```
### Contributions
Thanks to [@adithya7](https://github.com/adithya7) for adding this dataset."
stanford-oval/ccnews,"{""language"": [""multilingual"", ""af"", ""am"", ""ar"", ""as"", ""az"", ""be"", ""bg"", ""bn"", ""br"", ""bs"", ""ca"", ""cs"", ""cy"", ""da"", ""de"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fr"", ""fy"", ""ga"", ""gd"", ""gl"", ""gu"", ""ha"", ""he"", ""hi"", ""hr"", ""hu"", ""hy"", ""id"", ""is"", ""it"", ""ja"", ""jv"", ""ka"", ""kk"", ""km"", ""kn"", ""ko"", ""ku"", ""ky"", ""la"", ""lo"", ""lt"", ""lv"", ""mg"", ""mk"", ""ml"", ""mn"", ""mr"", ""ms"", ""my"", ""ne"", ""nl"", ""no"", ""om"", ""or"", ""pa"", ""pl"", ""ps"", ""pt"", ""ro"", ""ru"", ""sa"", ""sd"", ""si"", ""sk"", ""sl"", ""so"", ""sq"", ""sr"", ""su"", ""sv"", ""sw"", ""ta"", ""te"", ""th"", ""tl"", ""tr"", ""ug"", ""uk"", ""ur"", ""uz"", ""vi"", ""xh"", ""yi"", ""zh""], ""pretty_name"": ""All of Common Crawl News, 100+ languages, preprocessed and cleaned"", ""task_categories"": [""text-classification"", ""question-answering"", ""text-generation"", ""text2text-generation""], ""size_categories"": [""100M
## Dataset Description
Korean Hate Speech Evaluation Datasets: trained with [BEEP!](https://huggingface.co/datasets/kor_hate) and evaluated with [APEACH](https://github.com/jason9693/APEACH)
- **Repository: [Korean HateSpeech Evaluation Dataset](https://github.com/jason9693/APEACH)**
- **Paper: [APEACH: Attacking Pejorative Expressions with Analysis on Crowd-Generated Hate Speech Evaluation Datasets](https://arxiv.org/abs/2202.12459)**
- **Point of Contact: [Kichang Yang](ykcha9@gmail.com)**
### Languages
ko-KR
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
{'text': ['(현재 호텔주인 심정) 아18 난 마른하늘에 날벼락맞고 호텔망하게생겼는데 누군 계속 추모받네....',
'....한국적인 미인의 대표적인 분...너무나 곱고아름다운모습...그모습뒤의 슬픔을 미처 알지못했네요ㅠ'],
'class': ['Spoiled', 'Default']}
```
### Dataset Fields
The dataset has the following fields (also called ""features""):
```json
{
""text"": ""Value(dtype='string', id=None)"",
""class"": ""ClassLabel(num_classes=2, names=['Default', 'Spoiled'], id=None)""
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train (binarized BEEP!) | 7896 |
| valid (APEACH) | 3770 |
## Citation
```
@article{yang2022apeach,
title={APEACH: Attacking Pejorative Expressions with Analysis on Crowd-Generated Hate Speech Evaluation Datasets},
author={Yang, Kichang and Jang, Wonjun and Cho, Won Ik},
journal={arXiv preprint arXiv:2202.12459},
year={2022}
}
```"
xu-song/cc100-samples,"{""annotations_creators"": [""no-annotation""], ""language_creators"": [""found""], ""datasets"": [""cc100""], ""language"": [""af"", ""am"", ""ar"", ""as"", ""az"", ""be"", ""bg"", ""bn"", ""br"", ""bs"", ""ca"", ""cs"", ""cy"", ""da"", ""de"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""ff"", ""fi"", ""fr"", ""fy"", ""ga"", ""gd"", ""gl"", ""gn"", ""gu"", ""ha"", ""he"", ""hi"", ""hr"", ""ht"", ""hu"", ""hy"", ""id"", ""ig"", ""is"", ""it"", ""ja"", ""jv"", ""ka"", ""kk"", ""km"", ""kn"", ""ko"", ""ku"", ""ky"", ""la"", ""lg"", ""li"", ""ln"", ""lo"", ""lt"", ""lv"", ""mg"", ""mk"", ""ml"", ""mn"", ""mr"", ""ms"", ""my"", ""ne"", ""nl"", ""no"", ""ns"", ""om"", ""or"", ""pa"", ""pl"", ""ps"", ""pt"", ""qu"", ""rm"", ""ro"", ""ru"", ""sa"", ""sc"", ""sd"", ""si"", ""sk"", ""sl"", ""so"", ""sq"", ""sr"", ""ss"", ""su"", ""sv"", ""sw"", ""ta"", ""te"", ""th"", ""tl"", ""tn"", ""tr"", ""ug"", ""uk"", ""ur"", ""uz"", ""vi"", ""wo"", ""xh"", ""yi"", ""yo"", ""zh"", ""zu""], ""language_bcp47"": [""bn-Latn"", ""hi-Latn"", ""my-x-zawgyi"", ""ta-Latn"", ""te-Latn"", ""ur-Latn"", ""zh-Hans"", ""zh-Hant""], ""license"": [""unknown""], ""multilinguality"": [""multilingual""], ""size_categories"": [""1K
A dataset containing strings from projects hosted on [Weblate](https://hosted.weblate.org) and their translations into other languages.
Please consider [donating](https://weblate.org/en/donate/) or [contributing](https://weblate.org/en/contribute/) to Weblate if you find this dataset useful.
To avoid rows with values like ""None"" and ""N/A"" being interpreted as missing values, pass the `keep_default_na` parameter like this:
```python
from datasets import load_dataset
dataset = load_dataset(""ayymen/Weblate-Translations"", keep_default_na=False)
```
## Dataset Details
### Dataset Description
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** Each sentence pair in the dataset has a corresponding license in the ""license"" column. This license is the one specified in the component or project containing the sentence.
### Dataset Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
- Machine Translation
- Language Identification
### Direct Use
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Data Collection and Processing
- Sentence pairs with empty/missing elements were dropped.
- Identical pairs were dropped.
- Trailing whitespace was stripped.
- Rows were deduplicated based on all 3 columns including ""license"", on a per-config/subset/TSV-file basis. This means that a single config might contain two identical sentence pairs with different licenses, and that a different config/subset might contain the exact same row (most likely a different variant/dialect of the same language(s)).
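The deduplication step can be sketched with pandas; the column names `source` and `target` are hypothetical stand-ins for the actual sentence-pair columns:

```python
import pandas as pd

rows = pd.DataFrame(
    {
        "source": ["Hello", "Hello", "Hello"],
        "target": ["Bonjour", "Bonjour", "Bonjour"],
        "license": ["MIT", "MIT", "GPL-3.0-or-later"],
    }
)

# Deduplicate on all three columns: an identical sentence pair under a
# *different* license survives, so a config may repeat a pair.
deduped = rows.drop_duplicates(subset=["source", "target", "license"])
print(len(deduped))  # 2
```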
#### Who are the source data producers?
[More Information Needed]
### Annotations [optional]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
Weblate users.
#### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]"
heegyu/namuwiki-extracted,"{""license"": ""cc-by-nc-sa-2.0"", ""language"": [""ko""], ""language_creators"": [""other""], ""multilinguality"": [""monolingual""], ""size_categories"": [""100K
- 571,308 rows
- download size: 2.19 GB
## Caveats
Preprocessing was done with namu-wiki-extractor, plus the following additional steps:
1. Headers such as `== 개요 ==` were removed
1. Tables were removed
1. `[age(1997-01-01)]` markers were resolved as of the preprocessing date (October 2, 2022)
1. `[math(a / b + c)]` markup was not removed.
1. Known issue: when math markup appears inside a footnote, the footnote is not preprocessed.
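The `[age(...)]` resolution can be sketched as follows (illustrative only, not the actual preprocessing code; `apply_age_macro` is a made-up name):

```python
import re
from datetime import date

def apply_age_macro(text, today=date(2022, 10, 2)):
    """Replace [age(YYYY-MM-DD)] markers with the age as of `today`."""
    def repl(match):
        born = date.fromisoformat(match.group(1))
        age = today.year - born.year - (
            (today.month, today.day) < (born.month, born.day)
        )
        return str(age)
    return re.sub(r"\[age\((\d{4}-\d{2}-\d{2})\)\]", repl, text)

print(apply_age_macro("[age(1997-01-01)]"))  # 25
```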
## Usage
```bash
pip install datasets
```
```python
from datasets import load_dataset
dataset = load_dataset(""heegyu/namuwiki-extracted"")
print(dataset[""train""][0])
```
```
{
'title': '!!아앗!!',
'text': '!!ああっと!! ▲신 세계수의 미궁 2에서 뜬 !!아앗!! 세계수의 미궁 시리즈에 전통으로 등장하는 대사. 2편부터 등장했으며 훌륭한 사망 플래그의 예시이다. 세계수의 모험가들이 탐험하는 던전인 수해의 구석구석에는 채취/벌채/채굴 포인트가 있으며, 이를 위한 채집 스킬에 ...',
'contributors': '110.46.34.123,kirby10,max0243,218.54.117.149,ruby3141,121.165.63.239,iviyuki,1.229.200.194,anatra95,kiri47,175.127.134.2,nickchaos71,chkong1998,kiwitree2,namubot,huwieblusnow',
'namespace': ''
}
```"
bongsoo/kowiki20220620,"{""language"": [""ko""], ""license"": ""apache-2.0""}",-kowiki202206 single-line corpus
wikimedia/wikisource,"{""language"": [""ar"", ""as"", ""az"", ""ban"", ""be"", ""bg"", ""bn"", ""br"", ""bs"", ""ca"", ""cs"", ""cy"", ""da"", ""de"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fo"", ""fr"", ""gl"", ""gu"", ""he"", ""hi"", ""hr"", ""hu"", ""hy"", ""id"", ""is"", ""it"", ""ja"", ""jv"", ""kn"", ""ko"", ""la"", ""li"", ""lij"", ""lt"", ""mk"", ""ml"", ""mr"", ""nan"", ""nap"", ""nl"", ""no"", ""or"", ""pa"", ""pl"", ""pms"", ""pt"", ""ro"", ""ru"", ""sa"", ""sah"", ""sk"", ""sl"", ""sr"", ""su"", ""sv"", ""ta"", ""te"", ""th"", ""tr"", ""uk"", ""vec"", ""vi"", ""wa"", ""yi"", ""zh""], ""license"": [""cc-by-sa-3.0"", ""gfdl""], ""size_categories"": [""n<1K"", ""1K
This is a dataset containing strings from various Mozilla projects on Mozilla's [Pontoon](https://pontoon.mozilla.org) localization platform and their translations into more than 200 languages.
Source strings are in English.
To avoid rows with values like ""None"" and ""N/A"" being interpreted as missing values, pass the `keep_default_na` parameter like this:
```python
from datasets import load_dataset
dataset = load_dataset(""ayymen/Pontoon-Translations"", keep_default_na=False)
```
## Dataset Details
### Dataset Description
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** Per [Pontoons's terms](https://pontoon.mozilla.org/terms/) ""Translations are governed by the [Mozilla Public License 2.0](https://www.mozilla.org/en-US/MPL/2.0/), or another license or set of licenses acceptable to the Mozilla Foundation.""
### Dataset Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
- Machine Translation
- Language Identification
### Direct Use
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Data Collection and Processing
- Sentence pairs with empty/missing elements were dropped.
- Identical pairs were dropped.
- Rows where the English string does not contain any letters were dropped.
- Leading and trailing whitespace was stripped.
- Rows were deduplicated.
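The letters filter can be approximated with a Unicode-aware regular expression; this is a sketch, not the actual cleaning code:

```python
import re

def has_letters(text):
    """True if the string contains at least one Unicode letter."""
    # [^\W\d_] matches a word character that is neither a digit
    # nor an underscore, i.e. a letter in any script.
    return bool(re.search(r"[^\W\d_]", text))

print(has_letters("Firefox 2.0"))  # True
print(has_letters("100%"))         # False
```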
#### Who are the source data producers?
[More Information Needed]
### Annotations [optional]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
Pontoon users.
#### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]"
openai/MMMLU,"{""task_categories"": [""question-answering""], ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""test"", ""path"": ""test/*.csv""}]}, {""config_name"": ""AR_XY"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_AR-XY.csv""}]}, {""config_name"": ""BN_BD"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_BN-BD.csv""}]}, {""config_name"": ""DE_DE"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_DE-DE.csv""}]}, {""config_name"": ""ES_LA"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_ES-LA.csv""}]}, {""config_name"": ""FR_FR"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_FR-FR.csv""}]}, {""config_name"": ""HI_IN"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_HI-IN.csv""}]}, {""config_name"": ""ID_ID"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_ID-ID.csv""}]}, {""config_name"": ""IT_IT"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_IT-IT.csv""}]}, {""config_name"": ""JA_JP"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_JA-JP.csv""}]}, {""config_name"": ""KO_KR"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_KO-KR.csv""}]}, {""config_name"": ""PT_BR"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_PT-BR.csv""}]}, {""config_name"": ""SW_KE"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_SW-KE.csv""}]}, {""config_name"": ""YO_NG"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_YO-NG.csv""}]}, {""config_name"": ""ZH_CN"", ""data_files"": [{""split"": ""test"", ""path"": ""test/mmlu_ZH-CN.csv""}]}], ""language"": [""ar"", ""bn"", ""de"", ""es"", ""fr"", ""hi"", ""id"", ""it"", ""ja"", ""ko"", ""pt"", ""sw"", ""yo"", ""zh""], ""license"": ""mit""}","# Multilingual Massive Multitask Language Understanding (MMMLU)
The MMLU is a widely recognized benchmark of general knowledge attained by AI models. It covers 57 categories that span a broad range of topics, from elementary-level knowledge up to advanced professional subjects like law, physics, history, and computer science.
We translated the MMLU’s test set into 14 languages using professional human translators. Relying on human translators for this evaluation increases confidence in the accuracy of the translations, especially for low-resource languages like Yoruba. We are publishing the professional human translations and the code we use to run the evaluations.
This effort reflects our commitment to improving the multilingual capabilities of AI models, ensuring they perform accurately across languages, particularly for underrepresented communities. By prioritizing high-quality translations, we aim to make AI technology more inclusive and effective for users worldwide.
## Locales
MMMLU contains the MMLU test set translated into the following locales:
* AR_XY (Arabic)
* BN_BD (Bengali)
* DE_DE (German)
* ES_LA (Spanish)
* FR_FR (French)
* HI_IN (Hindi)
* ID_ID (Indonesian)
* IT_IT (Italian)
* JA_JP (Japanese)
* KO_KR (Korean)
* PT_BR (Brazilian Portuguese)
* SW_KE (Swahili)
* YO_NG (Yoruba)
* ZH_CN (Simplified Chinese)
## Sources
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., & Steinhardt, J. (2021). [*Measuring Massive Multitask Language Understanding*](https://arxiv.org/abs/2009.03300).
[OpenAI Simple Evals GitHub Repository](https://github.com/openai/simple-evals)"
sentence-transformers/parallel-sentences-opus-100,"{""annotations_creators"": [""no-annotation""], ""language_creators"": [""found""], ""language"": [""af"", ""am"", ""an"", ""ar"", ""as"", ""az"", ""be"", ""bg"", ""bn"", ""br"", ""bs"", ""ca"", ""cs"", ""cy"", ""da"", ""de"", ""dz"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fr"", ""fy"", ""ga"", ""gd"", ""gl"", ""gu"", ""ha"", ""he"", ""hi"", ""hr"", ""hu"", ""hy"", ""id"", ""ig"", ""is"", ""it"", ""ja"", ""ka"", ""kk"", ""km"", ""kn"", ""ko"", ""ku"", ""ky"", ""li"", ""lt"", ""lv"", ""mg"", ""mk"", ""ml"", ""mn"", ""mr"", ""ms"", ""mt"", ""my"", ""nb"", ""ne"", ""nl"", ""nn"", ""no"", ""oc"", ""or"", ""pa"", ""pl"", ""ps"", ""pt"", ""ro"", ""ru"", ""rw"", ""se"", ""sh"", ""si"", ""sk"", ""sl"", ""sq"", ""sr"", ""sv"", ""ta"", ""te"", ""tg"", ""th"", ""tk"", ""tr"", ""tt"", ""ug"", ""uk"", ""ur"", ""uz"", ""vi"", ""wa"", ""xh"", ""yi"", ""yo"", ""zh"", ""zu""], ""size_categories"": [""10M wc -l *
6206 sharegpt_gpt4.jsonl # Cleaned, high-quality GPT-4 everyday Q&A dataset (~6K): mainly knowledge Q&A, programming problems, and reasoning/calculation; multi-turn dialogues in several languages, including Simplified Chinese, Traditional Chinese, English, Japanese, and Korean.
58674 sharegpt_V3_format.jsonl # The original V3 ShareGPT dataset after format normalization (~58K): mainly everyday Q&A with colloquial questions; multilingual, multi-turn dialogues.
38535 sharegpt_zh_38K_format.jsonl # Chinese GPT-4 everyday Q&A dataset (~38K): mainly knowledge Q&A, translation tasks, help requests, and programming/reasoning tasks, phrased colloquially; Chinese, multi-turn dialogues.
103415 total
```
#### Who are the annotators?
原作者。
### Licensing Information
Same as ShareGPT.
### Contributions
[shibing624](https://github.com/shibing624) added this dataset.
GEM/surface_realisation_st_2020,"{""annotations_creators"": [""none""], ""language_creators"": [""unknown""], ""language"": [""ar"", ""zh"", ""en"", ""fr"", ""hi"", ""id"", ""ja"", ""ko"", ""pt"", ""ru"", ""es""], ""license"": [""cc-by-2.5""], ""multilinguality"": [""unknown""], ""size_categories"": [""unknown""], ""source_datasets"": [""original""], ""task_categories"": [""table-to-text""], ""task_ids"": [], ""pretty_name"": ""surface_realisation_st_2020"", ""tags"": [""data-to-text""]}","# Dataset Card for GEM/surface_realisation_st_2020
## Dataset Description
- **Homepage:** http://taln.upf.edu/pages/msr2020-ws/SRST.html#data
- **Repository:** https://sites.google.com/site/genchalrepository/surface-realisation/sr-20-multilingual
- **Paper:** https://aclanthology.org/2020.msr-1.1/
- **Leaderboard:** N/A
- **Point of Contact:** Simon Mille
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/surface_realisation_st_2020).
### Dataset Summary
This dataset was used as part of the multilingual surface realization shared task, in which a model gets full or partial Universal Dependency structures and has to reconstruct the natural language. This dataset supports 11 languages.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/surface_realisation_st_2020')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/surface_realisation_st_2020).
#### website
[Website](http://taln.upf.edu/pages/msr2020-ws/SRST.html#data)
#### paper
[ACL Anthology](https://aclanthology.org/2020.msr-1.1/)
#### authors
Simon Mille (Pompeu Fabra University); Leo Wanner (Pompeu Fabra University); Anya Belz (Brighton University); Bernd Bohnet (Google Inc.); Thiago Castro Ferreira (Federal University of Minas Gerais); Yvette Graham (ADAPT/Trinity College Dublin)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
[Website](http://taln.upf.edu/pages/msr2020-ws/SRST.html#data)
#### Download
[Website](https://sites.google.com/site/genchalrepository/surface-realisation/sr-20-multilingual)
#### Paper
[ACL Anthology](https://aclanthology.org/2020.msr-1.1/)
#### BibTex
```
@inproceedings{mille-etal-2020-third,
title = ""The Third Multilingual Surface Realisation Shared Task ({SR}{'}20): Overview and Evaluation Results"",
author = ""Mille, Simon and
Belz, Anya and
Bohnet, Bernd and
Castro Ferreira, Thiago and
Graham, Yvette and
Wanner, Leo"",
booktitle = ""Proceedings of the Third Workshop on Multilingual Surface Realisation"",
month = dec,
year = ""2020"",
address = ""Barcelona, Spain (Online)"",
publisher = ""Association for Computational Linguistics"",
url = ""https://aclanthology.org/2020.msr-1.1"",
pages = ""1--20"",
abstract = ""This paper presents results from the Third Shared Task on Multilingual Surface Realisation (SR{'}20) which was organised as part of the COLING{'}20 Workshop on Multilingual Surface Realisation. As in SR{'}18 and SR{'}19, the shared task comprised two tracks: (1) a Shallow Track where the inputs were full UD structures with word order information removed and tokens lemmatised; and (2) a Deep Track where additionally, functional words and morphological information were removed. Moreover, each track had two subtracks: (a) restricted-resource, where only the data provided or approved as part of a track could be used for training models, and (b) open-resource, where any data could be used. The Shallow Track was offered in 11 languages, whereas the Deep Track in 3 ones. Systems were evaluated using both automatic metrics and direct assessment by human evaluators in terms of Readability and Meaning Similarity to reference outputs. We present the evaluation results, along with descriptions of the SR{'}19 tracks, data and evaluation methods, as well as brief summaries of the participating systems. For full descriptions of the participating systems, please see the separate system reports elsewhere in this volume."",
}
```
#### Contact Name
Simon Mille
#### Contact Email
sfmille@gmail.com
#### Has a Leaderboard?
no
### Languages and Intended Use
#### Multilingual?
yes
#### Covered Dialects
No multiple dialects.
#### Covered Languages
`Arabic`, `Chinese`, `English`, `French`, `Hindi`, `Indonesian`, `Japanese`, `Korean`, `Portuguese`, `Russian`, `Spanish, Castilian`
#### Whose Language?
Unknown
#### License
cc-by-2.5: Creative Commons Attribution 2.5 Generic
#### Intended Use
The dataset is intended to be used for training models to solve several NLG subtasks, such as function word introduction, morphological agreement resolution, word order determination and inflection generation.
Comment about the license: the dataset has multiple licences, since each original dataset has its own type of licence. All datasets but one are CC-BY or a variant of it; the remaining one (French Sequoia) is GPL.
#### Primary Task
Data-to-Text
#### Communicative Goal
The models are able to introduce surface features (syntax, morphology, topology) from inputs at different levels of abstraction, the most abstract being predicate-argument structures. The datasets cover a large variety of domains (news, blogs, forums, Wikipedia pages, etc.).
### Credit
#### Curation Organization Type(s)
`industry`, `academic`
#### Curation Organization(s)
Pompeu Fabra University, Google Inc., University of Brighton, Federal University of Minas Gerais, ADAPT/Trinity College Dublin
#### Dataset Creators
Simon Mille (Pompeu Fabra University); Leo Wanner (Pompeu Fabra University); Anya Belz (Brighton University); Bernd Bohnet (Google Inc.); Thiago Castro Ferreira (Federal University of Minas Gerais); Yvette Graham (ADAPT/Trinity College Dublin)
#### Funding
Mostly EU funds via H2020 projects
#### Who added the Dataset to GEM?
Simon Mille (Pompeu Fabra University)
### Dataset Structure
#### Data Fields
`input` (string): this field contains an input tree in CoNLL-U format; the CoNLL-U format is a one-word-per-line format with the following tab-separated 10 columns (see [here](http://universaldependencies.org/format.html)): [1] Position, [2] Lemma, [3] Wordform, [4] Part of Speech, [5] Fine-grained Part of Speech (if available), [6] Features (FEATS), [7] governor, [8] dependency relation, [9] additional dependency information, and [10] metadata. For the surface task, the input is a Universal Dependency tree of a given language in which the word order was scrambled and the surface forms removed (only lemmas are available); for the deep task, the input is a tree derived from the surface input, with predicate-argument relations between content words only (function words were removed) and without any morphological agreement information.
`target_tokenized` (string): this field contains the target sentence to generate, in which every non-initial and non-final token is surrounded by two spaces. This output is usually used for automatic evaluations.
`target` (string): this field contains the detokenised target sentence to generate. This output is usually used for human evaluations.
`gem_id` (string): a unique ID.
`sentence_id` (string): the original ID of a sentence in the UD dataset.
#### Reason for Structure
The structure of the input (CoNLL-U) was chosen according to the standards in parsing, and because the original UD datasets were provided in this format.
#### How were labels chosen?
The input labels for the surface track are the original labels in the UD treebanks; see [here](https://universaldependencies.org/u/dep/index.html) for the dependencies, [here](https://universaldependencies.org/u/feat/index.html) for the features, and [here](https://universaldependencies.org/u/pos/index.html) for the PoS tags.
The input labels for the deep track are a subset of the PoS tags and features of the surface track, and for the relations, universal predicate-argument relations augmented with a few specific relations to capture coordinations and named entity relations for instance.
#### Example Instance
```
{""input"": ""1\tGoogle\t_\tPROPN\tNNP\tNumber=Sing\t5\tnsubj\t_\t_\n2\t\t_\tPUNCT\t.\tlin=+1\t5\tpunct\t_\t_\n3\tinto\t_\tADP\tIN\t_\t6\tcase\t_\t_\n4\tif\t_\tSCONJ\tIN\t_\t5\tmark\t_\t_\n5\tmorph\t_\tVERB\tVBD\tMood=Ind|Tense=Past|VerbForm=Fin\t7\tadvcl\t_\t_\n6\tGoogleOS\t_\tPROPN\tNNP\tNumber=Sing\t5\tobl\t_\t_\n7\twhat\t_\tPRON\tWP\tPronType=Int\t0\troot\t_\t_"", ""target_tokenized"": ""What if Google Morphed Into GoogleOS ?"", ""target"": ""What if Google Morphed Into GoogleOS?"", ""gem_id"": ""GEM-surface_realisation_st_2020-T1-test-en_ewt-ud-test-0"", ""sentence_id"": """"}
```
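The tab-separated `input` field described above can be parsed in a few lines (a minimal sketch keeping only four of the ten columns; `parse_conllu_input` is a hypothetical helper, not shared-task code):

```python
def parse_conllu_input(block):
    """Parse one input tree: ten tab-separated columns per token line."""
    tokens = []
    for line in block.split("\n"):
        cols = line.split("\t")
        tokens.append(
            {
                "position": cols[0],   # column 1: position
                "lemma": cols[1],      # column 2: lemma
                "governor": cols[6],   # column 7: governor
                "deprel": cols[7],     # column 8: dependency relation
            }
        )
    return tokens

tree = parse_conllu_input(
    "1\tGoogle\t_\tPROPN\tNNP\tNumber=Sing\t5\tnsubj\t_\t_\n"
    "7\twhat\t_\tPRON\tWP\tPronType=Int\t0\troot\t_\t_"
)
print(tree[0]["lemma"], tree[0]["deprel"])  # Google nsubj
```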
#### Data Splits
There are 119 splits in the dataset:
- 29 training sets, which correspond to 20 UD datasets (11 languages), 9 of which have both surface and deep inputs (3 languages);
- 29 development set which correspond to the 29 training sets above;
- 29 test sets for the data described above;
- 4 out-of-domain test sets, 3 surface inputs and 1 deep one (3 languages for which PUD out-of-domain datasets were available);
- 9 automatically parsed in-domain test sets, 6 surface inputs and 3 deep inputs (6 languages for which good UD parsers were available);
- 9 automatically parsed out-of-domain test sets, 6 surface inputs and 3 deep inputs (6 languages for which we were able to create clean Wikipedia text and that had a good UD parser).
#### Splitting Criteria
Described above for more clarity.
#### Outliers
An outlier would usually be an input that corresponds to a very long sentence (e.g. 159 words in English, when the average number of words per sentence is around 25).
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
The dataset includes languages from different families and some languages not often used in NLG (e.g. Arabic, Indonesian, Korean, Hindi). It proposes two tasks, which can be tackled both separately and in one shot, with different levels of difficulty: the most superficial task (T1) consists in ordering and inflecting some trees, while the deeper task (T2) includes extra tasks such as defining the syntactic structure and introducing function words and morphological agreement information. Both tasks can allow for developing modules for pipeline NLG architectures. T1 is rather straightforward to evaluate: BLEU works quite well for some languages, since all the words are present in the input and only a few word orders are possible for a given syntactic tree. T2 is more challenging to evaluate, since more outputs are correct for one particular input.
There is a large variety of sizes in the datasets, both clean and noisy data, parallel data in different languages, and many already available system outputs to use as baselines.
#### Similar Datasets
yes
#### Unique Language Coverage
yes
#### Difference from other GEM datasets
This is possibly the only dataset that starts the generation process from predicate-argument structures and from syntactic structures. It also has parallel datasets in a few languages (coming from the PUD parallel annotations).
#### Ability that the Dataset measures
Syntacticisation, functional word introduction, word order resolution, agreement resolution, morphological inflection
### GEM-Specific Curation
#### Modified for GEM?
no
#### Additional Splits?
no
### Getting Started with the Task
#### Pointers to Resources
[Website](http://taln.upf.edu/pages/msr2020-ws/SRST.html)
#### Technical Terms
Syntacticisation: prediction of the syntactic structure.
## Previous Results
### Previous Results
#### Measured Model Abilities
Syntacticisation, functional word introduction, word order resolution, morphological agreement resolution, morphological inflection
#### Metrics
`BLEU`, `BERT-Score`, `Other: Other Metrics`
#### Other Metrics
NIST: an n-gram similarity metric weighted in favour of less frequent n-grams, which are taken to be more informative.
Normalised edit distance (DIST): inverse, normalised, character-based string-edit distance that starts by computing the minimum number of character inserts, deletes and substitutions (all at cost 1) required to turn the system output into the (single) reference text.
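As a rough illustration (a minimal sketch, not the shared task's official implementation), DIST can be computed as an inverse, normalised character-level Levenshtein distance:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of character inserts, deletes and substitutions (all at cost 1)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete
                            curr[j - 1] + 1,             # insert
                            prev[j - 1] + (ca != cb)))   # substitute (free if equal)
        prev = curr
    return prev[-1]

def dist_score(system: str, reference: str) -> float:
    """Inverse, normalised edit distance in [0, 1]; 1.0 means the strings are identical."""
    if not system and not reference:
        return 1.0
    return 1.0 - levenshtein(system, reference) / max(len(system), len(reference))
```

Normalising by the longer string keeps the score in [0, 1] regardless of sentence length.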
#### Proposed Evaluation
BLEU, NIST, BERTScore and DIST all aim, in different ways, at measuring the similarity between a predicted sentence and a reference sentence.
Two additional criteria have been used for human evaluation: Readability and Meaning Similarity. The statement to be assessed in the Readability evaluation was: ""The text reads well and is free from grammatical errors and awkward constructions."" The corresponding statement in the Meaning Similarity evaluation, in which system outputs (‘the black text’) were compared to reference sentences (‘the gray text’), was: ""The meaning of the gray text is adequately expressed by the black text.""
#### Previous results available?
yes
#### Other Evaluation Approaches
Same as above.
#### Relevant Previous Results
- [Fast and Accurate Non-Projective Dependency Tree Linearization](https://aclanthology.org/2020.acl-main.134/)
- [Shape of Synth to Come: Why We Should Use Synthetic Data for English Surface Realization](https://aclanthology.org/2020.acl-main.665/)
## Dataset Curation
### Original Curation
#### Original Curation Rationale
The datasets were created in the context of the Surface Realisation Shared Task series.
#### Communicative Goal
The dataset's objective was to allow for training systems to perform tasks related to surface realisation (introduction of function words, syntacticisation, resolution of morphological agreements, word order resolution, inflection generation).
#### Sourced from Different Sources
yes
#### Source Details
Each of the 20 UD datasets used comes from its own sources, all listed on the individual page of each UD treebank (https://universaldependencies.org/).
Additional test sets were created for the task, and were obtained from Wikipedia pages for 6 languages.
### Language Data
#### How was Language Data Obtained?
`Found`
#### Where was it found?
`Multiple websites`
#### Language Producers
There are numerous sources of language in the multiple datasets.
#### Topics Covered
There is a large variety of topics in the multiple datasets.
#### Data Validation
not validated
#### Data Preprocessing
The text data was tokenised so as to create references for automatic evaluation (several languages don't use spaces to separate words, and running metrics like BLEU would not make sense without separating all the tokens in a sentence).
#### Was Data Filtered?
hybrid
#### Filter Criteria
For the Wikipedia test set created for the shared task, extensive filtering was applied to achieve reasonably good text quality. Sentences that include special characters, contain unusual tokens (e.g. ISBN), or have unbalanced quotation marks or brackets were skipped. Furthermore, only sentences with more than 5 tokens and shorter than 50 tokens were selected. After the initial filtering, quite a few malformed sentences remained; in order to remove those, the sentences were scored with BERT and only the top half were kept. Finally, via manual inspection, patterns and expressions were identified to further reduce the number of malformed sentences.
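The length and balance checks described above can be sketched as follows (an illustrative simplification, not the original filtering code; the BERT scoring and the manual pattern pass are omitted):

```python
def passes_basic_filters(sentence: str) -> bool:
    """Keep sentences with more than 5 and fewer than 50 tokens,
    balanced quotes/brackets, and no unusual tokens such as ISBN."""
    tokens = sentence.split()
    if not (5 < len(tokens) < 50):
        return False
    if sentence.count('"') % 2 != 0:          # unbalanced quotation marks
        return False
    for open_ch, close_ch in [("(", ")"), ("[", "]"), ("{", "}")]:
        if sentence.count(open_ch) != sentence.count(close_ch):
            return False
    if "ISBN" in sentence:                     # unusual tokens are skipped
        return False
    return True
```

Sentences surviving these cheap checks would then go on to the BERT-scoring stage.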
### Structured Annotations
#### Additional Annotations?
none
#### Annotation Service?
no
### Consent
#### Any Consent Policy?
no
#### Justification for Using the Data
The Universal Dependency data had been previously used for shared tasks on parsing, so it made sense to reuse it for generation.
### Private Identifying Information (PII)
#### Contains PII?
unlikely
#### Any PII Identification?
no identification
### Maintenance
#### Any Maintenance Plan?
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
yes
#### Details on how Dataset Addresses the Needs
Thanks to the original work of the UD dataset creators, the surface realisation dataset addresses a few languages which are possibly under-served in NLG: e.g. Arabic, Hindi, Indonesian, Korean.
### Discussion of Biases
#### Any Documented Social Biases?
no
#### Are the Language Producers Representative of the Language?
It is very likely that the distribution of language producers is not fully represented in the datasets of each language.
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
No risks foreseen.
### Licenses
#### Copyright Restrictions on the Dataset
`multiple licenses`, `open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
`multiple licenses`, `open license - commercial use allowed`
### Known Technical Limitations
#### Technical Limitations
The deep track inputs (predicate-argument structures) are not of perfect quality: they were derived automatically from gold or predicted syntactic parses using handcrafted grammars.
#### Unsuited Applications
The datasets are probably not suited to training tools that produce ""unusual"" language (e.g. poetry, children's writing, etc.).
#### Discouraged Use Cases
To be thought of :)"
royboy0416/ko-alpaca,"{""license"": ""cc-by-4.0"", ""task_categories"": [""text-generation""], ""language"": [""ko""]}","For testing purposes only. Do not redistribute.
Original contents: [url] https://huggingface.co/datasets/tatsu-lab/alpaca
Ko-alpaca: [url] https://github.com/Beomi/KoAlpaca/blob/main/ko_alpaca_data.json"
sean0042/KorMedMCQA,"{""language"": [""ko""], ""license"": ""cc-by-nc-2.0"", ""size_categories"": [""10K>> from datasets import load_dataset
>>> dataset = load_dataset(""Bingsu/zeroth-korean"")
>>> dataset
DatasetDict({
train: Dataset({
features: ['audio', 'text'],
num_rows: 22263
})
test: Dataset({
features: ['text', 'audio'],
num_rows: 457
})
})
```
### Data Size
download: 2.68 GiB
generated: 2.85 GiB
total: 5.52 GiB
### Data Fields
- audio: `audio`, sampling rate = 16000
- A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- Note that when accessing the audio column: `dataset[0][""audio""]` the audio file is automatically decoded and resampled to `dataset.features[""audio""].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the ""audio"" column, i.e. `dataset[0][""audio""]` should always be preferred over `dataset[""audio""][0]`.
- text: `string`
```pycon
>>> dataset[""train""][0]
{'audio': {'path': None,
'array': array([-3.0517578e-05, 0.0000000e+00, -3.0517578e-05, ...,
0.0000000e+00, 0.0000000e+00, -6.1035156e-05], dtype=float32),
'sampling_rate': 16000},
'text': '인사를 결정하는 과정에서 당 지도부가 우 원내대표 및 원내지도부와 충분한 상의를 거치지 않은 채 일방적으로 인사를 했다는 불만도 원내지도부를 중심으로 흘러나왔다'}
```
### Data Splits
| | train | test |
| ---------- | -------- | ----- |
| # of data | 22263 | 457 |"
MarkrAI/KoCommercial-Dataset,"{""language"": [""ko""], ""license"": ""mit"", ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""dataset_info"": {""features"": [{""name"": ""input"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 187990458, ""num_examples"": 175454}], ""download_size"": 110149618, ""dataset_size"": 187990458}}","# Code Release for SSL Data Generation
**[GitHub repo for SSL data generation](https://github.com/DopeorNope-Lee/Ko-Fine-tuning_DataGen)**
- We apologize for some confusion that arose in the course of copyright discussions with NIA and AI-Hub.
- Accordingly, we are releasing, as-is, the code we previously used to generate the SSL data.
- Note, however, that this release does not include our later pipeline step of filtering and correcting the data with our own local model, so please take that into account.
- Anyone may use the code; please feel free to adapt it to your own projects and tasks!
--------------------
# Dataset: KoCommercial-Dataset
## Info
**Number of examples:** about 1.44M
**License:** MIT
**Dataset list (all usable for commercial purposes)**
1. [kyujinpy/KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus) (*except non-commercial datasets)
2. [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a)
3. [HumanF-MarkrAI/WIKI_QA_Near_dedup](https://huggingface.co/datasets/HumanF-MarkrAI/WIKI_QA_Near_dedup)
4. [KorQuadv1.0](https://korquad.github.io/KorQuad%201.0/)
5. [AIHUB](https://www.aihub.or.kr/) (for the AIHUB data, generate it yourself using the GitHub repo above.)
- [General common-sense sentence generation data](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=713090)
- [Book material summarization](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=93)
- [Academic paper summarization](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=90)
- [Document summarization text](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=97)
---
**Self-supervised methods (AIHUB dataset processing)**
**0. (Default) Summary & Instruction-Answer**
```
주어진 문장에 적절한 제목을 생성하고, 내용을 요약해주세요.
문장: 원고가 소속회사의 노동조합에서 분규가 발생하자 노조활동을 구실로 정상적인 근무를 해태하고, ...
제목: 부당노동행위구제재심판정취소
원고가 주동하여 회사업무능률을 저해하고 회사업무상의 지휘명령에 위반하였다면 이에 따른 징계해고는 사내질서를 유지하기 위한 사용자 고유의 정당한 징계권의 행사로 보아야 한다.
```
**1. Sentence order inference**
```
임의의 순서로 나열된 문장들이 주어집니다. 주어진 문장들을 이용해 원본의 배열을 유추하고, 그 내용을 재구성하세요.
임의의 순서로 나열된 문장: ['나는', '천재다', '그러나', '바보다', '동시에']
나는 천재다. 그러나 동시에 바보다.
```
**2. Original sentence inference**
```
주어진 제목과 요약문에 대한 정보를 토대로, 요약되기 전 문장을 유추해서 생성해주세요.
제목: 수산물 수급 위기관리체계 구축을 위한 기초연구
요약문: 현대 사회에서 발생하는 다양하고...
지금의 국가가 직면하는 위기는 전통사회의 그것과 위기의 규모뿐만아니라...
```
**3. Last sentence prediction**
```
주어진 문장 뒤에 자연스럽게 이어질 문장을 생성해주세요.
문장: ...최근에 방문한 조선예술영화촬영소 에 있는 ‘문화성혁명사적관’(김정일관)에는 1960년대 중반부터 2000년대까지 40년 동안 김정일의 문화예술 부문 지도가 11,890건이며, 그 중 문화예술기관을 직접 방문하여 지도한 이른바 ‘현지지도’가 1,770건이라는 안내판이 있었다.
북한 연극이 김정일과 주체사상이라는 키워드를 떠나 존재할 수 없다는 것을 단적으로 말해 준다
```
**4. Multi question**
```
주어진 정보를 기반으로 질문에 답하세요. 답을 모른다면 답을 지어내지 말고 그냥 모른다고 말하세요.
1839년 바그너는 괴테의 파우스트을 처음 읽고 그 내용에 마음이...
질문:
1. 바그너는 괴테의 파우스트를 읽고 무엇을 쓰고자 했는가?
2. 바그너는 교향곡 작곡을 어디까지 쓴 뒤에 중단했는가?
3. 바그너가 파우스트 서곡을 쓸 때 어떤 곡의 영향을 받았는가?
4. 1839년 바그너가 교향곡의 소재로 쓰려고 했던 책은?
5. 파우스트 서곡의 라단조 조성이 영향을 받은 베토벤의 곡은?
6. 바그너가 파우스트를 처음으로 읽은 년도는?
7. 바그너가 처음 교향곡 작곡을 한 장소는?
8. 바그너의 1악장의 초연은 어디서 연주되었는가?
1. 교향곡
2. 1악장
3. 베토벤의 교향곡 9번
4. 파우스트
5. 합창교향곡
6. 1839
7. 파리
8. 드레스덴
```
**5. Mask Prediction**
```
주어진 문장에서 에 들어갈 적절한 단어를 생성해주세요.
독도는 이다.
우리땅
```
---
# References
1. [The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning (Kim et al., 2023)](https://huggingface.co/papers/2305.14045)
2. [Adapting Large Language Models via Reading Comprehension (Cheng et al., 2023)](https://huggingface.co/papers/2309.09530)
3. [Deduplicating Training Data Makes Language Models Better (Lee et al., 2021)](https://huggingface.co/papers/2107.06499)
---
# Acknowledgement
- This model was supported by the Artificial Intelligence Industrial Convergence Cluster Development Project, jointly funded by the Ministry of Science and ICT (MSIT, Korea) and Gwangju Metropolitan City.
- We also thank the many open-source developers and researchers, and in particular [Beomi](https://huggingface.co/beomi) and [maywell](https://huggingface.co/maywell) for their recent contributions to the open Korean LLM ecosystem.
jeanlee/kmhas_korean_hate_speech,"{""annotations_creators"": [""crowdsourced""], ""language"": [""ko""], ""language_creators"": [""found""], ""license"": [""cc-by-sa-4.0""], ""multilinguality"": [""monolingual""], ""pretty_name"": ""K-MHaS"", ""size_categories"": [""100K
## Dataset Description
- **Homepage:** [K-MHaS](https://github.com/adlnlp/K-MHaS)
- **Repository:** [Korean Multi-label Hate Speech Dataset](https://github.com/adlnlp/K-MHaS)
- **Paper:** [K-MHaS: A Multi-label Hate Speech Detection Dataset in Korean Online News Comment](https://arxiv.org/abs/2208.10684)
- **Point of Contact:** [Caren Han](caren.han@sydney.edu.au)
- **Sample code:** [Colab](https://colab.research.google.com/drive/171KhS1_LVBtpAFd_kaT8lcrZmhcz5ehY?usp=sharing)
### Dataset Summary
The Korean Multi-label Hate Speech Dataset, **K-MHaS**, consists of 109,692 utterances from Korean online news comments, labelled with 8 fine-grained hate speech classes (labels: `Politics`, `Origin`, `Physical`, `Age`, `Gender`, `Religion`, `Race`, `Profanity`) or the `Not Hate Speech` class. Each utterance carries between one and four labels, which allows Korean language patterns to be handled effectively. For more details, please refer to our paper about [**K-MHaS**](https://aclanthology.org/2022.coling-1.311), published at COLING 2022.
### Supported Tasks and Leaderboards
Hate Speech Detection
* `binary classification` (labels: `Hate Speech`, `Not Hate Speech`)
* `multi-label classification`: (labels: `Politics`, `Origin`, `Physical`, `Age`, `Gender`, `Religion`, `Race`, `Profanity`, `Not Hate Speech`)
For the multi-label classification, the `Hate Speech` class from the binary classification is broken down into eight classes associated with hate speech categories. The eight classes were selected to reflect the social and historical context; for example, the `Politics` class was chosen due to its significant influence on the style of Korean hate speech.
### Languages
Korean
## Dataset Structure
### Data Instances
The dataset is provided with train/validation/test sets in txt format. Each instance is a news comment with one or more corresponding hate speech classes (labels: `Politics`, `Origin`, `Physical`, `Age`, `Gender`, `Religion`, `Race`, `Profanity`) or the `Not Hate Speech` class. The mapping between label numbers and class names, in both English and Korean, is given in the Data Fields section.
```python
{'text': '수꼴틀딱시키들이 다 디져야 나라가 똑바로 될것같다..답이 없는 종자들ㅠ',
 'label': [2, 3, 4]}
```
### Data Fields
* `text`: utterance from Korean online news comment.
* `label`: the label numbers map to the 8 fine-grained hate speech classes and the `Not Hate Speech` class as follows.
* `0`: `Origin`(`출신차별`) hate speech based on place of origin or identity;
* `1`: `Physical`(`외모차별`) hate speech based on physical appearance (e.g. body, face) or disability;
* `2`: `Politics`(`정치성향차별`) hate speech based on political stance;
* `3`: `Profanity`(`혐오욕설`) hate speech in the form of swearing, cursing, cussing, obscene words, or expletives; or an unspecified hate speech category;
* `4`: `Age`(`연령차별`) hate speech based on age;
* `5`: `Gender`(`성차별`) hate speech based on gender or sexual orientation (e.g. woman, homosexual);
* `6`: `Race`(`인종차별`) hate speech based on ethnicity;
* `7`: `Religion`(`종교차별`) hate speech based on religion;
* `8`: `Not Hate Speech`(`해당사항없음`).
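Based on the mapping above, decoding the integer labels and deriving the binary hate/not-hate view might look like this (an illustrative helper, not part of the released code):

```python
# Class names indexed by their K-MHaS label number (0-8), per the mapping above.
K_MHAS_LABELS = [
    "Origin", "Physical", "Politics", "Profanity",
    "Age", "Gender", "Religion", "Race", "Not Hate Speech",
]

def decode_labels(label_ids):
    """Map K-MHaS integer labels to human-readable class names."""
    return [K_MHAS_LABELS[i] for i in label_ids]

def is_hate_speech(label_ids):
    """Binary view: any label other than 8 (`Not Hate Speech`) marks hate speech."""
    return any(i != 8 for i in label_ids)
```

For the data instance shown earlier, `decode_labels([2, 3, 4])` yields `Politics`, `Profanity` and `Age`.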
### Data Splits
In our repository, we provide split datasets with 78,977 (train) / 8,776 (validation) / 21,939 (test) samples, preserving the class proportions.
## Dataset Creation
### Curation Rationale
We propose K-MHaS, a large Korean multi-label hate speech detection dataset that represents Korean language patterns effectively. Most datasets in hate speech research are annotated using single-label classification of particular aspects, even though the subjectivity of hate speech cannot be captured by a mutually exclusive annotation scheme. We propose a multi-label hate speech annotation scheme that allows overlapping labels, accommodating the subjectivity and the intersectionality of hate speech.
### Source Data
#### Initial Data Collection and Normalization
Our dataset is based on the Korean online news comments available on Kaggle and GitHub. The unlabeled raw data was collected between January 2018 and June 2020. Please see the details in our paper [K-MHaS](https://aclanthology.org/2022.coling-1.311), published at COLING 2022.
#### Who are the source language producers?
The language producers are users who left the comments on the Korean online news platform between 2018 and 2020.
### Annotations
#### Annotation process
We begin with the common categories of hate speech found in the literature and match keywords for each category. After the preliminary round, we investigate the results to merge or remove labels in order to provide the most representative subtype labels of hate speech, contextual to the cultural background. Our annotation instructions explain a two-layered annotation: (a) distinguishing hate from not-hate speech, and (b) identifying the categories of hate speech. Annotators are requested to consider the given keywords or alternatives for each category within social, cultural, and historical circumstances. For more details, please refer to the paper [K-MHaS](https://aclanthology.org/2022.coling-1.311).
#### Who are the annotators?
Five native speakers were recruited for manual annotation in both the preliminary and main rounds.
### Personal and Sensitive Information
This dataset contains examples of hateful language; however, it contains no personal information.
## Considerations for Using the Data
### Social Impact of Dataset
We propose K-MHaS, a new large-scale dataset for Korean hate speech detection with a multi-label annotation scheme. We provide extensive baseline experiment results, demonstrating the usability of the dataset for detecting Korean language patterns in hate speech.
### Discussion of Biases
All annotators were recruited from a crowdsourcing platform. They were informed about hate speech before handling the data. Our instructions allowed them to feel free to leave if they were uncomfortable with the content. With respect to potential risks, we note that the subjectivity of human annotation may impact the quality of the dataset.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset is curated by Taejun Lim, Heejun Lee and Bogeun Jo.
### Licensing Information
Creative Commons Attribution-ShareAlike 4.0 International (cc-by-sa-4.0).
### Citation Information
```
@inproceedings{lee-etal-2022-k,
title = ""K-{MH}a{S}: A Multi-label Hate Speech Detection Dataset in {K}orean Online News Comment"",
author = ""Lee, Jean and
Lim, Taejun and
Lee, Heejun and
Jo, Bogeun and
Kim, Yangsok and
Yoon, Heegeun and
Han, Soyeon Caren"",
booktitle = ""Proceedings of the 29th International Conference on Computational Linguistics"",
month = oct,
year = ""2022"",
address = ""Gyeongju, Republic of Korea"",
publisher = ""International Committee on Computational Linguistics"",
url = ""https://aclanthology.org/2022.coling-1.311"",
pages = ""3530--3538"",
abstract = ""Online hate speech detection has become an important issue due to the growth of online content, but resources in languages other than English are extremely limited. We introduce K-MHaS, a new multi-label dataset for hate speech detection that effectively handles Korean language patterns. The dataset consists of 109k utterances from news comments and provides a multi-label classification using 1 to 4 labels, and handles subjectivity and intersectionality. We evaluate strong baselines on K-MHaS. KR-BERT with a sub-character tokenizer outperforms others, recognizing decomposed characters in each hate speech class."",
}
```
### Contributions
The contributors of the work are:
- [Jean Lee](https://jeanlee-ai.github.io/) (The University of Sydney)
- [Taejun Lim](https://github.com/taezun) (The University of Sydney)
- [Heejun Lee](https://bigwaveai.com/) (BigWave AI)
- [Bogeun Jo](https://bigwaveai.com/) (BigWave AI)
- Yangsok Kim (Keimyung University)
- Heegeun Yoon (National Information Society Agency)
- [Soyeon Caren Han](https://drcarenhan.github.io/) (The University of Western Australia and The University of Sydney)"
djstrong/oscar-small,"{""annotations_creators"": [""no-annotation""], ""language_creators"": [""found""], ""language"": [""af"", ""am"", ""ar"", ""arz"", ""as"", ""az"", ""azb"", ""ba"", ""be"", ""bg"", ""bn"", ""bo"", ""br"", ""ca"", ""ce"", ""ceb"", ""ckb"", ""cs"", ""cv"", ""cy"", ""da"", ""de"", ""dv"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fr"", ""fy"", ""ga"", ""gl"", ""gu"", ""he"", ""hi"", ""hr"", ""hu"", ""hy"", ""id"", ""is"", ""it"", ""ja"", ""ka"", ""kk"", ""km"", ""kn"", ""ko"", ""ku"", ""ky"", ""la"", ""lb"", ""lo"", ""lt"", ""lv"", ""mg"", ""mhr"", ""mk"", ""ml"", ""mn"", ""mr"", ""ms"", ""mt"", ""my"", ""nds"", ""ne"", ""nl"", ""nn"", ""no"", ""or"", ""os"", ""pa"", ""pl"", ""pnb"", ""ps"", ""pt"", ""ro"", ""ru"", ""sa"", ""sah"", ""sd"", ""sh"", ""si"", ""sk"", ""sl"", ""sq"", ""sr"", ""sv"", ""sw"", ""ta"", ""te"", ""tg"", ""th"", ""tk"", ""tl"", ""tr"", ""tt"", ""ug"", ""uk"", ""ur"", ""uz"", ""vi"", ""yi"", ""zh""], ""license"": [""cc0-1.0""], ""multilinguality"": [""multilingual""], ""source_datasets"": [""oscar""], ""task_categories"": [""text-generation""], ""task_ids"": [""language-modeling""], ""paperswithcode_id"": ""oscar"", ""pretty_name"": ""OSCAR""}","## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts.
Legally speaking, using this dataset is equivalent to using a processed version of OSCAR. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for ""oscar""
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended for pretraining language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed using a new pipeline derived from [fastText's](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but completely rewrites and parallelises the pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, with the number of parallel operations at a given time bounded by the number of available threads rather than the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus, with goclassy's pipeline, one does not have to wait for a whole WET file to download, decompress and be classified before starting to download and process the next one: a new file starts downloading and processing as soon as the scheduler is able to allocate a new process.
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
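The line-level checks can be sketched as follows (a simplified illustration; the real goclassy implementation is written in Go and does considerably more):

```python
def keep_line(raw: bytes) -> bool:
    """Discard lines shorter than 100 UTF-8 characters or containing invalid UTF-8."""
    try:
        # Strict decoding rejects any byte sequence that is not valid UTF-8.
        text = raw.decode("utf-8", errors="strict")
    except UnicodeDecodeError:
        return False
    # Length is measured in decoded characters, not raw bytes.
    return len(text) >= 100
```

Only lines passing both checks would be handed to the fastText language classifier.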
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR the WET files of Common Crawl were used. These contain the extracted plain texts from the websites mostly converted to UTF-8, as well as headers containing the metatada of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files where each file consists of the plain text from multiple websites along its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet, and this can be reflected in models trained with it. Care is advised, especially concerning biases in the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license (""no rights reserved"") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = ""A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages"",
author = ""Ortiz Su{'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit"",
booktitle = ""Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics"",
month = jul,
year = ""2020"",
address = ""Online"",
publisher = ""Association for Computational Linguistics"",
url = ""https://www.aclweb.org/anthology/2020.acl-main.156"",
pages = ""1703--1714"",
abstract = ""We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures."",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\""u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\""u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9--16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset."
lcw99/wikipedia-korean-20221001,"{""language"": [""ko""], ""license"": ""apache-2.0""}",[20240501 update](https://huggingface.co/datasets/lcw99/wikipedia-korean-20240501)
kyujinpy/KOpen-platypus,"{""language"": [""en"", ""ko""], ""license"": ""cc-by-4.0"", ""size_categories"": [""10K Post-processing details
- Add post-processing (v2)
+) Removed short-answer tasks.
## OpenOrca-Ko-v2
1. NIV // approx. 1,500 examples
2. FLAN // approx. 9,000 examples
3. T0 // approx. 6,000 examples
4. CoT // approx. 2,000 examples
> Dataset composition
- Manually corrected items (v2)
1. Fixed answers that had been left in English. (e.g. Nick -> 닉, Lucky -> 운이 좋음, ...)
2. Removed the KoCoT dataset.
3. Fixed some answers such as Yes, True, False, etc.
> Post-processing details
## Translation
Using DeepL Pro API. Thanks.
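The card does not include the translation script itself; the sketch below shows one plausible way to drive the DeepL Pro API with the official `deepl` Python client. The batching helper, key placeholder, and sample sentences are assumptions for illustration, not the author's actual pipeline.

```python
# Hypothetical sketch of the EN->KO translation step; not the author's actual script.
def batch(texts, size=50):
    """Yield fixed-size chunks so each API call stays well under request limits."""
    for i in range(0, len(texts), size):
        yield texts[i:i + size]

if __name__ == "__main__":
    import deepl  # pip install deepl

    translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")  # placeholder key
    questions = ["What is the capital of France?"]  # assumed: English instructions
    for chunk in batch(questions):
        for result in translator.translate_text(chunk, source_lang="EN", target_lang="KO"):
            print(result.text)
```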
---
> Below is the original dataset card
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
🐋 The OpenOrca Dataset! 🐋

We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## OpenOrca-Platypus2-13B
Our [latest release](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
http://AlignmentLab.ai:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx
Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
# Languages
The language of the data is primarily English.
# Dataset Structure
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
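As an illustration, a single datapoint following the schema above might look like the dictionary below; the field values are invented, and the `submix` helper recovering the source submix from the `'id'` prefix is an assumption about the id format, not part of the dataset's tooling.

```python
# Invented example record; only the field names follow the dataset schema above.
example = {
    "id": "flan.564327",
    "system_prompt": "You are an AI assistant. Provide a detailed answer.",
    "question": "What is the boiling point of water at sea level?",
    "response": "At sea level, water boils at 100 degrees Celsius (212 Fahrenheit).",
}

def submix(datapoint_id):
    """Return which FLAN Collection submix ('niv', 't0', 'cot', or 'flan') an id encodes."""
    prefix = datapoint_id.split(".", 1)[0]
    if prefix not in {"niv", "t0", "cot", "flan"}:
        raise ValueError(f"unknown submix prefix: {prefix!r}")
    return prefix

print(submix(example["id"]))  # -> flan
```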
## Data Splits
The data is unsplit.
# Dataset Creation
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This ""reasoning trace"" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
# Dataset Use
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
## Getting Started
This dataset is organized so that it can be loaded naively via the Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
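A minimal streaming load along the lines recommended above might look as follows; the prompt-joining helper is an illustration, not part of the dataset.

```python
# Minimal sketch: stream OpenOrca rather than downloading the full dataset up front.
from itertools import islice

def to_prompt(row):
    """Join the optional system prompt and the question into one training prompt."""
    system = row.get("system_prompt") or ""
    return f"{system}\n\n{row['question']}".strip()

if __name__ == "__main__":
    from datasets import load_dataset  # pip install datasets

    ds = load_dataset("Open-Orca/OpenOrca", split="train", streaming=True)
    for row in islice(ds, 3):
        print(row["id"], "->", to_prompt(row)[:60])
```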
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and ""Teknium""},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{touvron2023llama1,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```"
cyanic-selkie/wikianc,"{""license"": ""cc-by-sa-4.0"", ""pretty_name"": ""WikiAnc"", ""annotations_creators"": [""machine-generated"", ""crowdsourced""], ""language_creators"": [""machine-generated"", ""crowdsourced""], ""task_categories"": [""token-classification""], ""multilinguality"": [""multilingual""], ""language"": [""en"", ""ceb"", ""de"", ""sv"", ""fr"", ""nl"", ""ru"", ""es"", ""it"", ""arz"", ""pl"", ""ja"", ""zh"", ""vi"", ""uk"", ""war"", ""ar"", ""pt"", ""fa"", ""ca"", ""sr"", ""id"", ""ko"", ""no"", ""ce"", ""fi"", ""cs"", ""tr"", ""hu"", ""tt"", ""sh"", ""ro"", ""eu"", ""ms"", ""eo"", ""he"", ""hy"", ""da"", ""bg"", ""cy"", ""sk"", ""azb"", ""uz"", ""et"", ""be"", ""kk"", ""min"", ""el"", ""hr"", ""lt"", ""gl"", ""az"", ""ur"", ""sl"", ""lld"", ""ka"", ""nn"", ""hi"", ""th"", ""ta"", ""bn"", ""la"", ""mk"", ""ast"", ""lv"", ""af"", ""tg"", ""my"", ""mg"", ""mr"", ""sq"", ""bs"", ""oc"", ""te"", ""ml"", ""nds"", ""br"", ""ky"", ""sw"", ""jv"", ""lmo"", ""new"", ""pnb"", ""vec"", ""ht"", ""pms"", ""ba"", ""lb"", ""su"", ""ku"", ""ga"", ""szl"", ""is"", ""fy"", ""cv"", ""ckb"", ""pa"", ""tl"", ""an"", ""wuu"", ""diq"", ""io"", ""sco"", ""vo"", ""yo"", ""ne"", ""ia"", ""kn"", ""gu"", ""als"", ""ha"", ""avk"", ""bar"", ""crh"", ""scn"", ""bpy"", ""qu"", ""mn"", ""nv"", ""xmf"", ""ban"", ""si"", ""tum"", ""ps"", ""ig"", ""frr"", ""os"", ""mzn"", ""or"", ""sah"", ""cdo"", ""gd"", ""bug"", ""yi"", ""sd"", ""ilo"", ""am"", ""nap"", ""li"", ""bcl"", ""fo"", ""gor"", ""hsb"", ""mai"", ""shn"", ""eml"", ""ace"", ""sa"", ""as"", ""wa"", ""ie"", ""hyw"", ""lij"", ""mhr"", ""zu"", ""sn"", ""hif"", ""mrj"", ""bjn"", ""km"", ""mni"", ""hak"", ""pam"", ""sat"", ""rue"", ""nso"", ""bh"", ""so"", ""mi"", ""se"", ""myv"", ""vls"", ""dag"", ""sc"", ""co"", ""ary"", ""kw"", ""bo"", ""vep"", ""glk"", ""tk"", ""kab"", ""gan"", ""rw"", ""ab"", ""gv"", ""ug"", ""nah"", ""zea"", ""skr"", ""frp"", ""udm"", ""pcd"", ""mt"", ""kv"", ""csb"", ""gn"", ""smn"", ""ay"", ""nrm"", 
""ks"", ""lez"", ""lfn"", ""olo"", ""mwl"", ""lo"", ""stq"", ""ang"", ""mdf"", ""fur"", ""rm"", ""lad"", ""kaa"", ""gom"", ""ext"", ""koi"", ""tyv"", ""pap"", ""av"", ""dsb"", ""ln"", ""dty"", ""tw"", ""dv"", ""ksh"", ""za"", ""gag"", ""bxr"", ""pfl"", ""lg"", ""szy"", ""pag"", ""blk"", ""pi"", ""tay"", ""haw"", ""awa"", ""inh"", ""krc"", ""xal"", ""pdc"", ""to"", ""atj"", ""tcy"", ""arc"", ""mnw"", ""shi"", ""jam"", ""kbp"", ""wo"", ""anp"", ""kbd"", ""nia"", ""om"", ""nov"", ""ki"", ""nqo"", ""bi"", ""xh"", ""tpi"", ""ff"", ""tet"", ""jbo"", ""fj"", ""kg"", ""lbe"", ""ty"", ""cu"", ""guw"", ""trv"", ""ami"", ""srn"", ""sm"", ""mad"", ""alt"", ""ltg"", ""gcr"", ""chr"", ""tn"", ""ny"", ""st"", ""pih"", ""got"", ""rmy"", ""ee"", ""pcm"", ""bm"", ""ss"", ""gpe"", ""ts"", ""ve"", ""kcg"", ""chy"", ""rn"", ""ch"", ""gur"", ""ik"", ""ady"", ""fat"", ""pnt"", ""guc"", ""iu"", ""pwn"", ""sg"", ""din"", ""ti"", ""kl"", ""dz"", ""cr""], ""tags"": [""wikidata"", ""wikipedia"", ""wikification"", ""named-entity-linking"", ""nel"", ""entity-linking"", ""el"", ""named-entity-disambiguation"", ""ned"", ""entity-disambiguation"", ""ed""], ""configs"": [{""config_name"": ""ab"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ab/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ab/validation.parquet""}]}, {""config_name"": ""ace"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ace/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ace/validation.parquet""}]}, {""config_name"": ""ady"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ady/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ady/validation.parquet""}]}, {""config_name"": ""af"", ""data_files"": [{""split"": ""train"", ""path"": ""data/af/train.parquet""}, {""split"": ""validation"", ""path"": ""data/af/validation.parquet""}]}, {""config_name"": ""als"", ""data_files"": [{""split"": ""train"", ""path"": ""data/als/train.parquet""}, {""split"": 
""validation"", ""path"": ""data/als/validation.parquet""}]}, {""config_name"": ""alt"", ""data_files"": [{""split"": ""train"", ""path"": ""data/alt/train.parquet""}, {""split"": ""validation"", ""path"": ""data/alt/validation.parquet""}]}, {""config_name"": ""am"", ""data_files"": [{""split"": ""train"", ""path"": ""data/am/train.parquet""}, {""split"": ""validation"", ""path"": ""data/am/validation.parquet""}]}, {""config_name"": ""ami"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ami/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ami/validation.parquet""}]}, {""config_name"": ""an"", ""data_files"": [{""split"": ""train"", ""path"": ""data/an/train.parquet""}, {""split"": ""validation"", ""path"": ""data/an/validation.parquet""}]}, {""config_name"": ""ang"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ang/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ang/validation.parquet""}]}, {""config_name"": ""anp"", ""data_files"": [{""split"": ""train"", ""path"": ""data/anp/train.parquet""}, {""split"": ""validation"", ""path"": ""data/anp/validation.parquet""}]}, {""config_name"": ""ar"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ar/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ar/validation.parquet""}]}, {""config_name"": ""arc"", ""data_files"": [{""split"": ""train"", ""path"": ""data/arc/train.parquet""}, {""split"": ""validation"", ""path"": ""data/arc/validation.parquet""}]}, {""config_name"": ""ary"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ary/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ary/validation.parquet""}]}, {""config_name"": ""arz"", ""data_files"": [{""split"": ""train"", ""path"": ""data/arz/train.parquet""}, {""split"": ""validation"", ""path"": ""data/arz/validation.parquet""}]}, {""config_name"": ""as"", ""data_files"": [{""split"": ""train"", ""path"": ""data/as/train.parquet""}, {""split"": ""validation"", ""path"": 
""data/as/validation.parquet""}]}, {""config_name"": ""ast"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ast/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ast/validation.parquet""}]}, {""config_name"": ""atj"", ""data_files"": [{""split"": ""train"", ""path"": ""data/atj/train.parquet""}, {""split"": ""validation"", ""path"": ""data/atj/validation.parquet""}]}, {""config_name"": ""av"", ""data_files"": [{""split"": ""train"", ""path"": ""data/av/train.parquet""}, {""split"": ""validation"", ""path"": ""data/av/validation.parquet""}]}, {""config_name"": ""avk"", ""data_files"": [{""split"": ""train"", ""path"": ""data/avk/train.parquet""}, {""split"": ""validation"", ""path"": ""data/avk/validation.parquet""}]}, {""config_name"": ""awa"", ""data_files"": [{""split"": ""train"", ""path"": ""data/awa/train.parquet""}, {""split"": ""validation"", ""path"": ""data/awa/validation.parquet""}]}, {""config_name"": ""ay"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ay/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ay/validation.parquet""}]}, {""config_name"": ""az"", ""data_files"": [{""split"": ""train"", ""path"": ""data/az/train.parquet""}, {""split"": ""validation"", ""path"": ""data/az/validation.parquet""}]}, {""config_name"": ""azb"", ""data_files"": [{""split"": ""train"", ""path"": ""data/azb/train.parquet""}, {""split"": ""validation"", ""path"": ""data/azb/validation.parquet""}]}, {""config_name"": ""ba"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ba/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ba/validation.parquet""}]}, {""config_name"": ""ban"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ban/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ban/validation.parquet""}]}, {""config_name"": ""bar"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bar/train.parquet""}, {""split"": ""validation"", ""path"": 
""data/bar/validation.parquet""}]}, {""config_name"": ""bat_smg"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bat_smg/train.parquet""}, {""split"": ""validation"", ""path"": ""data/bat_smg/validation.parquet""}]}, {""config_name"": ""bcl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bcl/train.parquet""}, {""split"": ""validation"", ""path"": ""data/bcl/validation.parquet""}]}, {""config_name"": ""be"", ""data_files"": [{""split"": ""train"", ""path"": ""data/be/train.parquet""}, {""split"": ""validation"", ""path"": ""data/be/validation.parquet""}]}, {""config_name"": ""bg"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bg/train.parquet""}, {""split"": ""validation"", ""path"": ""data/bg/validation.parquet""}]}, {""config_name"": ""bh"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bh/train.parquet""}, {""split"": ""validation"", ""path"": ""data/bh/validation.parquet""}]}, {""config_name"": ""bi"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bi/train.parquet""}, {""split"": ""validation"", ""path"": ""data/bi/validation.parquet""}]}, {""config_name"": ""bjn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bjn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/bjn/validation.parquet""}]}, {""config_name"": ""blk"", ""data_files"": [{""split"": ""train"", ""path"": ""data/blk/train.parquet""}, {""split"": ""validation"", ""path"": ""data/blk/validation.parquet""}]}, {""config_name"": ""bm"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bm/train.parquet""}, {""split"": ""validation"", ""path"": ""data/bm/validation.parquet""}]}, {""config_name"": ""bn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/bn/validation.parquet""}]}, {""config_name"": ""bo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bo/train.parquet""}, {""split"": ""validation"", ""path"": 
""data/bo/validation.parquet""}]}, {""config_name"": ""bpy"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bpy/train.parquet""}, {""split"": ""validation"", ""path"": ""data/bpy/validation.parquet""}]}, {""config_name"": ""br"", ""data_files"": [{""split"": ""train"", ""path"": ""data/br/train.parquet""}, {""split"": ""validation"", ""path"": ""data/br/validation.parquet""}]}, {""config_name"": ""bs"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bs/train.parquet""}, {""split"": ""validation"", ""path"": ""data/bs/validation.parquet""}]}, {""config_name"": ""bug"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bug/train.parquet""}, {""split"": ""validation"", ""path"": ""data/bug/validation.parquet""}]}, {""config_name"": ""bxr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/bxr/train.parquet""}, {""split"": ""validation"", ""path"": ""data/bxr/validation.parquet""}]}, {""config_name"": ""ca"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ca/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ca/validation.parquet""}]}, {""config_name"": ""cbk_zam"", ""data_files"": [{""split"": ""train"", ""path"": ""data/cbk_zam/train.parquet""}, {""split"": ""validation"", ""path"": ""data/cbk_zam/validation.parquet""}]}, {""config_name"": ""cdo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/cdo/train.parquet""}, {""split"": ""validation"", ""path"": ""data/cdo/validation.parquet""}]}, {""config_name"": ""ce"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ce/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ce/validation.parquet""}]}, {""config_name"": ""ceb"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ceb/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ceb/validation.parquet""}]}, {""config_name"": ""ch"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ch/train.parquet""}, {""split"": ""validation"", ""path"": 
""data/ch/validation.parquet""}]}, {""config_name"": ""chr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/chr/train.parquet""}, {""split"": ""validation"", ""path"": ""data/chr/validation.parquet""}]}, {""config_name"": ""chy"", ""data_files"": [{""split"": ""train"", ""path"": ""data/chy/train.parquet""}, {""split"": ""validation"", ""path"": ""data/chy/validation.parquet""}]}, {""config_name"": ""ckb"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ckb/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ckb/validation.parquet""}]}, {""config_name"": ""co"", ""data_files"": [{""split"": ""train"", ""path"": ""data/co/train.parquet""}, {""split"": ""validation"", ""path"": ""data/co/validation.parquet""}]}, {""config_name"": ""cr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/cr/train.parquet""}, {""split"": ""validation"", ""path"": ""data/cr/validation.parquet""}]}, {""config_name"": ""crh"", ""data_files"": [{""split"": ""train"", ""path"": ""data/crh/train.parquet""}, {""split"": ""validation"", ""path"": ""data/crh/validation.parquet""}]}, {""config_name"": ""cs"", ""data_files"": [{""split"": ""train"", ""path"": ""data/cs/train.parquet""}, {""split"": ""validation"", ""path"": ""data/cs/validation.parquet""}]}, {""config_name"": ""csb"", ""data_files"": [{""split"": ""train"", ""path"": ""data/csb/train.parquet""}, {""split"": ""validation"", ""path"": ""data/csb/validation.parquet""}]}, {""config_name"": ""cu"", ""data_files"": [{""split"": ""train"", ""path"": ""data/cu/train.parquet""}, {""split"": ""validation"", ""path"": ""data/cu/validation.parquet""}]}, {""config_name"": ""cv"", ""data_files"": [{""split"": ""train"", ""path"": ""data/cv/train.parquet""}, {""split"": ""validation"", ""path"": ""data/cv/validation.parquet""}]}, {""config_name"": ""cy"", ""data_files"": [{""split"": ""train"", ""path"": ""data/cy/train.parquet""}, {""split"": ""validation"", ""path"": ""data/cy/validation.parquet""}]}, 
{""config_name"": ""da"", ""data_files"": [{""split"": ""train"", ""path"": ""data/da/train.parquet""}, {""split"": ""validation"", ""path"": ""data/da/validation.parquet""}]}, {""config_name"": ""dag"", ""data_files"": [{""split"": ""train"", ""path"": ""data/dag/train.parquet""}, {""split"": ""validation"", ""path"": ""data/dag/validation.parquet""}]}, {""config_name"": ""de"", ""data_files"": [{""split"": ""train"", ""path"": ""data/de/train.parquet""}, {""split"": ""validation"", ""path"": ""data/de/validation.parquet""}]}, {""config_name"": ""din"", ""data_files"": [{""split"": ""train"", ""path"": ""data/din/train.parquet""}, {""split"": ""validation"", ""path"": ""data/din/validation.parquet""}]}, {""config_name"": ""diq"", ""data_files"": [{""split"": ""train"", ""path"": ""data/diq/train.parquet""}, {""split"": ""validation"", ""path"": ""data/diq/validation.parquet""}]}, {""config_name"": ""dsb"", ""data_files"": [{""split"": ""train"", ""path"": ""data/dsb/train.parquet""}, {""split"": ""validation"", ""path"": ""data/dsb/validation.parquet""}]}, {""config_name"": ""dty"", ""data_files"": [{""split"": ""train"", ""path"": ""data/dty/train.parquet""}, {""split"": ""validation"", ""path"": ""data/dty/validation.parquet""}]}, {""config_name"": ""dv"", ""data_files"": [{""split"": ""train"", ""path"": ""data/dv/train.parquet""}, {""split"": ""validation"", ""path"": ""data/dv/validation.parquet""}]}, {""config_name"": ""dz"", ""data_files"": [{""split"": ""train"", ""path"": ""data/dz/train.parquet""}, {""split"": ""validation"", ""path"": ""data/dz/validation.parquet""}]}, {""config_name"": ""ee"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ee/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ee/validation.parquet""}]}, {""config_name"": ""el"", ""data_files"": [{""split"": ""train"", ""path"": ""data/el/train.parquet""}, {""split"": ""validation"", ""path"": ""data/el/validation.parquet""}]}, {""config_name"": ""eml"", 
""data_files"": [{""split"": ""train"", ""path"": ""data/eml/train.parquet""}, {""split"": ""validation"", ""path"": ""data/eml/validation.parquet""}]}, {""config_name"": ""en"", ""data_files"": [{""split"": ""train"", ""path"": ""data/en/train.parquet""}, {""split"": ""validation"", ""path"": ""data/en/validation.parquet""}]}, {""config_name"": ""eo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/eo/train.parquet""}, {""split"": ""validation"", ""path"": ""data/eo/validation.parquet""}]}, {""config_name"": ""es"", ""data_files"": [{""split"": ""train"", ""path"": ""data/es/train.parquet""}, {""split"": ""validation"", ""path"": ""data/es/validation.parquet""}]}, {""config_name"": ""et"", ""data_files"": [{""split"": ""train"", ""path"": ""data/et/train.parquet""}, {""split"": ""validation"", ""path"": ""data/et/validation.parquet""}]}, {""config_name"": ""eu"", ""data_files"": [{""split"": ""train"", ""path"": ""data/eu/train.parquet""}, {""split"": ""validation"", ""path"": ""data/eu/validation.parquet""}]}, {""config_name"": ""ext"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ext/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ext/validation.parquet""}]}, {""config_name"": ""fa"", ""data_files"": [{""split"": ""train"", ""path"": ""data/fa/train.parquet""}, {""split"": ""validation"", ""path"": ""data/fa/validation.parquet""}]}, {""config_name"": ""fat"", ""data_files"": [{""split"": ""train"", ""path"": ""data/fat/train.parquet""}, {""split"": ""validation"", ""path"": ""data/fat/validation.parquet""}]}, {""config_name"": ""ff"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ff/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ff/validation.parquet""}]}, {""config_name"": ""fi"", ""data_files"": [{""split"": ""train"", ""path"": ""data/fi/train.parquet""}, {""split"": ""validation"", ""path"": ""data/fi/validation.parquet""}]}, {""config_name"": ""fiu_vro"", ""data_files"": [{""split"": ""train"", 
""path"": ""data/fiu_vro/train.parquet""}, {""split"": ""validation"", ""path"": ""data/fiu_vro/validation.parquet""}]}, {""config_name"": ""fj"", ""data_files"": [{""split"": ""train"", ""path"": ""data/fj/train.parquet""}, {""split"": ""validation"", ""path"": ""data/fj/validation.parquet""}]}, {""config_name"": ""fo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/fo/train.parquet""}, {""split"": ""validation"", ""path"": ""data/fo/validation.parquet""}]}, {""config_name"": ""fr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/fr/train.parquet""}, {""split"": ""validation"", ""path"": ""data/fr/validation.parquet""}]}, {""config_name"": ""frp"", ""data_files"": [{""split"": ""train"", ""path"": ""data/frp/train.parquet""}, {""split"": ""validation"", ""path"": ""data/frp/validation.parquet""}]}, {""config_name"": ""frr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/frr/train.parquet""}, {""split"": ""validation"", ""path"": ""data/frr/validation.parquet""}]}, {""config_name"": ""fur"", ""data_files"": [{""split"": ""train"", ""path"": ""data/fur/train.parquet""}, {""split"": ""validation"", ""path"": ""data/fur/validation.parquet""}]}, {""config_name"": ""fy"", ""data_files"": [{""split"": ""train"", ""path"": ""data/fy/train.parquet""}, {""split"": ""validation"", ""path"": ""data/fy/validation.parquet""}]}, {""config_name"": ""ga"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ga/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ga/validation.parquet""}]}, {""config_name"": ""gag"", ""data_files"": [{""split"": ""train"", ""path"": ""data/gag/train.parquet""}, {""split"": ""validation"", ""path"": ""data/gag/validation.parquet""}]}, {""config_name"": ""gan"", ""data_files"": [{""split"": ""train"", ""path"": ""data/gan/train.parquet""}, {""split"": ""validation"", ""path"": ""data/gan/validation.parquet""}]}, {""config_name"": ""gcr"", ""data_files"": [{""split"": ""train"", ""path"": 
""data/gcr/train.parquet""}, {""split"": ""validation"", ""path"": ""data/gcr/validation.parquet""}]}, {""config_name"": ""gd"", ""data_files"": [{""split"": ""train"", ""path"": ""data/gd/train.parquet""}, {""split"": ""validation"", ""path"": ""data/gd/validation.parquet""}]}, {""config_name"": ""gl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/gl/train.parquet""}, {""split"": ""validation"", ""path"": ""data/gl/validation.parquet""}]}, {""config_name"": ""glk"", ""data_files"": [{""split"": ""train"", ""path"": ""data/glk/train.parquet""}, {""split"": ""validation"", ""path"": ""data/glk/validation.parquet""}]}, {""config_name"": ""gn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/gn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/gn/validation.parquet""}]}, {""config_name"": ""gom"", ""data_files"": [{""split"": ""train"", ""path"": ""data/gom/train.parquet""}, {""split"": ""validation"", ""path"": ""data/gom/validation.parquet""}]}, {""config_name"": ""gor"", ""data_files"": [{""split"": ""train"", ""path"": ""data/gor/train.parquet""}, {""split"": ""validation"", ""path"": ""data/gor/validation.parquet""}]}, {""config_name"": ""got"", ""data_files"": [{""split"": ""train"", ""path"": ""data/got/train.parquet""}, {""split"": ""validation"", ""path"": ""data/got/validation.parquet""}]}, {""config_name"": ""gpe"", ""data_files"": [{""split"": ""train"", ""path"": ""data/gpe/train.parquet""}, {""split"": ""validation"", ""path"": ""data/gpe/validation.parquet""}]}, {""config_name"": ""gu"", ""data_files"": [{""split"": ""train"", ""path"": ""data/gu/train.parquet""}, {""split"": ""validation"", ""path"": ""data/gu/validation.parquet""}]}, {""config_name"": ""guc"", ""data_files"": [{""split"": ""train"", ""path"": ""data/guc/train.parquet""}, {""split"": ""validation"", ""path"": ""data/guc/validation.parquet""}]}, {""config_name"": ""gur"", ""data_files"": [{""split"": ""train"", ""path"": ""data/gur/train.parquet""}, 
{""split"": ""validation"", ""path"": ""data/gur/validation.parquet""}]}, {""config_name"": ""guw"", ""data_files"": [{""split"": ""train"", ""path"": ""data/guw/train.parquet""}, {""split"": ""validation"", ""path"": ""data/guw/validation.parquet""}]}, {""config_name"": ""gv"", ""data_files"": [{""split"": ""train"", ""path"": ""data/gv/train.parquet""}, {""split"": ""validation"", ""path"": ""data/gv/validation.parquet""}]}, {""config_name"": ""ha"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ha/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ha/validation.parquet""}]}, {""config_name"": ""hak"", ""data_files"": [{""split"": ""train"", ""path"": ""data/hak/train.parquet""}, {""split"": ""validation"", ""path"": ""data/hak/validation.parquet""}]}, {""config_name"": ""haw"", ""data_files"": [{""split"": ""train"", ""path"": ""data/haw/train.parquet""}, {""split"": ""validation"", ""path"": ""data/haw/validation.parquet""}]}, {""config_name"": ""he"", ""data_files"": [{""split"": ""train"", ""path"": ""data/he/train.parquet""}, {""split"": ""validation"", ""path"": ""data/he/validation.parquet""}]}, {""config_name"": ""hi"", ""data_files"": [{""split"": ""train"", ""path"": ""data/hi/train.parquet""}, {""split"": ""validation"", ""path"": ""data/hi/validation.parquet""}]}, {""config_name"": ""hif"", ""data_files"": [{""split"": ""train"", ""path"": ""data/hif/train.parquet""}, {""split"": ""validation"", ""path"": ""data/hif/validation.parquet""}]}, {""config_name"": ""hr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/hr/train.parquet""}, {""split"": ""validation"", ""path"": ""data/hr/validation.parquet""}]}, {""config_name"": ""hsb"", ""data_files"": [{""split"": ""train"", ""path"": ""data/hsb/train.parquet""}, {""split"": ""validation"", ""path"": ""data/hsb/validation.parquet""}]}, {""config_name"": ""ht"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ht/train.parquet""}, {""split"": ""validation"", 
""path"": ""data/ht/validation.parquet""}]}, {""config_name"": ""hu"", ""data_files"": [{""split"": ""train"", ""path"": ""data/hu/train.parquet""}, {""split"": ""validation"", ""path"": ""data/hu/validation.parquet""}]}, {""config_name"": ""hy"", ""data_files"": [{""split"": ""train"", ""path"": ""data/hy/train.parquet""}, {""split"": ""validation"", ""path"": ""data/hy/validation.parquet""}]}, {""config_name"": ""hyw"", ""data_files"": [{""split"": ""train"", ""path"": ""data/hyw/train.parquet""}, {""split"": ""validation"", ""path"": ""data/hyw/validation.parquet""}]}, {""config_name"": ""ia"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ia/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ia/validation.parquet""}]}, {""config_name"": ""id"", ""data_files"": [{""split"": ""train"", ""path"": ""data/id/train.parquet""}, {""split"": ""validation"", ""path"": ""data/id/validation.parquet""}]}, {""config_name"": ""ie"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ie/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ie/validation.parquet""}]}, {""config_name"": ""ig"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ig/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ig/validation.parquet""}]}, {""config_name"": ""ik"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ik/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ik/validation.parquet""}]}, {""config_name"": ""ilo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ilo/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ilo/validation.parquet""}]}, {""config_name"": ""inh"", ""data_files"": [{""split"": ""train"", ""path"": ""data/inh/train.parquet""}, {""split"": ""validation"", ""path"": ""data/inh/validation.parquet""}]}, {""config_name"": ""io"", ""data_files"": [{""split"": ""train"", ""path"": ""data/io/train.parquet""}, {""split"": ""validation"", ""path"": 
""data/io/validation.parquet""}]}, {""config_name"": ""is"", ""data_files"": [{""split"": ""train"", ""path"": ""data/is/train.parquet""}, {""split"": ""validation"", ""path"": ""data/is/validation.parquet""}]}, {""config_name"": ""it"", ""data_files"": [{""split"": ""train"", ""path"": ""data/it/train.parquet""}, {""split"": ""validation"", ""path"": ""data/it/validation.parquet""}]}, {""config_name"": ""iu"", ""data_files"": [{""split"": ""train"", ""path"": ""data/iu/train.parquet""}, {""split"": ""validation"", ""path"": ""data/iu/validation.parquet""}]}, {""config_name"": ""ja"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ja/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ja/validation.parquet""}]}, {""config_name"": ""jam"", ""data_files"": [{""split"": ""train"", ""path"": ""data/jam/train.parquet""}, {""split"": ""validation"", ""path"": ""data/jam/validation.parquet""}]}, {""config_name"": ""jbo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/jbo/train.parquet""}, {""split"": ""validation"", ""path"": ""data/jbo/validation.parquet""}]}, {""config_name"": ""jv"", ""data_files"": [{""split"": ""train"", ""path"": ""data/jv/train.parquet""}, {""split"": ""validation"", ""path"": ""data/jv/validation.parquet""}]}, {""config_name"": ""ka"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ka/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ka/validation.parquet""}]}, {""config_name"": ""kaa"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kaa/train.parquet""}, {""split"": ""validation"", ""path"": ""data/kaa/validation.parquet""}]}, {""config_name"": ""kab"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kab/train.parquet""}, {""split"": ""validation"", ""path"": ""data/kab/validation.parquet""}]}, {""config_name"": ""kbd"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kbd/train.parquet""}, {""split"": ""validation"", ""path"": ""data/kbd/validation.parquet""}]}, 
{""config_name"": ""kbp"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kbp/train.parquet""}, {""split"": ""validation"", ""path"": ""data/kbp/validation.parquet""}]}, {""config_name"": ""kcg"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kcg/train.parquet""}, {""split"": ""validation"", ""path"": ""data/kcg/validation.parquet""}]}, {""config_name"": ""kg"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kg/train.parquet""}, {""split"": ""validation"", ""path"": ""data/kg/validation.parquet""}]}, {""config_name"": ""ki"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ki/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ki/validation.parquet""}]}, {""config_name"": ""kk"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kk/train.parquet""}, {""split"": ""validation"", ""path"": ""data/kk/validation.parquet""}]}, {""config_name"": ""kl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kl/train.parquet""}, {""split"": ""validation"", ""path"": ""data/kl/validation.parquet""}]}, {""config_name"": ""km"", ""data_files"": [{""split"": ""train"", ""path"": ""data/km/train.parquet""}, {""split"": ""validation"", ""path"": ""data/km/validation.parquet""}]}, {""config_name"": ""kn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/kn/validation.parquet""}]}, {""config_name"": ""ko"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ko/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ko/validation.parquet""}]}, {""config_name"": ""koi"", ""data_files"": [{""split"": ""train"", ""path"": ""data/koi/train.parquet""}, {""split"": ""validation"", ""path"": ""data/koi/validation.parquet""}]}, {""config_name"": ""krc"", ""data_files"": [{""split"": ""train"", ""path"": ""data/krc/train.parquet""}, {""split"": ""validation"", ""path"": ""data/krc/validation.parquet""}]}, {""config_name"": ""ks"", 
""data_files"": [{""split"": ""train"", ""path"": ""data/ks/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ks/validation.parquet""}]}, {""config_name"": ""ksh"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ksh/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ksh/validation.parquet""}]}, {""config_name"": ""ku"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ku/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ku/validation.parquet""}]}, {""config_name"": ""kv"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kv/train.parquet""}, {""split"": ""validation"", ""path"": ""data/kv/validation.parquet""}]}, {""config_name"": ""kw"", ""data_files"": [{""split"": ""train"", ""path"": ""data/kw/train.parquet""}, {""split"": ""validation"", ""path"": ""data/kw/validation.parquet""}]}, {""config_name"": ""ky"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ky/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ky/validation.parquet""}]}, {""config_name"": ""la"", ""data_files"": [{""split"": ""train"", ""path"": ""data/la/train.parquet""}, {""split"": ""validation"", ""path"": ""data/la/validation.parquet""}]}, {""config_name"": ""lad"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lad/train.parquet""}, {""split"": ""validation"", ""path"": ""data/lad/validation.parquet""}]}, {""config_name"": ""lb"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lb/train.parquet""}, {""split"": ""validation"", ""path"": ""data/lb/validation.parquet""}]}, {""config_name"": ""lbe"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lbe/train.parquet""}, {""split"": ""validation"", ""path"": ""data/lbe/validation.parquet""}]}, {""config_name"": ""lez"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lez/train.parquet""}, {""split"": ""validation"", ""path"": ""data/lez/validation.parquet""}]}, {""config_name"": ""lfn"", ""data_files"": [{""split"": ""train"", 
""path"": ""data/lfn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/lfn/validation.parquet""}]}, {""config_name"": ""lg"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lg/train.parquet""}, {""split"": ""validation"", ""path"": ""data/lg/validation.parquet""}]}, {""config_name"": ""li"", ""data_files"": [{""split"": ""train"", ""path"": ""data/li/train.parquet""}, {""split"": ""validation"", ""path"": ""data/li/validation.parquet""}]}, {""config_name"": ""lij"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lij/train.parquet""}, {""split"": ""validation"", ""path"": ""data/lij/validation.parquet""}]}, {""config_name"": ""lld"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lld/train.parquet""}, {""split"": ""validation"", ""path"": ""data/lld/validation.parquet""}]}, {""config_name"": ""lmo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lmo/train.parquet""}, {""split"": ""validation"", ""path"": ""data/lmo/validation.parquet""}]}, {""config_name"": ""ln"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ln/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ln/validation.parquet""}]}, {""config_name"": ""lo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lo/train.parquet""}, {""split"": ""validation"", ""path"": ""data/lo/validation.parquet""}]}, {""config_name"": ""lt"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lt/train.parquet""}, {""split"": ""validation"", ""path"": ""data/lt/validation.parquet""}]}, {""config_name"": ""ltg"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ltg/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ltg/validation.parquet""}]}, {""config_name"": ""lv"", ""data_files"": [{""split"": ""train"", ""path"": ""data/lv/train.parquet""}, {""split"": ""validation"", ""path"": ""data/lv/validation.parquet""}]}, {""config_name"": ""mad"", ""data_files"": [{""split"": ""train"", ""path"": 
""data/mad/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mad/validation.parquet""}]}, {""config_name"": ""mai"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mai/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mai/validation.parquet""}]}, {""config_name"": ""map_bms"", ""data_files"": [{""split"": ""train"", ""path"": ""data/map_bms/train.parquet""}, {""split"": ""validation"", ""path"": ""data/map_bms/validation.parquet""}]}, {""config_name"": ""mdf"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mdf/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mdf/validation.parquet""}]}, {""config_name"": ""mg"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mg/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mg/validation.parquet""}]}, {""config_name"": ""mhr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mhr/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mhr/validation.parquet""}]}, {""config_name"": ""mi"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mi/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mi/validation.parquet""}]}, {""config_name"": ""min"", ""data_files"": [{""split"": ""train"", ""path"": ""data/min/train.parquet""}, {""split"": ""validation"", ""path"": ""data/min/validation.parquet""}]}, {""config_name"": ""mk"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mk/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mk/validation.parquet""}]}, {""config_name"": ""ml"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ml/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ml/validation.parquet""}]}, {""config_name"": ""mn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mn/validation.parquet""}]}, {""config_name"": ""mni"", ""data_files"": [{""split"": ""train"", ""path"": 
""data/mni/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mni/validation.parquet""}]}, {""config_name"": ""mnw"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mnw/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mnw/validation.parquet""}]}, {""config_name"": ""mr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mr/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mr/validation.parquet""}]}, {""config_name"": ""mrj"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mrj/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mrj/validation.parquet""}]}, {""config_name"": ""ms"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ms/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ms/validation.parquet""}]}, {""config_name"": ""mt"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mt/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mt/validation.parquet""}]}, {""config_name"": ""mwl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mwl/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mwl/validation.parquet""}]}, {""config_name"": ""my"", ""data_files"": [{""split"": ""train"", ""path"": ""data/my/train.parquet""}, {""split"": ""validation"", ""path"": ""data/my/validation.parquet""}]}, {""config_name"": ""myv"", ""data_files"": [{""split"": ""train"", ""path"": ""data/myv/train.parquet""}, {""split"": ""validation"", ""path"": ""data/myv/validation.parquet""}]}, {""config_name"": ""mzn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/mzn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/mzn/validation.parquet""}]}, {""config_name"": ""nah"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nah/train.parquet""}, {""split"": ""validation"", ""path"": ""data/nah/validation.parquet""}]}, {""config_name"": ""nap"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nap/train.parquet""}, 
{""split"": ""validation"", ""path"": ""data/nap/validation.parquet""}]}, {""config_name"": ""nds"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nds/train.parquet""}, {""split"": ""validation"", ""path"": ""data/nds/validation.parquet""}]}, {""config_name"": ""nds_nl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nds_nl/train.parquet""}, {""split"": ""validation"", ""path"": ""data/nds_nl/validation.parquet""}]}, {""config_name"": ""ne"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ne/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ne/validation.parquet""}]}, {""config_name"": ""new"", ""data_files"": [{""split"": ""train"", ""path"": ""data/new/train.parquet""}, {""split"": ""validation"", ""path"": ""data/new/validation.parquet""}]}, {""config_name"": ""nia"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nia/train.parquet""}, {""split"": ""validation"", ""path"": ""data/nia/validation.parquet""}]}, {""config_name"": ""nl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nl/train.parquet""}, {""split"": ""validation"", ""path"": ""data/nl/validation.parquet""}]}, {""config_name"": ""nn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/nn/validation.parquet""}]}, {""config_name"": ""no"", ""data_files"": [{""split"": ""train"", ""path"": ""data/no/train.parquet""}, {""split"": ""validation"", ""path"": ""data/no/validation.parquet""}]}, {""config_name"": ""nov"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nov/train.parquet""}, {""split"": ""validation"", ""path"": ""data/nov/validation.parquet""}]}, {""config_name"": ""nqo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nqo/train.parquet""}, {""split"": ""validation"", ""path"": ""data/nqo/validation.parquet""}]}, {""config_name"": ""nrm"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nrm/train.parquet""}, {""split"": 
""validation"", ""path"": ""data/nrm/validation.parquet""}]}, {""config_name"": ""nso"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nso/train.parquet""}, {""split"": ""validation"", ""path"": ""data/nso/validation.parquet""}]}, {""config_name"": ""nv"", ""data_files"": [{""split"": ""train"", ""path"": ""data/nv/train.parquet""}, {""split"": ""validation"", ""path"": ""data/nv/validation.parquet""}]}, {""config_name"": ""ny"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ny/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ny/validation.parquet""}]}, {""config_name"": ""oc"", ""data_files"": [{""split"": ""train"", ""path"": ""data/oc/train.parquet""}, {""split"": ""validation"", ""path"": ""data/oc/validation.parquet""}]}, {""config_name"": ""olo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/olo/train.parquet""}, {""split"": ""validation"", ""path"": ""data/olo/validation.parquet""}]}, {""config_name"": ""om"", ""data_files"": [{""split"": ""train"", ""path"": ""data/om/train.parquet""}, {""split"": ""validation"", ""path"": ""data/om/validation.parquet""}]}, {""config_name"": ""or"", ""data_files"": [{""split"": ""train"", ""path"": ""data/or/train.parquet""}, {""split"": ""validation"", ""path"": ""data/or/validation.parquet""}]}, {""config_name"": ""os"", ""data_files"": [{""split"": ""train"", ""path"": ""data/os/train.parquet""}, {""split"": ""validation"", ""path"": ""data/os/validation.parquet""}]}, {""config_name"": ""pa"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pa/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pa/validation.parquet""}]}, {""config_name"": ""pag"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pag/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pag/validation.parquet""}]}, {""config_name"": ""pam"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pam/train.parquet""}, {""split"": ""validation"", ""path"": 
""data/pam/validation.parquet""}]}, {""config_name"": ""pap"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pap/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pap/validation.parquet""}]}, {""config_name"": ""pcd"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pcd/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pcd/validation.parquet""}]}, {""config_name"": ""pcm"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pcm/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pcm/validation.parquet""}]}, {""config_name"": ""pdc"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pdc/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pdc/validation.parquet""}]}, {""config_name"": ""pfl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pfl/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pfl/validation.parquet""}]}, {""config_name"": ""pi"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pi/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pi/validation.parquet""}]}, {""config_name"": ""pih"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pih/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pih/validation.parquet""}]}, {""config_name"": ""pl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pl/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pl/validation.parquet""}]}, {""config_name"": ""pms"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pms/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pms/validation.parquet""}]}, {""config_name"": ""pnb"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pnb/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pnb/validation.parquet""}]}, {""config_name"": ""pnt"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pnt/train.parquet""}, {""split"": ""validation"", ""path"": 
""data/pnt/validation.parquet""}]}, {""config_name"": ""ps"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ps/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ps/validation.parquet""}]}, {""config_name"": ""pt"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pt/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pt/validation.parquet""}]}, {""config_name"": ""pwn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/pwn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/pwn/validation.parquet""}]}, {""config_name"": ""qu"", ""data_files"": [{""split"": ""train"", ""path"": ""data/qu/train.parquet""}, {""split"": ""validation"", ""path"": ""data/qu/validation.parquet""}]}, {""config_name"": ""rm"", ""data_files"": [{""split"": ""train"", ""path"": ""data/rm/train.parquet""}, {""split"": ""validation"", ""path"": ""data/rm/validation.parquet""}]}, {""config_name"": ""rmy"", ""data_files"": [{""split"": ""train"", ""path"": ""data/rmy/train.parquet""}, {""split"": ""validation"", ""path"": ""data/rmy/validation.parquet""}]}, {""config_name"": ""rn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/rn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/rn/validation.parquet""}]}, {""config_name"": ""ro"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ro/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ro/validation.parquet""}]}, {""config_name"": ""roa_rup"", ""data_files"": [{""split"": ""train"", ""path"": ""data/roa_rup/train.parquet""}, {""split"": ""validation"", ""path"": ""data/roa_rup/validation.parquet""}]}, {""config_name"": ""roa_tara"", ""data_files"": [{""split"": ""train"", ""path"": ""data/roa_tara/train.parquet""}, {""split"": ""validation"", ""path"": ""data/roa_tara/validation.parquet""}]}, {""config_name"": ""ru"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ru/train.parquet""}, {""split"": ""validation"", ""path"": 
""data/ru/validation.parquet""}]}, {""config_name"": ""rue"", ""data_files"": [{""split"": ""train"", ""path"": ""data/rue/train.parquet""}, {""split"": ""validation"", ""path"": ""data/rue/validation.parquet""}]}, {""config_name"": ""rw"", ""data_files"": [{""split"": ""train"", ""path"": ""data/rw/train.parquet""}, {""split"": ""validation"", ""path"": ""data/rw/validation.parquet""}]}, {""config_name"": ""sa"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sa/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sa/validation.parquet""}]}, {""config_name"": ""sah"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sah/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sah/validation.parquet""}]}, {""config_name"": ""sat"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sat/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sat/validation.parquet""}]}, {""config_name"": ""sc"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sc/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sc/validation.parquet""}]}, {""config_name"": ""scn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/scn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/scn/validation.parquet""}]}, {""config_name"": ""sco"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sco/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sco/validation.parquet""}]}, {""config_name"": ""sd"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sd/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sd/validation.parquet""}]}, {""config_name"": ""se"", ""data_files"": [{""split"": ""train"", ""path"": ""data/se/train.parquet""}, {""split"": ""validation"", ""path"": ""data/se/validation.parquet""}]}, {""config_name"": ""sg"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sg/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sg/validation.parquet""}]}, 
{""config_name"": ""sh"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sh/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sh/validation.parquet""}]}, {""config_name"": ""shi"", ""data_files"": [{""split"": ""train"", ""path"": ""data/shi/train.parquet""}, {""split"": ""validation"", ""path"": ""data/shi/validation.parquet""}]}, {""config_name"": ""shn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/shn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/shn/validation.parquet""}]}, {""config_name"": ""si"", ""data_files"": [{""split"": ""train"", ""path"": ""data/si/train.parquet""}, {""split"": ""validation"", ""path"": ""data/si/validation.parquet""}]}, {""config_name"": ""simple"", ""data_files"": [{""split"": ""train"", ""path"": ""data/simple/train.parquet""}, {""split"": ""validation"", ""path"": ""data/simple/validation.parquet""}]}, {""config_name"": ""sk"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sk/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sk/validation.parquet""}]}, {""config_name"": ""skr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/skr/train.parquet""}, {""split"": ""validation"", ""path"": ""data/skr/validation.parquet""}]}, {""config_name"": ""sl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sl/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sl/validation.parquet""}]}, {""config_name"": ""sm"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sm/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sm/validation.parquet""}]}, {""config_name"": ""smn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/smn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/smn/validation.parquet""}]}, {""config_name"": ""sn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sn/validation.parquet""}]}, {""config_name"": ""so"", 
""data_files"": [{""split"": ""train"", ""path"": ""data/so/train.parquet""}, {""split"": ""validation"", ""path"": ""data/so/validation.parquet""}]}, {""config_name"": ""sq"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sq/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sq/validation.parquet""}]}, {""config_name"": ""sr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sr/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sr/validation.parquet""}]}, {""config_name"": ""srn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/srn/train.parquet""}, {""split"": ""validation"", ""path"": ""data/srn/validation.parquet""}]}, {""config_name"": ""ss"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ss/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ss/validation.parquet""}]}, {""config_name"": ""st"", ""data_files"": [{""split"": ""train"", ""path"": ""data/st/train.parquet""}, {""split"": ""validation"", ""path"": ""data/st/validation.parquet""}]}, {""config_name"": ""stq"", ""data_files"": [{""split"": ""train"", ""path"": ""data/stq/train.parquet""}, {""split"": ""validation"", ""path"": ""data/stq/validation.parquet""}]}, {""config_name"": ""su"", ""data_files"": [{""split"": ""train"", ""path"": ""data/su/train.parquet""}, {""split"": ""validation"", ""path"": ""data/su/validation.parquet""}]}, {""config_name"": ""sv"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sv/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sv/validation.parquet""}]}, {""config_name"": ""sw"", ""data_files"": [{""split"": ""train"", ""path"": ""data/sw/train.parquet""}, {""split"": ""validation"", ""path"": ""data/sw/validation.parquet""}]}, {""config_name"": ""szl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/szl/train.parquet""}, {""split"": ""validation"", ""path"": ""data/szl/validation.parquet""}]}, {""config_name"": ""szy"", ""data_files"": [{""split"": ""train"", 
""path"": ""data/szy/train.parquet""}, {""split"": ""validation"", ""path"": ""data/szy/validation.parquet""}]}, {""config_name"": ""ta"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ta/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ta/validation.parquet""}]}, {""config_name"": ""tay"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tay/train.parquet""}, {""split"": ""validation"", ""path"": ""data/tay/validation.parquet""}]}, {""config_name"": ""tcy"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tcy/train.parquet""}, {""split"": ""validation"", ""path"": ""data/tcy/validation.parquet""}]}, {""config_name"": ""te"", ""data_files"": [{""split"": ""train"", ""path"": ""data/te/train.parquet""}, {""split"": ""validation"", ""path"": ""data/te/validation.parquet""}]}, {""config_name"": ""tet"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tet/train.parquet""}, {""split"": ""validation"", ""path"": ""data/tet/validation.parquet""}]}, {""config_name"": ""tg"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tg/train.parquet""}, {""split"": ""validation"", ""path"": ""data/tg/validation.parquet""}]}, {""config_name"": ""th"", ""data_files"": [{""split"": ""train"", ""path"": ""data/th/train.parquet""}, {""split"": ""validation"", ""path"": ""data/th/validation.parquet""}]}, {""config_name"": ""ti"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ti/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ti/validation.parquet""}]}, {""config_name"": ""tk"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tk/train.parquet""}, {""split"": ""validation"", ""path"": ""data/tk/validation.parquet""}]}, {""config_name"": ""tl"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tl/train.parquet""}, {""split"": ""validation"", ""path"": ""data/tl/validation.parquet""}]}, {""config_name"": ""tn"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tn/train.parquet""}, 
{""split"": ""validation"", ""path"": ""data/tn/validation.parquet""}]}, {""config_name"": ""to"", ""data_files"": [{""split"": ""train"", ""path"": ""data/to/train.parquet""}, {""split"": ""validation"", ""path"": ""data/to/validation.parquet""}]}, {""config_name"": ""tpi"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tpi/train.parquet""}, {""split"": ""validation"", ""path"": ""data/tpi/validation.parquet""}]}, {""config_name"": ""tr"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tr/train.parquet""}, {""split"": ""validation"", ""path"": ""data/tr/validation.parquet""}]}, {""config_name"": ""trv"", ""data_files"": [{""split"": ""train"", ""path"": ""data/trv/train.parquet""}, {""split"": ""validation"", ""path"": ""data/trv/validation.parquet""}]}, {""config_name"": ""ts"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ts/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ts/validation.parquet""}]}, {""config_name"": ""tt"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tt/train.parquet""}, {""split"": ""validation"", ""path"": ""data/tt/validation.parquet""}]}, {""config_name"": ""tum"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tum/train.parquet""}, {""split"": ""validation"", ""path"": ""data/tum/validation.parquet""}]}, {""config_name"": ""tw"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tw/train.parquet""}, {""split"": ""validation"", ""path"": ""data/tw/validation.parquet""}]}, {""config_name"": ""ty"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ty/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ty/validation.parquet""}]}, {""config_name"": ""tyv"", ""data_files"": [{""split"": ""train"", ""path"": ""data/tyv/train.parquet""}, {""split"": ""validation"", ""path"": ""data/tyv/validation.parquet""}]}, {""config_name"": ""udm"", ""data_files"": [{""split"": ""train"", ""path"": ""data/udm/train.parquet""}, {""split"": ""validation"", ""path"": 
""data/udm/validation.parquet""}]}, {""config_name"": ""ug"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ug/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ug/validation.parquet""}]}, {""config_name"": ""uk"", ""data_files"": [{""split"": ""train"", ""path"": ""data/uk/train.parquet""}, {""split"": ""validation"", ""path"": ""data/uk/validation.parquet""}]}, {""config_name"": ""ur"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ur/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ur/validation.parquet""}]}, {""config_name"": ""uz"", ""data_files"": [{""split"": ""train"", ""path"": ""data/uz/train.parquet""}, {""split"": ""validation"", ""path"": ""data/uz/validation.parquet""}]}, {""config_name"": ""ve"", ""data_files"": [{""split"": ""train"", ""path"": ""data/ve/train.parquet""}, {""split"": ""validation"", ""path"": ""data/ve/validation.parquet""}]}, {""config_name"": ""vec"", ""data_files"": [{""split"": ""train"", ""path"": ""data/vec/train.parquet""}, {""split"": ""validation"", ""path"": ""data/vec/validation.parquet""}]}, {""config_name"": ""vep"", ""data_files"": [{""split"": ""train"", ""path"": ""data/vep/train.parquet""}, {""split"": ""validation"", ""path"": ""data/vep/validation.parquet""}]}, {""config_name"": ""vi"", ""data_files"": [{""split"": ""train"", ""path"": ""data/vi/train.parquet""}, {""split"": ""validation"", ""path"": ""data/vi/validation.parquet""}]}, {""config_name"": ""vls"", ""data_files"": [{""split"": ""train"", ""path"": ""data/vls/train.parquet""}, {""split"": ""validation"", ""path"": ""data/vls/validation.parquet""}]}, {""config_name"": ""vo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/vo/train.parquet""}, {""split"": ""validation"", ""path"": ""data/vo/validation.parquet""}]}, {""config_name"": ""wa"", ""data_files"": [{""split"": ""train"", ""path"": ""data/wa/train.parquet""}, {""split"": ""validation"", ""path"": ""data/wa/validation.parquet""}]}, 
{""config_name"": ""war"", ""data_files"": [{""split"": ""train"", ""path"": ""data/war/train.parquet""}, {""split"": ""validation"", ""path"": ""data/war/validation.parquet""}]}, {""config_name"": ""wo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/wo/train.parquet""}, {""split"": ""validation"", ""path"": ""data/wo/validation.parquet""}]}, {""config_name"": ""wuu"", ""data_files"": [{""split"": ""train"", ""path"": ""data/wuu/train.parquet""}, {""split"": ""validation"", ""path"": ""data/wuu/validation.parquet""}]}, {""config_name"": ""xal"", ""data_files"": [{""split"": ""train"", ""path"": ""data/xal/train.parquet""}, {""split"": ""validation"", ""path"": ""data/xal/validation.parquet""}]}, {""config_name"": ""xh"", ""data_files"": [{""split"": ""train"", ""path"": ""data/xh/train.parquet""}, {""split"": ""validation"", ""path"": ""data/xh/validation.parquet""}]}, {""config_name"": ""xmf"", ""data_files"": [{""split"": ""train"", ""path"": ""data/xmf/train.parquet""}, {""split"": ""validation"", ""path"": ""data/xmf/validation.parquet""}]}, {""config_name"": ""yi"", ""data_files"": [{""split"": ""train"", ""path"": ""data/yi/train.parquet""}, {""split"": ""validation"", ""path"": ""data/yi/validation.parquet""}]}, {""config_name"": ""yo"", ""data_files"": [{""split"": ""train"", ""path"": ""data/yo/train.parquet""}, {""split"": ""validation"", ""path"": ""data/yo/validation.parquet""}]}, {""config_name"": ""za"", ""data_files"": [{""split"": ""train"", ""path"": ""data/za/train.parquet""}, {""split"": ""validation"", ""path"": ""data/za/validation.parquet""}]}, {""config_name"": ""zea"", ""data_files"": [{""split"": ""train"", ""path"": ""data/zea/train.parquet""}, {""split"": ""validation"", ""path"": ""data/zea/validation.parquet""}]}, {""config_name"": ""zh"", ""data_files"": [{""split"": ""train"", ""path"": ""data/zh/train.parquet""}, {""split"": ""validation"", ""path"": ""data/zh/validation.parquet""}]}, {""config_name"": ""zh_classical"", 
""data_files"": [{""split"": ""train"", ""path"": ""data/zh_classical/train.parquet""}, {""split"": ""validation"", ""path"": ""data/zh_classical/validation.parquet""}]}, {""config_name"": ""zh_min_nan"", ""data_files"": [{""split"": ""train"", ""path"": ""data/zh_min_nan/train.parquet""}, {""split"": ""validation"", ""path"": ""data/zh_min_nan/validation.parquet""}]}, {""config_name"": ""zh_yue"", ""data_files"": [{""split"": ""train"", ""path"": ""data/zh_yue/train.parquet""}, {""split"": ""validation"", ""path"": ""data/zh_yue/validation.parquet""}]}, {""config_name"": ""zu"", ""data_files"": [{""split"": ""train"", ""path"": ""data/zu/train.parquet""}, {""split"": ""validation"", ""path"": ""data/zu/validation.parquet""}]}]}","# Dataset Card for WikiAnc
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** [WikiAnc repository](https://github.com/cyanic-selkie/wikianc)
### Dataset Summary
The WikiAnc dataset is automatically generated from Wikipedia (all languages) and Wikidata dumps (August 2023).
The code for generating the dataset can be found [here](https://github.com/cyanic-selkie/wikianc).
### Supported Tasks
- `wikification`: The dataset can be used to train a model for Wikification.
- `named-entity-linking`: The dataset can be used to train a model for Named Entity Linking.
### Languages
The dataset covers all 320 Wikipedia languages; the full list can be found in the table below.
## Dataset Structure
### Data Instances
A typical data point represents a paragraph in a Wikipedia article.
The `paragraph_text` field contains the original text as an NFC normalized, UTF-8 encoded string.
The `paragraph_anchors` field contains a list of anchors, each represented by a struct with an inclusive starting code point offset in the `start` field, an exclusive ending code point offset in the `end` field, a nullable `qid` field, a nullable `pageid` field, and an NFC normalized, UTF-8 encoded Wikipedia `title` field.
Additionally, each paragraph has `article_title`, `article_pageid`, and (nullable) `article_qid` fields referring to the article the paragraph came from.
There is also a nullable, NFC normalized, UTF-8 encoded `section_heading` field and an integer `section_level` field, giving the heading (if any) of the section the paragraph came from and its level in the section hierarchy.
The `qid` field refers to Wikidata's QID identifier, while the `pageid` and `title` fields refer to Wikipedia's pageID and title identifiers (there is a one-to-one mapping between pageIDs and titles).
**NOTE:** An anchor will always have a `title`, but that doesn't mean it has to have a `pageid`. This is because Wikipedia allows defining anchors to nonexistent articles.
An example from the WikiAnc EN test set looks as follows:
```
{
""uuid"": ""5f74e678-944f-4761-a5e0-b6426f6f61b8"",
""article_title"": ""Climatius"",
""article_pageid"": 5394373,
""article_qid"": 867987,
""section_heading"": null,
""section_level"": 0,
""paragraph_text"": ""It was a small fish, at 7.5 cm, and to discourage predators, Climatius sported fifteen sharp spines. There was one spine each on the paired pelvic and pectoral fins, and on the aingle anal and two dorsal fins, and a four pairs without fins on the fish's underside."",
""paragraph_anchors"": [
{
""start"": 140,
""end"": 146,
""qid"": 3335089,
""pageid"": 56849833,
""title"": ""Pelvic_fin""
},
{
""start"": 151,
""end"": 159,
""qid"": 4162555,
""pageid"": 331956,
""title"": ""Pectoral_fin""
},
{
""start"": 184,
""end"": 188,
""qid"": 4162555,
""pageid"": 331958,
""title"": ""Anal_fin""
},
{
""start"": 197,
""end"": 208,
""qid"": 1568355,
""pageid"": 294244,
""title"": ""Dorsal_fin""
}
]
}
```
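Since the anchor offsets count code points, which is exactly how Python indexes `str` objects, the anchor surface forms can be recovered with a plain slice. A minimal sketch using the test-set example above (the `anchor_surface_forms` helper is illustrative, not part of the dataset tooling):

```python
# Sketch: recovering anchor surface forms from a WikiAnc example.
# `start`/`end` count code points, matching Python str indexing,
# so a simple slice yields the text each anchor covers.

example = {
    "paragraph_text": (
        "It was a small fish, at 7.5 cm, and to discourage predators, "
        "Climatius sported fifteen sharp spines. There was one spine each "
        "on the paired pelvic and pectoral fins, and on the aingle anal "
        "and two dorsal fins, and a four pairs without fins on the fish's "
        "underside."
    ),
    "paragraph_anchors": [
        {"start": 140, "end": 146, "qid": 3335089, "pageid": 56849833, "title": "Pelvic_fin"},
        {"start": 151, "end": 159, "qid": 4162555, "pageid": 331956, "title": "Pectoral_fin"},
    ],
}

def anchor_surface_forms(example: dict) -> list[str]:
    """Return the text span each anchor covers."""
    text = example["paragraph_text"]
    return [text[a["start"]:a["end"]] for a in example["paragraph_anchors"]]

print(anchor_surface_forms(example))  # ['pelvic', 'pectoral']
```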
### Data Fields
- `uuid`: a UTF-8 encoded string representing a v4 UUID that uniquely identifies the example
- `article_title`: an NFC normalized, UTF-8 encoded Wikipedia title of the article; spaces are replaced with underscores
- `article_pageid`: an integer representing the Wikipedia pageID of the article
- `article_qid`: a nullable integer representing the Wikidata QID this article refers to; it can be null if the entity didn't exist in Wikidata at the time of the creation of the original dataset
- `section_heading`: a nullable, NFC normalized, UTF-8 encoded string representing the section heading
- `section_level`: an integer representing the level of the section in the section hierarchy
- `paragraph_text`: an NFC normalized, UTF-8 encoded string representing the paragraph
- `paragraph_anchors`: a list of structs representing anchors, each anchor has:
- `start`: an integer representing the inclusive starting code point offset of the anchor
- `end`: an integer representing the exclusive ending code point offset of the anchor
- `qid`: a nullable integer representing the Wikidata QID this anchor refers to; it can be null if the entity didn't exist in Wikidata at the time of the creation of the original dataset
- `pageid`: a nullable integer representing the Wikipedia pageID of the anchor; it can be null if the article didn't exist in Wikipedia at the time of the creation of the original dataset
- `title`: an NFC normalized, UTF-8 encoded string representing the Wikipedia title of the anchor; spaces are replaced with underscores; can refer to a nonexistent Wikipedia article
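Because `qid` and `pageid` are independently nullable, consumers often need to distinguish fully resolvable anchors from "red links". A small sketch of that check (the `classify_anchor` helper and its labels are hypothetical, not part of the dataset tooling):

```python
# Sketch: classifying anchors by how resolvable they are.
# An anchor always has a `title`; `pageid` is null (None) when the target
# article doesn't exist, and `qid` is null when no Wikidata item existed.

def classify_anchor(anchor: dict) -> str:
    if anchor["pageid"] is None:
        return "red_link"          # title points to a nonexistent article
    if anchor["qid"] is None:
        return "page_without_qid"  # article exists but had no Wikidata item
    return "fully_linked"

anchors = [
    {"title": "Dorsal_fin", "pageid": 294244, "qid": 1568355},
    {"title": "Some_missing_article", "pageid": None, "qid": None},
]
print([classify_anchor(a) for a in anchors])  # ['fully_linked', 'red_link']
```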
### Data Splits
The data is split into training, validation and test sets; paragraphs belonging to the same article aren't necessarily in the same split. The final split sizes are as follows:
#### Train
| | Articles | Paragraphs | Anchors | Anchors with QIDs | Anchors with PageIDs |
| :-- | --: | --: | --: | --: | --: |
| ab | 2378 | 5678 | 10515 | 3649 | 3650 |
| ace | 12591 | 23969 | 48638 | 25150 | 25175 |
| ady | 596 | 1662 | 2694 | 1593 | 1606 |
| af | 104470 | 399038 | 985640 | 900596 | 900967 |
| als | 27999 | 165085 | 402049 | 294742 | 294744 |
| alt | 1043 | 7468 | 9158 | 5446 | 5452 |
| am | 13576 | 46318 | 90051 | 51915 | 52173 |
| ami | 1582 | 12428 | 6080 | 1505 | 2579 |
| an | 40179 | 121367 | 669830 | 516248 | 516822 |
| ang | 3833 | 9664 | 24297 | 10189 | 10229 |
| anp | 2506 | 6865 | 14560 | 3825 | 5061 |
| ar | 1132271 | 3617491 | 11657228 | 11240112 | 11244160 |
| arc | 1844 | 3766 | 9232 | 5460 | 5545 |
| ary | 6736 | 17049 | 50185 | 34193 | 34227 |
| arz | 1579782 | 3693549 | 7879303 | 6906799 | 6917393 |
| as | 11947 | 77835 | 122760 | 67594 | 67720 |
| ast | 126992 | 877278 | 2952000 | 1775764 | 1777383 |
| atj | 1872 | 3820 | 6544 | 3247 | 3365 |
| av | 3048 | 8542 | 16115 | 8895 | 9000 |
| avk | 27577 | 85219 | 106100 | 32260 | 33491 |
| awa | 3396 | 5802 | 6617 | 1679 | 2370 |
| ay | 5102 | 15125 | 22802 | 13930 | 13933 |
| az | 180810 | 789902 | 1570889 | 1377797 | 1380325 |
| azb | 240990 | 585386 | 1241661 | 749575 | 753318 |
| ba | 62269 | 391926 | 625645 | 562730 | 563181 |
| ban | 18955 | 44138 | 86239 | 66213 | 66412 |
| bar | 26057 | 83298 | 185158 | 109082 | 109091 |
| bat_smg | 17013 | 41951 | 77417 | 51701 | 51733 |
| bcl | 13783 | 45457 | 78963 | 47819 | 47861 |
| be | 222883 | 821135 | 2499258 | 2204062 | 2204117 |
| bg | 285156 | 1336530 | 3967713 | 3618800 | 3627798 |
| bh | 7658 | 17052 | 29110 | 22157 | 22217 |
| bi | 1403 | 1712 | 3172 | 1991 | 1995 |
| bjn | 9672 | 19007 | 58660 | 32538 | 33071 |
| blk | 2786 | 11825 | 11341 | 5979 | 6129 |
| bm | 1111 | 2421 | 2451 | 1217 | 1218 |
| bn | 136921 | 736388 | 1530942 | 1161967 | 1162761 |
| bo | 11843 | 37121 | 8241 | 6265 | 6359 |
| bpy | 24742 | 115606 | 166906 | 86166 | 86170 |
| br | 78524 | 214128 | 657375 | 527295 | 527606 |
| bs | 86407 | 382114 | 1246030 | 965782 | 966511 |
| bug | 14231 | 14484 | 53879 | 14787 | 15146 |
| bxr | 2730 | 9571 | 27853 | 11560 | 11567 |
| ca | 691444 | 3596667 | 11359870 | 10236358 | 10237666 |
| cbk_zam | 2989 | 8322 | 9939 | 2790 | 2847 |
| cdo | 15922 | 30059 | 63474 | 29659 | 29705 |
| ce | 597137 | 2121587 | 3097393 | 1507129 | 1507806 |
| ceb | 5888811 | 11920613 | 37969424 | 33678489 | 33962205 |
| ch | 574 | 1166 | 2290 | 492 | 601 |
| chr | 980 | 1110 | 1311 | 779 | 790 |
| chy | 711 | 753 | 494 | 428 | 428 |
| ckb | 48903 | 163599 | 435662 | 224749 | 226749 |
| co | 6719 | 22954 | 46391 | 24149 | 24229 |
| cr | 158 | 216 | 209 | 94 | 94 |
| crh | 24117 | 29781 | 98534 | 70231 | 70235 |
| cs | 516037 | 2679537 | 9917806 | 8763103 | 8763291 |
| csb | 5315 | 14009 | 31294 | 16820 | 16820 |
| cu | 1171 | 2796 | 5283 | 2346 | 2349 |
| cv | 50525 | 157542 | 375399 | 166889 | 167497 |
| cy | 276031 | 992900 | 2011030 | 1613064 | 1620632 |
| da | 284765 | 1167917 | 4352733 | 3854239 | 3854549 |
| dag | 9248 | 29213 | 46084 | 10981 | 14213 |
| de | 2780056 | 16093948 | 52497421 | 50480495 | 50480548 |
| din | 485 | 1551 | 1096 | 197 | 197 |
| diq | 37565 | 70969 | 155656 | 141636 | 141695 |
| dsb | 3083 | 8760 | 19397 | 9652 | 9652 |
| dty | 3339 | 6219 | 7505 | 4417 | 4447 |
| dv | 4190 | 16809 | 7906 | 3612 | 3620 |
| dz | 652 | 2623 | 272 | 94 | 100 |
| ee | 1075 | 2326 | 1823 | 861 | 926 |
| el | 224207 | 1527561 | 4181433 | 3119952 | 3121967 |
| eml | 12169 | 53861 | 115729 | 65775 | 65940 |
| en | 6514924 | 40656507 | 109681826 | 107761324 | 107768438 |
| eo | 330486 | 1116191 | 4257655 | 3975927 | 3979379 |
| es | 1792062 | 10890435 | 33729712 | 31581851 | 31648945 |
| et | 233078 | 1110906 | 3558448 | 2879595 | 2886824 |
| eu | 386029 | 1405747 | 3398477 | 3025183 | 3030635 |
| ext | 3472 | 9626 | 20554 | 11966 | 11978 |
| fa | 901254 | 2357271 | 6189352 | 5862106 | 5870803 |
| fat | 1044 | 6092 | 1717 | 120 | 857 |
| ff | 1763 | 4103 | 3483 | 2304 | 2413 |
| fi | 373226 | 1667296 | 5221239 | 4658292 | 4663471 |
| fiu_vro | 6417 | 19897 | 40418 | 23563 | 23609 |
| fj | 1157 | 1782 | 4852 | 1910 | 1911 |
| fo | 11809 | 30828 | 119267 | 95117 | 95259 |
| fr | 2432972 | 15252697 | 43564517 | 42573624 | 42589064 |
| frp | 5341 | 10574 | 36358 | 24905 | 24926 |
| frr | 16038 | 30821 | 80265 | 68184 | 68315 |
| fur | 3665 | 10651 | 29516 | 16249 | 16278 |
| fy | 46011 | 206153 | 1271339 | 985227 | 985511 |
| ga | 52168 | 130535 | 347037 | 288261 | 288309 |
| gag | 2408 | 4844 | 8551 | 4520 | 4520 |
| gan | 4219 | 9689 | 18994 | 14119 | 14128 |
| gcr | 2227 | 5163 | 2763 | 1186 | 1186 |
| gd | 15850 | 48217 | 141290 | 95557 | 95562 |
| gl | 190419 | 910543 | 3674404 | 2937660 | 2938634 |
| glk | 6484 | 15344 | 32631 | 21395 | 21447 |
| gn | 5064 | 15481 | 40641 | 30389 | 30440 |
| gom | 4192 | 37508 | 14192 | 2369 | 2382 |
| gor | 14388 | 28133 | 107341 | 66191 | 67016 |
| got | 960 | 2186 | 4093 | 1404 | 1415 |
| gpe | 899 | 3383 | 1199 | 796 | 815 |
| gu | 30025 | 114805 | 459063 | 348651 | 348731 |
| guc | 546 | 2545 | 2300 | 1025 | 1138 |
| gur | 1010 | 5043 | 1761 | 227 | 244 |
| guw | 1263 | 3719 | 7474 | 3116 | 5375 |
| gv | 5036 | 12213 | 48801 | 19659 | 19663 |
| ha | 31977 | 149096 | 115029 | 97167 | 98184 |
| hak | 8694 | 11505 | 39744 | 28150 | 28152 |
| haw | 2470 | 5810 | 11169 | 5700 | 5705 |
| he | 323472 | 2648617 | 10904148 | 10367532 | 10379886 |
| hi | 150121 | 538451 | 964251 | 795726 | 798254 |
| hif | 10534 | 21169 | 43463 | 23970 | 24316 |
| hr | 189415 | 876107 | 3210326 | 2752205 | 2758602 |
| hsb | 13183 | 40760 | 91863 | 66632 | 66633 |
| ht | 64850 | 154160 | 201547 | 166206 | 167961 |
| hu | 346711 | 1859683 | 5267990 | 4707580 | 4710525 |
| hy | 298066 | 1542920 | 3767938 | 2689014 | 2690466 |
| hyw | 11358 | 83640 | 161227 | 82218 | 84817 |
| ia | 24581 | 43289 | 129914 | 96517 | 96595 |
| id | 620895 | 2138237 | 6589957 | 5629372 | 5644832 |
| ie | 11020 | 22342 | 60890 | 46054 | 46122 |
| ig | 19448 | 110907 | 57963 | 31022 | 31298 |
| ik | 737 | 1016 | 848 | 551 | 580 |
| ilo | 14135 | 74304 | 126533 | 75701 | 75705 |
| inh | 1754 | 4640 | 13284 | 5770 | 6011 |
| io | 36312 | 101555 | 303765 | 258933 | 259001 |
| is | 54348 | 170321 | 574897 | 436767 | 437784 |
| it | 1610989 | 8718610 | 27447754 | 26116131 | 26126157 |
| iu | 502 | 757 | 536 | 414 | 418 |
| ja | 1355269 | 9276459 | 29002111 | 27752954 | 27801000 |
| jam | 1571 | 2260 | 5887 | 3588 | 3590 |
| jbo | 1287 | 3088 | 5831 | 546 | 546 |
| jv | 66323 | 148710 | 547010 | 381682 | 382052 |
| ka | 167161 | 695865 | 2275552 | 422090 | 422095 |
| kaa | 3540 | 9814 | 12930 | 5312 | 5752 |
| kab | 5346 | 14709 | 36889 | 22000 | 22050 |
| kbd | 1549 | 6348 | 14594 | 5277 | 5280 |
| kbp | 1846 | 6005 | 7119 | 6875 | 6880 |
| kcg | 871 | 1839 | 2953 | 1857 | 1871 |
| kg | 1187 | 1933 | 3835 | 2292 | 2295 |
| ki | 1482 | 2899 | 2035 | 1386 | 1649 |
| kk | 235740 | 889990 | 1840304 | 1143049 | 1151399 |
| kl | 282 | 1024 | 1337 | 302 | 302 |
| km | 11422 | 84697 | 111378 | 40954 | 41529 |
| kn | 30729 | 261724 | 432994 | 188536 | 188807 |
| ko | 606386 | 2159706 | 6217786 | 5715559 | 5725614 |
| koi | 3260 | 9065 | 17068 | 10628 | 10628 |
| krc | 1465 | 6234 | 18092 | 7294 | 7311 |
| ks | 4176 | 9446 | 15252 | 5917 | 6226 |
| ksh | 2836 | 11043 | 26577 | 9484 | 9496 |
| ku | 55166 | 112840 | 269080 | 208679 | 210304 |
| kv | 5236 | 13396 | 32141 | 26727 | 26744 |
| kw | 6884 | 18901 | 49462 | 28074 | 28194 |
| ky | 75426 | 191772 | 271376 | 189656 | 190133 |
| la | 124150 | 240343 | 1456464 | 1283285 | 1283728 |
| lad | 3538 | 11910 | 37456 | 19124 | 19124 |
| lb | 57747 | 178507 | 573528 | 443583 | 444601 |
| lbe | 1205 | 2249 | 4470 | 2543 | 2543 |
| lez | 4067 | 16675 | 36970 | 25834 | 25842 |
| lfn | 4506 | 21746 | 29785 | 14554 | 14560 |
| lg | 3814 | 23386 | 15539 | 2088 | 2724 |
| li | 14134 | 58711 | 212772 | 137110 | 137367 |
| lij | 8092 | 23366 | 61410 | 34939 | 34940 |
| lld | 152613 | 158049 | 578033 | 443976 | 458150 |
| lmo | 67387 | 136650 | 373890 | 274174 | 274612 |
| ln | 3132 | 6066 | 11086 | 7838 | 7874 |
| lo | 4734 | 15005 | 27132 | 8562 | 8799 |
| lt | 204135 | 775863 | 2687983 | 2406710 | 2414909 |
| ltg | 1018 | 2979 | 5815 | 2190 | 2193 |
| lv | 118530 | 437086 | 1458341 | 1244609 | 1247181 |
| mad | 1113 | 3500 | 3762 | 1149 | 1157 |
| mai | 13285 | 22572 | 53246 | 38119 | 38128 |
| map_bms | 10875 | 16411 | 67964 | 51125 | 51137 |
| mdf | 4002 | 11043 | 21658 | 9178 | 9183 |
| mg | 92227 | 213580 | 328751 | 265931 | 267633 |
| mhr | 11010 | 33013 | 60771 | 38153 | 38220 |
| mi | 7274 | 10154 | 29052 | 24854 | 25216 |
| min | 223075 | 422381 | 1315030 | 513108 | 515548 |
| mk | 131522 | 695456 | 1984109 | 1639280 | 1640744 |
| ml | 84334 | 415940 | 797903 | 485482 | 486324 |
| mn | 23434 | 124485 | 295548 | 142014 | 142984 |
| mni | 10354 | 18872 | 29474 | 18810 | 19876 |
| mnw | 3136 | 34165 | 9342 | 1908 | 2387 |
| mr | 92464 | 326662 | 633452 | 383501 | 392709 |
| mrj | 10156 | 20132 | 48416 | 24098 | 24098 |
| ms | 344459 | 988647 | 2424535 | 1932685 | 1937647 |
| mt | 5381 | 49856 | 104636 | 51251 | 51278 |
| mwl | 4402 | 37271 | 127176 | 25729 | 26366 |
| my | 103938 | 334243 | 445026 | 300567 | 303288 |
| myv | 7515 | 21592 | 36762 | 26570 | 26591 |
| mzn | 17364 | 39937 | 89805 | 46962 | 47020 |
| nah | 5934 | 12478 | 30805 | 13093 | 14364 |
| nap | 11235 | 22336 | 41891 | 20798 | 20804 |
| nds | 79228 | 242004 | 583941 | 305374 | 305422 |
| nds_nl | 6484 | 28252 | 94875 | 51767 | 51785 |
| ne | 30359 | 91033 | 153937 | 124841 | 125078 |
| new | 71653 | 245033 | 454251 | 289444 | 289912 |
| nia | 1496 | 4047 | 4524 | 2258 | 2812 |
| nl | 1948842 | 5867108 | 17953497 | 16886996 | 16893078 |
| nn | 160106 | 549454 | 1751481 | 1375622 | 1376155 |
| no | 591000 | 2213493 | 7050421 | 6471776 | 6476157 |
| nov | 1341 | 3711 | 7466 | 3948 | 3955 |
| nqo | 1489 | 9858 | 23633 | 6056 | 6981 |
| nrm | 4571 | 14279 | 38935 | 33295 | 33321 |
| nso | 7618 | 9505 | 36826 | 35621 | 35623 |
| nv | 21911 | 57663 | 123762 | 107139 | 107139 |
| ny | 1060 | 3164 | 4750 | 1455 | 1490 |
| oc | 85099 | 303185 | 1035051 | 791403 | 792043 |
| olo | 4348 | 14334 | 18704 | 8634 | 8647 |
| om | 1710 | 7496 | 8222 | 4333 | 4416 |
| or | 17027 | 76677 | 137274 | 57023 | 57064 |
| os | 17468 | 40488 | 80943 | 48124 | 48414 |
| pa | 50421 | 226354 | 344239 | 197594 | 198080 |
| pag | 2533 | 41416 | 4150 | 2907 | 2907 |
| pam | 7816 | 16493 | 53785 | 29375 | 29715 |
| pap | 3153 | 12086 | 22157 | 18161 | 18233 |
| pcd | 5272 | 12203 | 15602 | 12319 | 12360 |
| pcm | 1019 | 4631 | 4161 | 1160 | 1261 |
| pdc | 2009 | 5406 | 8151 | 4122 | 4144 |
| pfl | 2717 | 14024 | 26150 | 10291 | 10294 |
| pi | 2972 | 5959 | 7773 | 201 | 201 |
| pih | 829 | 1065 | 2857 | 2016 | 2018 |
| pl | 1468194 | 5599437 | 19364191 | 18389560 | 18405120 |
| pms | 66552 | 170133 | 369956 | 308593 | 314917 |
| pnb | 67534 | 402101 | 937247 | 525105 | 533265 |
| pnt | 497 | 1467 | 3553 | 1715 | 1716 |
| ps | 19254 | 134868 | 72493 | 36348 | 36899 |
| pt | 1048823 | 5226543 | 16811382 | 15714686 | 15714890 |
| pwn | 328 | 1825 | 990 | 428 | 430 |
| qu | 22365 | 47078 | 133032 | 106686 | 106708 |
| rm | 3569 | 27345 | 47169 | 20460 | 20490 |
| rmy | 911 | 2221 | 4235 | 1854 | 1965 |
| rn | 726 | 1641 | 1436 | 594 | 601 |
| ro | 417630 | 1518438 | 4282072 | 3764830 | 3765626 |
| roa_rup | 1270 | 2751 | 4641 | 2527 | 2537 |
| roa_tara | 8407 | 18031 | 42040 | 14330 | 14331 |
| ru | 1889271 | 12344758 | 30796034 | 29268121 | 29288089 |
| rue | 7369 | 21429 | 61022 | 43241 | 43256 |
| rw | 7793 | 35619 | 38066 | 19821 | 20967 |
| sa | 12069 | 78188 | 104193 | 40307 | 41518 |
| sah | 16007 | 76450 | 82154 | 61041 | 61412 |
| sat | 8655 | 43624 | 57493 | 28497 | 28820 |
| sc | 6919 | 24434 | 66719 | 44707 | 44733 |
| scn | 21990 | 49686 | 132583 | 102735 | 102774 |
| sco | 34097 | 86464 | 301450 | 148184 | 148406 |
| sd | 16228 | 48679 | 79392 | 34572 | 35729 |
| se | 6101 | 10531 | 25844 | 17978 | 18010 |
| sg | 473 | 537 | 318 | 184 | 184 |
| sh | 445218 | 1213741 | 4337559 | 3858400 | 3860253 |
| shi | 1650 | 6036 | 10364 | 4715 | 4926 |
| shn | 10653 | 51542 | 46976 | 29925 | 29993 |
| si | 21959 | 132932 | 146935 | 55158 | 56422 |
| simple | 224811 | 618711 | 2014692 | 1689101 | 1689185 |
| sk | 230073 | 845501 | 2867955 | 2468707 | 2469129 |
| skr | 5505 | 62742 | 38412 | 15004 | 21015 |
| sl | 175804 | 810714 | 2597824 | 2067682 | 2068522 |
| sm | 995 | 1591 | 3838 | 2515 | 2523 |
| smn | 5004 | 12483 | 37008 | 22440 | 22492 |
| sn | 10159 | 19527 | 40437 | 31573 | 32763 |
| so | 8540 | 36173 | 53012 | 42913 | 43548 |
| sq | 94941 | 371562 | 699210 | 520709 | 522241 |
| sr | 657766 | 2331205 | 6562651 | 5257496 | 5264077 |
| srn | 1171 | 3050 | 6637 | 1752 | 1941 |
| ss | 783 | 2124 | 2382 | 1127 | 1139 |
| st | 982 | 1971 | 2510 | 1689 | 1701 |
| stq | 3648 | 10972 | 29713 | 15919 | 15920 |
| su | 57552 | 122590 | 496201 | 384518 | 384891 |
| sv | 2418380 | 5019466 | 22263222 | 21445193 | 21445441 |
| sw | 75109 | 218219 | 798980 | 688743 | 692052 |
| szl | 56229 | 109496 | 473528 | 129434 | 129479 |
| szy | 4628 | 49166 | 18867 | 2419 | 3187 |
| ta | 157642 | 780711 | 1642095 | 1141032 | 1142372 |
| tay | 2643 | 15831 | 10104 | 1496 | 5312 |
| tcy | 2135 | 9932 | 11073 | 4680 | 4745 |
| te | 83866 | 719826 | 822054 | 619184 | 622092 |
| tet | 1323 | 3797 | 8047 | 4093 | 4095 |
| tg | 108598 | 279635 | 761826 | 330974 | 331423 |
| th | 153075 | 715083 | 1723394 | 1395935 | 1398891 |
| ti | 388 | 987 | 1191 | 325 | 326 |
| tk | 4739 | 23629 | 18964 | 9717 | 9760 |
| tl | 43388 | 150141 | 447293 | 296084 | 296634 |
| tn | 1090 | 3960 | 3976 | 2008 | 2010 |
| to | 1512 | 2754 | 3542 | 2029 | 2080 |
| tpi | 1278 | 2055 | 3897 | 2193 | 2198 |
| tr | 500435 | 1806253 | 4476004 | 3964449 | 3965589 |
| trv | 1770 | 16650 | 3814 | 504 | 969 |
| ts | 674 | 1798 | 1557 | 903 | 909 |
| tt | 484761 | 1196573 | 2064576 | 1675637 | 1676579 |
| tum | 16778 | 31383 | 57382 | 28399 | 37107 |
| tw | 3568 | 16807 | 15312 | 10912 | 11495 |
| ty | 1175 | 1364 | 1563 | 1095 | 1095 |
| tyv | 3399 | 21968 | 21004 | 5535 | 5557 |
| udm | 5066 | 11432 | 24875 | 17709 | 17715 |
| ug | 8102 | 58982 | 23654 | 12671 | 12874 |
| uk | 522709 | 2867475 | 6800045 | 6445628 | 6451294 |
| ur | 194948 | 676227 | 1870488 | 910419 | 914840 |
| uz | 232879 | 859793 | 1344790 | 1073065 | 1084092 |
| ve | 764 | 1359 | 2524 | 2366 | 2366 |
| vec | 62729 | 98987 | 275972 | 194424 | 194447 |
| vep | 6853 | 43014 | 93864 | 39225 | 39228 |
| vi | 1300753 | 4103594 | 10852870 | 6884928 | 6892519 |
| vls | 7272 | 26374 | 61885 | 49639 | 49653 |
| vo | 32133 | 78015 | 125495 | 101612 | 101629 |
| wa | 11104 | 56305 | 116752 | 79686 | 80037 |
| war | 1158901 | 1342594 | 6654010 | 6009636 | 6009641 |
| wo | 1659 | 7693 | 10828 | 4057 | 4103 |
| wuu | 37170 | 58227 | 121928 | 82184 | 82237 |
| xal | 2008 | 4309 | 4582 | 2112 | 2113 |
| xh | 1502 | 4448 | 6733 | 2128 | 2186 |
| xmf | 19201 | 49944 | 179291 | 21189 | 22041 |
| yi | 14164 | 68937 | 172645 | 116102 | 116325 |
| yo | 29938 | 52231 | 85171 | 46928 | 47346 |
| za | 2388 | 3917 | 7463 | 4613 | 4665 |
| zea | 5445 | 16648 | 36161 | 23532 | 23578 |
| zh | 1310818 | 5501834 | 16397675 | 14380752 | 14421795 |
| zh_classical | 11775 | 44053 | 140340 | 71576 | 71692 |
| zh_min_nan | 425676 | 853753 | 2627115 | 2053956 | 2054838 |
| zh_yue | 121401 | 273459 | 844047 | 683130 | 683226 |
| zu | 10387 | 18211 | 22569 | 20193 | 20238 |
#### Validation
| | Articles | Paragraphs | Anchors | Anchors with QIDs | Anchors with PageIDs |
| :-- | --: | --: | --: | --: | --: |
| ab | 475 | 601 | 1061 | 399 | 399 |
| ace | 2443 | 2668 | 5197 | 2583 | 2587 |
| ady | 142 | 183 | 248 | 150 | 151 |
| af | 27383 | 44157 | 109108 | 100078 | 100123 |
| als | 11998 | 18277 | 44634 | 32874 | 32874 |
| alt | 481 | 827 | 1020 | 621 | 621 |
| am | 3746 | 5234 | 10111 | 5731 | 5756 |
| ami | 749 | 1431 | 744 | 179 | 304 |
| an | 10526 | 13588 | 74808 | 58195 | 58259 |
| ang | 826 | 1099 | 2647 | 1099 | 1102 |
| anp | 504 | 751 | 1698 | 437 | 581 |
| ar | 265368 | 401215 | 1295968 | 1249666 | 1250103 |
| arc | 377 | 418 | 1061 | 610 | 617 |
| ary | 1447 | 1870 | 5702 | 3885 | 3887 |
| arz | 367206 | 410487 | 876531 | 767742 | 768942 |
| as | 5463 | 8589 | 13953 | 7719 | 7732 |
| ast | 48345 | 97904 | 329690 | 197832 | 198042 |
| atj | 399 | 440 | 774 | 406 | 416 |
| av | 719 | 961 | 1918 | 1043 | 1053 |
| avk | 8056 | 9538 | 11816 | 3633 | 3772 |
| awa | 515 | 645 | 721 | 213 | 287 |
| ay | 1391 | 1653 | 2616 | 1481 | 1483 |
| az | 57070 | 88136 | 177151 | 155596 | 155858 |
| azb | 57642 | 64997 | 137053 | 83336 | 83778 |
| ba | 25690 | 43460 | 69052 | 61624 | 61666 |
| ban | 4053 | 4840 | 9581 | 7374 | 7385 |
| bar | 6905 | 9377 | 20546 | 12164 | 12164 |
| bat_smg | 4149 | 4706 | 8787 | 5820 | 5823 |
| bcl | 3355 | 5058 | 8759 | 5080 | 5083 |
| be | 64203 | 91174 | 276525 | 244114 | 244122 |
| bg | 98148 | 148234 | 438687 | 400356 | 401330 |
| bh | 1535 | 1891 | 3464 | 2630 | 2635 |
| bi | 154 | 159 | 251 | 151 | 151 |
| bjn | 1764 | 2166 | 6458 | 3694 | 3775 |
| blk | 887 | 1374 | 1538 | 821 | 839 |
| bm | 196 | 272 | 317 | 146 | 146 |
| bn | 50495 | 81841 | 169097 | 128508 | 128609 |
| bo | 2198 | 4079 | 934 | 746 | 752 |
| bpy | 10057 | 12879 | 18710 | 9693 | 9693 |
| br | 18687 | 23734 | 73278 | 59024 | 59056 |
| bs | 28533 | 42574 | 138483 | 107760 | 107846 |
| bug | 1636 | 1655 | 6141 | 1682 | 1731 |
| bxr | 754 | 1003 | 2930 | 1211 | 1211 |
| ca | 251952 | 399403 | 1265187 | 1140208 | 1140359 |
| cbk_zam | 460 | 932 | 1040 | 268 | 272 |
| cdo | 2953 | 3237 | 6938 | 3273 | 3281 |
| ce | 197899 | 234617 | 341843 | 166126 | 166206 |
| ceb | 1221405 | 1324624 | 4218179 | 3742385 | 3773844 |
| ch | 123 | 131 | 239 | 64 | 73 |
| chr | 124 | 134 | 175 | 100 | 100 |
| chy | 67 | 67 | 47 | 42 | 42 |
| ckb | 13511 | 18279 | 48490 | 25365 | 25540 |
| co | 1723 | 2587 | 5286 | 2729 | 2737 |
| cr | 22 | 23 | 22 | 13 | 13 |
| crh | 2978 | 3246 | 11005 | 7899 | 7899 |
| cs | 189136 | 297000 | 1101343 | 974485 | 974505 |
| csb | 1307 | 1533 | 3341 | 1851 | 1851 |
| cu | 250 | 275 | 540 | 229 | 229 |
| cv | 14374 | 17462 | 42486 | 19049 | 19114 |
| cy | 89897 | 110225 | 222476 | 177842 | 178698 |
| da | 87765 | 129990 | 482701 | 427333 | 427374 |
| dag | 2215 | 3237 | 4935 | 1169 | 1498 |
| de | 1120553 | 1788057 | 5831103 | 5607963 | 5607963 |
| din | 149 | 177 | 128 | 15 | 15 |
| diq | 6660 | 7883 | 17684 | 15853 | 15861 |
| dsb | 781 | 1032 | 2476 | 1301 | 1301 |
| dty | 554 | 659 | 861 | 480 | 483 |
| dv | 1227 | 1898 | 870 | 406 | 406 |
| dz | 215 | 303 | 21 | 8 | 8 |
| ee | 203 | 242 | 183 | 66 | 74 |
| el | 99725 | 169395 | 461747 | 344216 | 344456 |
| eml | 4387 | 6114 | 13938 | 8193 | 8214 |
| en | 2503257 | 4516442 | 12185882 | 11974436 | 11975194 |
| eo | 90949 | 123848 | 474727 | 442357 | 442772 |
| es | 701171 | 1209944 | 3752765 | 3514968 | 3522213 |
| et | 80911 | 123354 | 395877 | 319773 | 320587 |
| eu | 104388 | 156552 | 378553 | 337331 | 337944 |
| ext | 804 | 1045 | 2269 | 1344 | 1345 |
| fa | 191532 | 262121 | 688824 | 652200 | 653219 |
| fat | 446 | 709 | 214 | 3 | 97 |
| ff | 361 | 459 | 378 | 222 | 234 |
| fi | 123327 | 184244 | 576163 | 514419 | 514915 |
| fiu_vro | 1738 | 2263 | 4622 | 2623 | 2628 |
| fj | 168 | 213 | 604 | 214 | 214 |
| fo | 2625 | 3398 | 13383 | 10599 | 10617 |
| fr | 954388 | 1695419 | 4847588 | 4738268 | 4740047 |
| frp | 1018 | 1181 | 4089 | 2862 | 2862 |
| frr | 2968 | 3419 | 9609 | 7996 | 8011 |
| fur | 884 | 1168 | 3225 | 1833 | 1839 |
| fy | 15980 | 22974 | 139530 | 108300 | 108337 |
| ga | 10781 | 14493 | 38848 | 32343 | 32352 |
| gag | 440 | 551 | 961 | 465 | 465 |
| gan | 731 | 1045 | 2071 | 1536 | 1537 |
| gcr | 480 | 567 | 297 | 122 | 122 |
| gd | 4393 | 5296 | 15544 | 10458 | 10458 |
| gl | 62030 | 101112 | 407821 | 325854 | 325960 |
| glk | 1383 | 1747 | 3723 | 2435 | 2443 |
| gn | 1164 | 1728 | 4751 | 3521 | 3528 |
| gom | 2106 | 4116 | 1511 | 251 | 251 |
| gor | 2844 | 3082 | 11826 | 7315 | 7411 |
| got | 216 | 245 | 514 | 190 | 190 |
| gpe | 265 | 355 | 93 | 71 | 73 |
| gu | 8437 | 13008 | 50956 | 38242 | 38251 |
| guc | 198 | 279 | 312 | 141 | 162 |
| gur | 369 | 565 | 145 | 25 | 27 |
| guw | 332 | 393 | 827 | 313 | 616 |
| gv | 957 | 1324 | 5652 | 2252 | 2253 |
| ha | 10666 | 16571 | 12853 | 10862 | 10993 |
| hak | 1179 | 1302 | 4628 | 3155 | 3155 |
| haw | 541 | 650 | 1238 | 616 | 618 |
| he | 165541 | 295188 | 1213939 | 1153986 | 1155384 |
| hi | 36229 | 60184 | 108382 | 89102 | 89340 |
| hif | 2107 | 2369 | 5015 | 2648 | 2680 |
| hr | 62673 | 97103 | 354392 | 304964 | 305664 |
| hsb | 3599 | 4379 | 10001 | 7239 | 7240 |
| ht | 14693 | 17294 | 23011 | 18721 | 18928 |
| hu | 125438 | 206546 | 586091 | 523501 | 523814 |
| hy | 113060 | 171415 | 418503 | 298111 | 298292 |
| hyw | 5310 | 9207 | 17616 | 8842 | 9168 |
| ia | 4021 | 4850 | 14972 | 11257 | 11263 |
| id | 158648 | 237793 | 734148 | 627764 | 629525 |
| ie | 2213 | 2523 | 6750 | 5036 | 5046 |
| ig | 7944 | 12354 | 6464 | 3466 | 3493 |
| ik | 100 | 118 | 120 | 64 | 71 |
| ilo | 4096 | 8297 | 14183 | 8609 | 8609 |
| inh | 399 | 494 | 1298 | 626 | 645 |
| io | 8868 | 11368 | 33682 | 28744 | 28748 |
| is | 13573 | 18566 | 62576 | 47263 | 47360 |
| it | 584902 | 968880 | 3050620 | 2902006 | 2903047 |
| iu | 61 | 62 | 48 | 29 | 29 |
| ja | 573457 | 1032568 | 3222875 | 3083301 | 3088604 |
| jam | 249 | 274 | 623 | 399 | 399 |
| jbo | 270 | 321 | 562 | 56 | 56 |
| jv | 13108 | 16457 | 60143 | 42112 | 42148 |
| ka | 53071 | 76961 | 252383 | 46974 | 46975 |
| kaa | 775 | 1071 | 1476 | 669 | 717 |
| kab | 1269 | 1685 | 4050 | 2397 | 2403 |
| kbd | 474 | 663 | 1482 | 537 | 537 |
| kbp | 535 | 656 | 835 | 810 | 811 |
| kcg | 190 | 223 | 311 | 196 | 197 |
| kg | 187 | 213 | 420 | 260 | 260 |
| ki | 273 | 333 | 248 | 169 | 206 |
| kk | 76635 | 99268 | 204324 | 126732 | 127677 |
| kl | 97 | 129 | 162 | 43 | 43 |
| km | 3844 | 9340 | 12192 | 4524 | 4583 |
| kn | 14217 | 29387 | 48402 | 20992 | 21022 |
| ko | 154713 | 239887 | 689906 | 633527 | 634725 |
| koi | 682 | 1010 | 1815 | 1144 | 1144 |
| krc | 423 | 698 | 2022 | 841 | 846 |
| ks | 888 | 1006 | 1692 | 645 | 670 |
| ksh | 918 | 1156 | 2951 | 1053 | 1055 |
| ku | 10060 | 12771 | 29766 | 23050 | 23232 |
| kv | 1105 | 1456 | 3365 | 2787 | 2787 |
| kw | 1820 | 2171 | 5570 | 3076 | 3082 |
| ky | 16655 | 21571 | 31213 | 21712 | 21757 |
| la | 22397 | 26732 | 161732 | 142447 | 142486 |
| lad | 961 | 1286 | 3984 | 2056 | 2056 |
| lb | 15385 | 19667 | 60568 | 46664 | 46730 |
| lbe | 207 | 232 | 488 | 290 | 290 |
| lez | 1184 | 1764 | 3829 | 2760 | 2760 |
| lfn | 1455 | 2435 | 3328 | 1602 | 1604 |
| lg | 1272 | 2650 | 1795 | 239 | 305 |
| li | 4501 | 6650 | 24213 | 15790 | 15826 |
| lij | 1781 | 2607 | 6658 | 3933 | 3933 |
| lld | 17293 | 17539 | 64059 | 49327 | 50864 |
| lmo | 12641 | 14976 | 40217 | 29874 | 29946 |
| ln | 585 | 692 | 1321 | 996 | 997 |
| lo | 1144 | 1680 | 3023 | 991 | 1013 |
| lt | 62652 | 85962 | 300456 | 269264 | 270227 |
| ltg | 289 | 341 | 686 | 285 | 285 |
| lv | 34742 | 48371 | 160433 | 136594 | 136873 |
| mad | 284 | 381 | 439 | 135 | 136 |
| mai | 2184 | 2499 | 5878 | 4209 | 4212 |
| map_bms | 1539 | 1847 | 7486 | 5705 | 5705 |
| mdf | 1086 | 1244 | 2512 | 1077 | 1077 |
| mg | 20361 | 23650 | 36313 | 29821 | 29974 |
| mhr | 2863 | 3594 | 6538 | 4114 | 4122 |
| mi | 1078 | 1154 | 3214 | 2743 | 2776 |
| min | 42987 | 46277 | 143692 | 55809 | 56077 |
| mk | 46235 | 76890 | 219310 | 180884 | 181042 |
| ml | 31116 | 46345 | 88976 | 53726 | 53818 |
| mn | 8485 | 13887 | 32271 | 15330 | 15455 |
| mni | 1843 | 2102 | 3418 | 2183 | 2325 |
| mnw | 1284 | 3750 | 897 | 202 | 224 |
| mr | 26803 | 36202 | 70510 | 43103 | 44352 |
| mrj | 2062 | 2297 | 5627 | 2888 | 2888 |
| ms | 75473 | 110077 | 270064 | 215280 | 215811 |
| mt | 2516 | 5510 | 11680 | 5760 | 5761 |
| mwl | 1828 | 4316 | 15365 | 3216 | 3287 |
| my | 24005 | 37165 | 49321 | 33223 | 33518 |
| myv | 1732 | 2327 | 4094 | 2923 | 2925 |
| mzn | 3784 | 4409 | 9938 | 5199 | 5205 |
| nah | 1128 | 1314 | 3316 | 1418 | 1556 |
| nap | 2047 | 2473 | 4579 | 2249 | 2249 |
| nds | 20646 | 26845 | 65355 | 34090 | 34094 |
| nds_nl | 2127 | 3063 | 10188 | 5585 | 5587 |
| ne | 6956 | 10087 | 16847 | 13502 | 13536 |
| new | 22645 | 27233 | 50860 | 32165 | 32217 |
| nia | 312 | 430 | 512 | 277 | 329 |
| nl | 490380 | 651743 | 1994062 | 1874588 | 1875259 |
| nn | 44180 | 60918 | 194747 | 153072 | 153140 |
| no | 172653 | 245377 | 779775 | 715618 | 716153 |
| nov | 339 | 410 | 861 | 452 | 452 |
| nqo | 583 | 1037 | 2598 | 704 | 813 |
| nrm | 1318 | 1600 | 4276 | 3734 | 3736 |
| nso | 960 | 1038 | 4242 | 4119 | 4119 |
| nv | 5649 | 6281 | 13652 | 11768 | 11768 |
| ny | 236 | 318 | 392 | 126 | 126 |
| oc | 23067 | 33775 | 115155 | 87980 | 88063 |
| olo | 1273 | 1598 | 2162 | 997 | 998 |
| om | 401 | 830 | 891 | 401 | 412 |
| or | 6261 | 8669 | 16120 | 6752 | 6757 |
| os | 3923 | 4535 | 9130 | 5470 | 5524 |
| pa | 17242 | 24844 | 37813 | 21759 | 21812 |
| pag | 1602 | 4519 | 404 | 300 | 300 |
| pam | 1509 | 1831 | 6019 | 3230 | 3272 |
| pap | 773 | 1376 | 2526 | 2042 | 2056 |
| pcd | 1089 | 1361 | 1803 | 1334 | 1338 |
| pcm | 353 | 542 | 409 | 128 | 139 |
| pdc | 370 | 565 | 839 | 424 | 429 |
| pfl | 1113 | 1500 | 2861 | 1070 | 1070 |
| pi | 578 | 682 | 881 | 26 | 26 |
| pih | 118 | 125 | 317 | 217 | 218 |
| pl | 444095 | 621669 | 2149058 | 2041686 | 2043400 |
| pms | 16530 | 19186 | 41547 | 34783 | 35474 |
| pnb | 21586 | 44654 | 103992 | 58461 | 59380 |
| pnt | 147 | 172 | 389 | 177 | 178 |
| ps | 7566 | 14922 | 8427 | 4108 | 4187 |
| pt | 349931 | 580790 | 1868210 | 1745832 | 1745858 |
| pwn | 103 | 166 | 85 | 31 | 31 |
| qu | 4540 | 5211 | 14781 | 11746 | 11750 |
| rm | 1076 | 3100 | 5539 | 2293 | 2298 |
| rmy | 214 | 235 | 446 | 176 | 184 |
| rn | 125 | 172 | 124 | 53 | 53 |
| ro | 106169 | 168972 | 473512 | 416263 | 416347 |
| roa_rup | 214 | 290 | 458 | 254 | 254 |
| roa_tara | 1278 | 1979 | 4455 | 1534 | 1534 |
| ru | 806592 | 1369860 | 3416036 | 3245837 | 3247963 |
| rue | 2022 | 2513 | 7023 | 5064 | 5066 |
| rw | 2577 | 3925 | 4139 | 2223 | 2349 |
| sa | 4344 | 8607 | 11313 | 4249 | 4391 |
| sah | 4729 | 8472 | 9040 | 6623 | 6660 |
| sat | 3485 | 4960 | 6473 | 3225 | 3278 |
| sc | 1900 | 2807 | 7641 | 5096 | 5098 |
| scn | 4263 | 5604 | 14333 | 11167 | 11171 |
| sco | 7382 | 9639 | 33771 | 16432 | 16453 |
| sd | 3970 | 5499 | 8879 | 3804 | 3925 |
| se | 982 | 1149 | 2841 | 1958 | 1958 |
| sg | 67 | 72 | 36 | 24 | 24 |
| sh | 103283 | 135121 | 484459 | 429555 | 429770 |
| shi | 477 | 679 | 1144 | 545 | 570 |
| shn | 3633 | 5630 | 5456 | 3627 | 3639 |
| si | 7672 | 14760 | 16443 | 6215 | 6346 |
| simple | 52503 | 68765 | 224811 | 187586 | 187598 |
| sk | 67520 | 93957 | 317232 | 272711 | 272779 |
| skr | 2090 | 6926 | 4136 | 1683 | 2359 |
| sl | 55621 | 89740 | 285769 | 228421 | 228530 |
| sm | 153 | 171 | 485 | 297 | 297 |
| smn | 1163 | 1420 | 4517 | 2681 | 2688 |
| sn | 1896 | 2139 | 4351 | 3384 | 3529 |
| so | 2358 | 4032 | 6064 | 5027 | 5083 |
| sq | 25223 | 41621 | 79295 | 59156 | 59350 |
| sr | 177997 | 258455 | 728755 | 584663 | 585394 |
| srn | 281 | 342 | 796 | 205 | 225 |
| ss | 188 | 259 | 265 | 125 | 125 |
| st | 157 | 198 | 248 | 164 | 166 |
| stq | 804 | 1162 | 3150 | 1816 | 1816 |
| su | 10348 | 13687 | 55055 | 42915 | 42944 |
| sv | 467467 | 558522 | 2473790 | 2382576 | 2382608 |
| sw | 18014 | 24348 | 90302 | 77817 | 78145 |
| szl | 11292 | 12173 | 52459 | 14419 | 14424 |
| szy | 2391 | 5418 | 2042 | 235 | 285 |
| ta | 59923 | 87114 | 183399 | 126977 | 127148 |
| tay | 1192 | 1757 | 1101 | 175 | 591 |
| tcy | 769 | 1077 | 1089 | 464 | 465 |
| te | 43790 | 79667 | 91327 | 69148 | 69484 |
| tet | 294 | 412 | 871 | 471 | 471 |
| tg | 27060 | 31599 | 86180 | 37522 | 37561 |
| th | 49169 | 78814 | 189768 | 154097 | 154453 |
| ti | 87 | 99 | 89 | 22 | 22 |
| tk | 1328 | 2612 | 2116 | 1056 | 1062 |
| tl | 11731 | 16623 | 49726 | 32858 | 32914 |
| tn | 296 | 424 | 477 | 278 | 278 |
| to | 254 | 277 | 393 | 230 | 233 |
| tpi | 180 | 207 | 394 | 216 | 217 |
| tr | 134938 | 200972 | 496960 | 440639 | 440790 |
| trv | 807 | 1814 | 400 | 53 | 98 |
| ts | 155 | 203 | 219 | 132 | 132 |
| tt | 113689 | 132676 | 228544 | 185563 | 185662 |
| tum | 2188 | 3516 | 6442 | 3105 | 4083 |
| tw | 1249 | 1885 | 1729 | 1217 | 1291 |
| ty | 162 | 167 | 215 | 143 | 143 |
| tyv | 1494 | 2486 | 2342 | 611 | 617 |
| udm | 1036 | 1240 | 2781 | 1957 | 1957 |
| ug | 2629 | 6556 | 2657 | 1479 | 1493 |
| uk | 203057 | 318240 | 758049 | 718278 | 718908 |
| ur | 54784 | 75152 | 206169 | 99493 | 100041 |
| uz | 65767 | 95465 | 149763 | 119192 | 120519 |
| ve | 128 | 148 | 256 | 229 | 229 |
| vec | 9463 | 11242 | 32188 | 22525 | 22531 |
| vep | 3225 | 4804 | 10375 | 4295 | 4295 |
| vi | 330763 | 455933 | 1211343 | 768936 | 769829 |
| vls | 2189 | 2904 | 7133 | 5776 | 5777 |
| vo | 7308 | 8647 | 13902 | 11270 | 11273 |
| wa | 4457 | 6269 | 12736 | 8751 | 8794 |
| war | 146537 | 149236 | 738087 | 666983 | 666983 |
| wo | 516 | 864 | 1083 | 404 | 414 |
| wuu | 5530 | 6448 | 13732 | 9168 | 9171 |
| xal | 407 | 449 | 549 | 308 | 308 |
| xh | 399 | 550 | 804 | 284 | 293 |
| xmf | 4516 | 5414 | 19437 | 2342 | 2447 |
| yi | 5260 | 7563 | 18821 | 12493 | 12510 |
| yo | 4431 | 5855 | 9761 | 5361 | 5410 |
| za | 335 | 414 | 777 | 457 | 458 |
| zea | 1470 | 1847 | 3682 | 2569 | 2574 |
| zh | 389361 | 611537 | 1817382 | 1592929 | 1597686 |
| zh_classical | 3601 | 4995 | 15834 | 8157 | 8170 |
| zh_min_nan | 87849 | 94529 | 291330 | 227978 | 228083 |
| zh_yue | 23579 | 30146 | 92720 | 75081 | 75096 |
| zu | 1646 | 2050 | 2518 | 2228 | 2234 |
**NOTE:** The article counts in the tables above refer to the number of articles that have at least one paragraph appearing in the given split.
## Additional Information
### Licensing Information
The WikiAnc dataset is given under the [Creative Commons Attribution ShareAlike 4.0 International](https://creativecommons.org/licenses/by-sa/4.0/) license."
KETI-AIR/kor_duorc,"{""language"": [""ko""], ""license"": [""mit""], ""multilinguality"": [""monolingual""], ""size_categories"": [""10K(Male/Female/Unidentified) |
|:---:|:---:|:---:|:---:|:---:|
| ar-SA | validation | 2033 | 2.12 | 36 (22/14/0) |
| | test | 2974 | 3.23 | 37 (15/17/5) |
| | train_115 | 115 | 0.14 | 8 (4/4/0) |
| de-DE | validation | 2033 | 2.33 | 68 (35/32/1) |
| | test | 2974 | 3.41 | 82 (36/36/10) |
| | train | 11514 | 12.61 | 117 (50/63/4) |
| | train_115 | 115 | 0.15 | 7 (3/4/0) |
| es-ES | validation | 2033 | 2.53 | 109 (51/53/5) |
| | test | 2974 | 3.61 | 85 (37/33/15) |
| | train_115 | 115 | 0.13 | 7 (3/4/0) |
| fr-FR | validation | 2033 | 2.20 | 55 (26/26/3) |
| | test | 2974 | 2.65 | 75 (31/35/9) |
| | train | 11514 | 12.42 | 103 (50/52/1) |
| | train_115 | 115 | 0.12 | 103 (50/52/1) |
| hu-HU | validation | 2033 | 2.27 | 69 (33/33/3) |
| | test | 2974 | 3.30 | 55 (25/24/6) |
| | train_115 | 115 | 0.12 | 8 (3/4/1) |
| ko-KR | validation | 2033 | 2.12 | 21 (8/13/0) |
| | test | 2974 | 2.66 | 31 (10/18/3) |
| | train_115 | 115 | 0.14 | 8 (4/4/0) |
| nl-NL | validation | 2033 | 2.14 | 37 (17/19/1) |
| | test | 2974 | 3.30 | 100 (48/49/3) |
| | train_115 | 115 | 0.12 | 7 (3/4/0) |
| pl-PL | validation | 2033 | 2.24 | 105 (50/52/3) |
| | test | 2974 | 3.21 | 151 (73/71/7) |
| | train_115 | 115 | 0.10 | 7 (3/4/0) |
| pt-PT | validation | 2033 | 2.20 | 107 (51/53/3) |
| | test | 2974 | 3.25 | 102 (48/50/4) |
| | train_115 | 115 | 0.12 | 8 (4/4/0) |
| ru-RU | validation | 2033 | 2.25 | 40 (7/31/2) |
| | test | 2974 | 3.44 | 51 (25/23/3) |
| | train_115 | 115 | 0.12 | 7 (3/4/0) |
| tr-TR | validation | 2033 | 2.17 | 71 (36/34/1) |
| | test | 2974 | 3.00 | 42 (17/18/7) |
| | train_115 | 115 | 0.11 | 6 (3/3/0) |
| vi-VN | validation | 2033 | 2.10 | 28 (13/14/1) |
| | test | 2974 | 3.23 | 30 (11/14/5) |
|| train_115 | 115 | 0.11 | 7 (2/4/1) |
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the French config, simply specify the corresponding language config name (i.e., ""fr-FR"" for French):
```python
from datasets import load_dataset
speech_massive_fr_train = load_dataset(""FBK-MT/Speech-MASSIVE"", ""fr-FR"", split=""train"")
```
In case you don't have enough space on your machine, you can stream the dataset by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
speech_massive_de_train = load_dataset(""FBK-MT/Speech-MASSIVE"", ""de-DE"", split=""train"", streaming=True)
list(speech_massive_de_train.take(2))
```
You can also load all the available languages and splits at once, and then access each split.
```python
from datasets import load_dataset
speech_massive = load_dataset(""FBK-MT/Speech-MASSIVE"", ""all"")
multilingual_validation = speech_massive['validation']
```
Or you can load all of the dataset's splits for a single language, to keep languages separate more easily.
```python
from datasets import load_dataset, interleave_datasets, concatenate_datasets
# creating full train set by interleaving between German and French
speech_massive_de = load_dataset(""FBK-MT/Speech-MASSIVE"", ""de-DE"")
speech_massive_fr = load_dataset(""FBK-MT/Speech-MASSIVE"", ""fr-FR"")
speech_massive_train_de_fr = interleave_datasets([speech_massive_de['train'], speech_massive_fr['train']])
# creating train_115 few-shot set by concatenating Korean and Russian
speech_massive_ko = load_dataset(""FBK-MT/Speech-MASSIVE"", ""ko-KR"")
speech_massive_ru = load_dataset(""FBK-MT/Speech-MASSIVE"", ""ru-RU"")
speech_massive_train_115_ko_ru = concatenate_datasets([speech_massive_ko['train_115'], speech_massive_ru['train_115']])
```
## Dataset Structure
### Data configs
- `all`: load all the 12 languages in one single dataset instance
- `lang`: load only `lang` in the dataset instance, by specifying one of below languages
- ```ar-SA, de-DE, es-ES, fr-FR, hu-HU, ko-KR, nl-NL, pl-PL, pt-PT, ru-RU, tr-TR, vi-VN```
### Data Splits
- `validation`: validation(dev) split available for all the 12 languages
- `train_115`: few-shot (115 samples) split available for all the 12 languages
- `train`: train split available for French (fr-FR) and German (de-DE)
> [!WARNING]
> `test` split is uploaded as a separate dataset on HF to prevent possible data contamination
- ⚠️ `test`: available **_only_** in the separate HF dataset repository. ⚠️
- [https://huggingface.co/datasets/FBK-MT/Speech-MASSIVE-test](https://huggingface.co/datasets/FBK-MT/Speech-MASSIVE-test)
### Data Instances
```json
{
// Start of the data collected in Speech-MASSIVE
'audio': {
'path': 'train/2b12a21ca64a729ccdabbde76a8f8d56.wav',
'array': array([-7.80913979e-...7259e-03]),
'sampling_rate': 16000},
'path': '/path/to/wav/file.wav',
'is_transcript_reported': False,
'is_validated': True,
'speaker_id': '60fcc09cb546eee814672f44',
'speaker_sex': 'Female',
'speaker_age': '25',
'speaker_ethnicity_simple': 'White',
'speaker_country_of_birth': 'France',
'speaker_country_of_residence': 'Ireland',
'speaker_nationality': 'France',
'speaker_first_language': 'French',
// End of the data collected in Speech-MASSIVE
// Start of the data extracted from MASSIVE
// (https://huggingface.co/datasets/AmazonScience/massive/blob/main/README.md#data-instances)
'id': '7509',
'locale': 'fr-FR',
'partition': 'train',
'scenario': 2,
'scenario_str': 'calendar',
'intent_idx': 32,
'intent_str': 'calendar_query',
'utt': 'après les cours de natation quoi d autre sur mon calendrier mardi',
'annot_utt': 'après les cours de natation quoi d autre sur mon calendrier [date : mardi]',
'worker_id': '22',
'slot_method': {'slot': ['date'], 'method': ['translation']},
'judgments': {
'worker_id': ['22', '19', '0'],
'intent_score': [1, 2, 1],
'slots_score': [1, 1, 1],
'grammar_score': [4, 4, 4],
'spelling_score': [2, 1, 2],
'language_identification': ['target', 'target', 'target']
},
'tokens': ['après', 'les', 'cours', 'de', 'natation', 'quoi', 'd', 'autre', 'sur', 'mon', 'calendrier', 'mardi'],
'labels': ['Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'date'],
// End of the data extracted from MASSIVE
}
```
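As a worked example of the annotation format, the bracketed `annot_utt` above can be expanded into the parallel `tokens`/`labels` lists with a sketch like the following (the parsing function is illustrative, not part of the dataset tooling):

```python
import re

def annot_to_labels(annot_utt):
    # Expand MASSIVE-style bracket annotations, e.g. '[date : mardi]',
    # into parallel token and slot-label lists ('Other' = no slot).
    tokens, labels = [], []
    pattern = r'\[\s*([^:\]]+?)\s*:\s*([^\]]+?)\s*\]|(\S+)'
    for slot, words, word in re.findall(pattern, annot_utt):
        if word:  # plain, unannotated token
            tokens.append(word)
            labels.append('Other')
        else:     # every word inside the brackets carries the slot label
            for w in words.split():
                tokens.append(w)
                labels.append(slot)
    return tokens, labels

tokens, labels = annot_to_labels(
    'après les cours de natation quoi d autre sur mon calendrier [date : mardi]'
)
```

Applied to the instance above, this reproduces its `tokens` and `labels` fields.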
### Data Fields
`audio.path`: Original audio file name
`audio.array`: Decoded audio samples, read at a sampling rate of 16,000 Hz
`audio.sampling_rate`: Sampling rate
`path`: Original audio file full path
`is_transcript_reported`: Whether the transcript was reported as 'syntactically wrong' by a crowd-source worker
`is_validated`: Whether the recorded audio has been validated by a crowd-source worker to check that it matches the transcript exactly
`speaker_id`: Unique hash id of the crowd-source speaker
`speaker_sex`: Speaker's sex information provided by the crowd-source platform ([Prolific](http://prolific.com))
- Male
- Female
- Unidentified : Information not available from Prolific
`speaker_age`: Speaker's age information provided by Prolific
- age value (`str`)
- Unidentified : Information not available from Prolific
`speaker_ethnicity_simple`: Speaker's ethnicity information provided by Prolific
- ethnicity value (`str`)
- Unidentified : Information not available from Prolific
`speaker_country_of_birth`: Speaker's country of birth information provided by Prolific
- country value (`str`)
- Unidentified : Information not available from Prolific
`speaker_country_of_residence`: Speaker's country of residence information provided by Prolific
- country value (`str`)
- Unidentified : Information not available from Prolific
`speaker_nationality`: Speaker's nationality information provided by Prolific
- nationality value (`str`)
- Unidentified : Information not available from Prolific
`speaker_first_language`: Speaker's first language information provided by Prolific
- language value (`str`)
- Unidentified : Information not available from Prolific
### Limitations
As Speech-MASSIVE is constructed based on the MASSIVE dataset, it inherently retains certain grammatical errors present in the original MASSIVE text. Correcting these errors was outside the scope of our project. However, by providing the `is_transcript_reported` attribute in Speech-MASSIVE, we enable users of the dataset to be aware of these errors.
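Given these flags, a common preprocessing step is to keep only clean samples. A minimal sketch (the sample dicts are illustrative and carry just the relevant fields):

```python
def keep_clean(samples):
    # Keep samples that were human-validated and whose transcript
    # was not reported as syntactically wrong.
    return [s for s in samples
            if s['is_validated'] and not s['is_transcript_reported']]

samples = [
    {'id': '7509', 'is_validated': True,  'is_transcript_reported': False},
    {'id': '7510', 'is_validated': True,  'is_transcript_reported': True},
    {'id': '7511', 'is_validated': False, 'is_transcript_reported': False},
]
clean = keep_clean(samples)
```

With the `datasets` library, the equivalent is `dataset.filter(lambda s: s['is_validated'] and not s['is_transcript_reported'])`.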
## License
All datasets are licensed under the [CC-BY-NC-SA-4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).
### Citation Information
Speech-MASSIVE is accepted at INTERSPEECH 2024 (Kos, Greece).
You can access the [Speech-MASSIVE paper on arXiv](https://arxiv.org/abs/2408.03900).
Please cite the paper when referencing the Speech-MASSIVE corpus as:
```
@misc{lee2024speechmassivemultilingualspeechdataset,
title={Speech-MASSIVE: A Multilingual Speech Dataset for SLU and Beyond},
author={Beomseok Lee and Ioan Calapodescu and Marco Gaido and Matteo Negri and Laurent Besacier},
year={2024},
eprint={2408.03900},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2408.03900},
}
```"
facebook/2M-Belebele,"{""license"": ""cc-by-sa-4.0"", ""task_categories"": [""question-answering"", ""automatic-speech-recognition""], ""language"": [""bg"", ""pa"", ""en"", ""hu"", ""sv"", ""af"", ""ca"", ""ka"", ""sk"", ""jv"", ""bn"", ""tr"", ""sr"", ""ro"", ""tg"", ""fa"", ""wo"", ""fi"", ""hy"", ""vi"", ""kea"", ""as"", ""ja"", ""nl"", ""ne"", ""lg"", ""hi"", ""xh"", ""kk"", ""mn"", ""yo"", ""km"", ""ha"", ""ru"", ""sw"", ""ps"", ""ko"", ""cs"", ""lv"", ""ig"", ""ar"", ""es"", ""nb"", ""lt"", ""fil"", ""it"", ""he"", ""da"", ""ml"", ""my"", ""el"", ""et"", ""pl"", ""sn"", ""sd"", ""or"", ""th"", ""luo"", ""sl"", ""fr"", ""id"", ""ta"", ""gu"", ""mk"", ""am"", ""pt"", ""cmn"", ""de"", ""ceb"", ""is"", ""ur"", ""az"", ""te""], ""tags"": [""speech-recognition"", ""multilingual"", ""flores200"", ""translation"", ""audio"", ""speech""], ""pretty_name"": ""2M Belebele Speech"", ""size_categories"": [""1Kin multiple languages** to be the **Vript_Multilingual**.
**New in Vript_Multilingual**:
1. Multilingual: zh (60%), en (17%), de (15%), ja (6%), ko (2%), ru (<1%), es (<1%), pt (<1%), jv (<1%), fr (<1%), id (<1%), vi (<1%)
2. More diverse and fine-grained categories: 113 categories (please check [vript_CN-V2_meta.json](https://huggingface.co/datasets/Mutonix/Vript_Multilingual/blob/main/vript_CN-V2_meta.jsonl))
3. Wider range: from 2011-01 to 2024-06
4. Higher resolution: 1080p
5. Longer duration: > 10 minutes on average
6. More clips: ~677k clips
## Getting Started
**By downloading these datasets, you agree to the terms of the [License](#License).**
The captions of the videos in the Vript_Multilingual dataset are structured as follows:
```
{
""meta"": {
""video_id"": ""xxx"",
""video_title"": ""..."",
""num_clips"": ...,
""integrity"": true,
},
""data"": {
""xxx-Scene-001"": {
""video_id"": ""xxx"",
""clip_id"": ""xxx-Scene-001"",
""video_title"": ""..."",
""caption"":{
""shot_type"": ""..."",
""camera_movement"": ""..."",
""content"": ""..."",
""scene_title"": ""..."",
},
""voiceover"": [""...""],
},
""xxx-Scene-002"": {
...
}
}
}
```
- `video_id`: The ID of the video from YouTube.
- `video_title`: The title of the video.
- `num_clips`: The number of clips in the video. If the `integrity` is `false`, some clips may not be captioned.
- `integrity`: Whether all clips of the video are captioned.
- `clip_id`: The ID of the clip in the video, which is the concatenation of the `video_id` and the scene number.
- `caption`: The caption of the scene, including the shot type, camera movement, content, and scene title.
- `voiceover`: The transcription of the voice-over in the scene.
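Given this structure, a minimal sketch for checking whether a video's captions are complete (the record below is an illustrative stand-in with field values abbreviated):

```python
record = {
    'meta': {'video_id': 'xxx', 'video_title': '...', 'num_clips': 2,
             'integrity': True},
    'data': {
        'xxx-Scene-001': {'clip_id': 'xxx-Scene-001',
                          'caption': {'content': '...'}, 'voiceover': ['...']},
        'xxx-Scene-002': {'clip_id': 'xxx-Scene-002',
                          'caption': {'content': '...'}, 'voiceover': []},
    },
}

def is_complete(record):
    # 'integrity' is true only when all num_clips clips are captioned,
    # i.e. every clip appears as a key under 'data'.
    return len(record['data']) == record['meta']['num_clips']

complete = is_complete(record)
```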
The data is organized as follows:
```
Vript_Multilingual/
|
├── vript_CN-V2_meta.json
│
├── vript_CN-V2_captions/
│ ├── vript_CN-V2_captions.zip
│ └── vript_CN-V2_captions.jsonl
│
├── vript_CN-V2_videos/
│ ├── CN-V2_video_1_of_224.zip
│ │ ├── xxx.mp4
│ │ └── ...
│ ├── CN-V2_video_2_of_224.zip
│ └── ...
│
└── vript_CN-V2_clips/
├── CN-V2_clips_1_of_224.zip
│ ├── xxx/
│ │ ├── xxx_cut_meta.json
│ │ ├── xxx_asr.jsonl
│ │ ├── xxx-Scene-001.mp4
│ │ └── ...
│ └── ...
├── CN-V2_clips_2_of_224.zip
└── ...
```
- `vript_CN-V2_meta.json`: The meta information of the videos in the Vript_Multilingual dataset, including the video id, title, url, description, category, etc.
- `vript_CN-V2_captions/`: The video captions of the videos in the Vript_Multilingual dataset, which are structured as described above.
- `vript_CN-V2_videos/` (711 GB): The untrimmed videos in the Vript_Multilingual dataset. We divide the whole data into multiple zip files, each containing 200 videos.
- `vript_CN-V2_clips/` (890 GB): The trimmed video clips in the Vript_Multilingual dataset, which correspond to scenes in the `vript_CN-V2_captions`.
- `xxx_cut_meta.json`: The meta information about how the video is trimmed, including the start time, end time, and the duration of the scene.
- `xxx_asr.jsonl`: The transcription of the voice-over in the scene.
## License
By downloading or using the data or model, you understand, acknowledge, and agree to all the terms in the following agreement.
- ACADEMIC USE ONLY
Any content from the Vript-related dataset and Vriptor model is available for academic research purposes only. You agree not to reproduce, duplicate, copy, trade, or exploit it for any commercial purposes.
- NO DISTRIBUTION
Respect the privacy of personal information in the original source. Without the permission of the copyright owner, you are not allowed to perform any form of broadcasting, modification, or any other similar behavior on the dataset content.
- RESTRICTION AND LIMITATION OF LIABILITY
In no event shall we be liable for any other damages whatsoever arising out of the use of, or inability to use this dataset and its associated software, even if we have been advised of the possibility of such damages.
- DISCLAIMER
You are solely responsible for legal liability arising from your improper use of the dataset content. We reserve the right to terminate your access to the dataset at any time. You should delete the Vript-related dataset or Vriptor model if required.
This license is modified from the [HD-VG-100M](https://github.com/daooshee/HD-VG-130M) license.
## Citation
```
@misc{yang2024vript,
title={Vript: A Video Is Worth Thousands of Words},
author={Dongjie Yang and Suyuan Huang and Chengqiang Lu and Xiaodong Han and Haoxin Zhang and Yan Gao and Yao Hu and Hai Zhao},
year={2024},
eprint={2406.06040},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Contact
**Dongjie Yang**: [djyang.tony@sjtu.edu.cn](mailto:djyang.tony@sjtu.edu.cn)
Paper: arxiv.org/abs/2406.06040"
sentence-transformers/mldr,"{""multilinguality"": [""monolingual""], ""size_categories"": [""100K 1M | 9 | images > 1M | 6
| Image-Text | # Lang | Uniq. Images | # Lang |
|:---:|:---:|:---:|:---:|
| total > 500K | 10 | images > 500K | 12 |
| total > 100K | 36 | images > 100K | 35 |
| total > 50K | 15 | images > 50K | 17 |
| total > 14K | 38 | images > 13K | 38 |
## Dataset Structure
### Data Instances
```
{
'language': 'en',
'page_url': 'https://en.wikipedia.org/wiki/Oxydactylus',
'image_url': 'https://upload.wikimedia.org/wikipedia/commons/5/5f/Oxydactylus_longipes_fm.jpg',
'page_title': 'Oxydactylus',
'section_title': None,
'hierarchical_section_title': 'Oxydactylus',
'caption_reference_description': None,
'caption_attribution_description': 'English: Mounted skeleton of Oxydactylus longipes in the Field Museum of Natural History.',
'caption_alt_text_description': None,
'mime_type': 'image/jpeg',
'original_height': 3564,
'original_width': 2748,
'is_main_image': True,
'attribution_passes_lang_id': True,
'page_changed_recently': True,
'context_page_description': 'Oxydactylus is an extinct genus of camelid endemic to North America. It lived from the Late Oligocene to the Middle Miocene, existing for approximately 14 million years. The name is from the Ancient Greek οξύς and δάκτυλος.\nThey had very long legs and necks, and were probably adapted to eating high vegetation, much like modern giraffes. Unlike modern camelids, they had hooves, rather than tough sole-pads, and splayed toes.',
'context_section_description': 'Oxydactylus is an extinct genus of camelid endemic to North America. It lived from the Late Oligocene to the Middle Miocene (28.4–13.7 mya), existing for approximately 14 million years. The name is from the Ancient Greek οξύς (oxys, ""sharp"")and δάκτυλος (daktylos, ""finger"").\n \nThey had very long legs and necks, and were probably adapted to eating high vegetation, much like modern giraffes. Unlike modern camelids, they had hooves, rather than tough sole-pads, and splayed toes.'
}
```
### Data Fields
- `language`: Language code depicting wikipedia language of the page
- `page_url`: URL to wikipedia page
- `image_url`: URL to wikipedia image
- `page_title`: Wikipedia page's title
- `section_title`: Section's title
- `hierarchical_section_title`: Hierarchical section's title
- `caption_reference_description`: This is the caption that is visible on the wiki page directly below the image.
- `caption_attribution_description`: This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias and thus can be in a language different to the original page article.
- `caption_alt_text_description`: This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers
- `mime_type`: Mime type associated to the image.
- `original_height`: Image height
- `original_width`: Image width
- `is_main_image`: Flag determining if the image is the first image of the page. Usually displayed on the top-right part of the page when using web browsers.
- `attribution_passes_lang_id`: Compared `language` field with the attribution language (written in the prefix of the attribution description).
- `page_changed_recently`: [More Information Needed]
- `context_page_description`: Page description corresponds to the short description of the page. It provides a concise explanation of the scope of the page.
- `context_section_description`: Text within the image's section.
Figure: WIT annotation example.
Details on the field content can be found directly in the [paper, figure 5 and table 12.](https://arxiv.org/abs/2103.01913)
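Since any of the three caption fields can be `None`, a common preprocessing step (a sketch, not part of the dataset itself) is to fall back through them in order of on-page visibility:

```python
def best_caption(example):
    # Any of the three caption fields may be None; fall back through
    # them in order of on-page visibility.
    for key in ('caption_reference_description',
                'caption_attribution_description',
                'caption_alt_text_description'):
        if example.get(key):
            return example[key]
    return None

example = {
    'caption_reference_description': None,
    'caption_attribution_description':
        'English: Mounted skeleton of Oxydactylus longipes.',
    'caption_alt_text_description': None,
}
caption = best_caption(example)
```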
### Data Splits
All data is held in the `train` split, with a total of 37,046,386 rows.
## Dataset Creation
### Curation Rationale
From the [repository](https://github.com/google-research-datasets/wit#motivation):
> Multimodal visio-linguistic models rely on a rich dataset to help them learn to model the relationship between images and texts. Having large image-text datasets can significantly improve performance, as shown by recent works. Furthermore the lack of language coverage in existing datasets (which are mostly only in English) also impedes research in the multilingual multimodal space – we consider this a lost opportunity given the potential shown in leveraging images (as a language-agnostic medium) to help improve our multilingual textual understanding.
>
> To address these challenges and advance research on multilingual, multimodal learning we created the Wikipedia-based Image Text (WIT) Dataset. WIT is created by extracting multiple different texts associated with an image (e.g., as shown in the above image) from Wikipedia articles and Wikimedia image links. This was accompanied by rigorous filtering to only retain high quality image-text sets.
>
> The resulting dataset contains over 37.6 million image-text sets – making WIT the largest multimodal dataset (publicly available at the time of this writing) with unparalleled multilingual coverage – with 12K+ examples in each of 108 languages (53 languages have 100K+ image-text pairs).
### Source Data
#### Initial Data Collection and Normalization
From the [paper, section 3.1](https://arxiv.org/abs/2103.01913):
> We started with all Wikipedia content pages (i.e., ignoring other pages that have discussions, comments and such). These number about ∼124M pages across 279 languages.
#### Who are the source language producers?
Text was extracted from Wikipedia.
### Annotations
#### Annotation process
WIT was constructed using an automatic process. However it was human-validated.
From the [paper, section 3.7](https://arxiv.org/abs/2103.01913):
> To further verify the quality of the WIT dataset we performed a study using (crowd-sourced) human annotators. As seen in Fig. 3, we asked raters to answer 3 questions. Given an image and the page title, raters first evaluate the quality of the attribution description and reference description in the first two questions (order randomized). The third question understands the contextual quality of these text descriptions given the page description and caption. Each response is on a 3-point scale: ""Yes"" if the text perfectly describes the image, ""Maybe"" if it is sufficiently explanatory and ""No"" if it is irrelevant or the image is inappropriate.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the [paper, section 3.4](https://arxiv.org/abs/2103.01913):
> Lastly we found that certain image-text pairs occurred very
frequently. These were often generic images that did not have
much to do with the main article page. Common examples
included flags, logos, maps, insignia and such. To prevent
biasing the data, we heavily under-sampled all such images
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@article{srinivasan2021wit,
title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
journal={arXiv preprint arXiv:2103.01913},
year={2021}
}
```
### Contributions
Thanks to [@thomasw21](https://github.com/thomasw21), [@nateraw](https://github.com/nateraw) and [@hassiahk](https://github.com/hassiahk) for adding this dataset."
msarmi9/korean-english-multitarget-ted-talks-task,"{""annotations_creators"": [""expert-generated""], ""language_creators"": [""other""], ""language"": [""en"", ""ko""], ""language_bcp47"": [""en-US"", ""ko-KR""], ""license"": [""cc-by-nc-nd-4.0""], ""multilinguality"": [""translation"", ""multilingual""], ""pretty_name"": ""English-Korean Multitarget Ted Talks Task (MTTT)"", ""task_categories"": [""conditional-text-generation""], ""task_ids"": [""machine-translation""]}","# Dataset Card for english-korean-multitarget-ted-talks-task
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.cs.jhu.edu/~kevinduh/a/multitarget-tedtalks/
### Dataset Summary
- Parallel English-Korean Text Corpus
- Text was originally transcribed to English from various Ted Talks, then translated to Korean by TED translators
- Approximately 166k train, 2k validation, and 2k test sentence pairs.
### Supported Tasks and Leaderboards
- Machine Translation
### Languages
- English
- Korean
## Additional Information
### Dataset Curators
Kevin Duh, ""The Multitarget TED Talks Task"", http://www.cs.jhu.edu/~kevinduh/a/multitarget-tedtalks/, 2018
### Licensing Information
TED makes its collection available under the Creative Commons BY-NC-ND license. Please acknowledge TED when using this data. We acknowledge the authorship of TED Talks (BY condition). We are not redistributing the transcripts for commercial purposes (NC condition) nor making derivative works of the original contents (ND condition).
### Citation Information
```bibtex
@misc{duh18multitarget,
  author = {Kevin Duh},
  title = {The Multitarget TED Talks Task},
  howpublished = {\url{http://www.cs.jhu.edu/~kevinduh/a/multitarget-tedtalks/}},
  year = {2018},
}
```
"
DAMO-NLP-SG/MultiJail,"{""license"": ""mit"", ""task_categories"": [""conversational""], ""language"": [""en"", ""zh"", ""it"", ""vi"", ""ar"", ""ko"", ""th"", ""bn"", ""sw"", ""jv""], ""size_categories"": [""n<1K""]}","# Multilingual Jailbreak Challenges in Large Language Models
This repo contains the data for our paper [""Multilingual Jailbreak Challenges in Large Language Models""](https://arxiv.org/abs/2310.06474).
[[Github repo]](https://github.com/DAMO-NLP-SG/multilingual-safety-for-LLMs/)
## Annotation Statistics
We collected a total of 315 English unsafe prompts and translated them into nine non-English languages. The languages were categorized based on resource availability, as shown below:
**High-resource languages:** Chinese (zh), Italian (it), Vietnamese (vi)
**Medium-resource languages:** Arabic (ar), Korean (ko), Thai (th)
**Low-resource languages:** Bengali (bn), Swahili (sw), Javanese (jv)
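For scripted filtering or per-tier analysis, the tiers above can be kept as a small lookup table; a minimal sketch (the dict name is illustrative, and the mapping is transcribed from this card rather than shipped with the dataset):

```python
# Lookup of the resource tiers listed above, keyed by the dataset's
# language tags. Transcribed from this card; not a field of the dataset.
RESOURCE_TIER = {
    "zh": "high", "it": "high", "vi": "high",
    "ar": "medium", "ko": "medium", "th": "medium",
    "bn": "low", "sw": "low", "jv": "low",
}

print(RESOURCE_TIER["sw"])  # -> low
```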
## Ethics Statement
Our research investigates the safety challenges of LLMs in multilingual settings. We are aware of the potential misuse of our findings and emphasize that our research is solely for academic purposes and ethical use. Misuse or harm resulting from the information in this paper is strongly discouraged. To address the identified risks and vulnerabilities, we commit to open-sourcing the data used in our study. This openness aims to facilitate vulnerability identification, encourage discussions, and foster collaborative efforts to enhance LLM safety in multilingual contexts. Furthermore, we have developed the SELF-DEFENSE framework to address multilingual jailbreak challenges in LLMs. This framework automatically generates multilingual safety training data to mitigate risks associated with unintentional and intentional jailbreak scenarios. Overall, our work not only highlights multilingual jailbreak challenges in LLMs but also paves the way for future research, collaboration, and innovation to enhance their safety.
## Citation
```
@misc{deng2023multilingual,
title={Multilingual Jailbreak Challenges in Large Language Models},
author={Yue Deng and Wenxuan Zhang and Sinno Jialin Pan and Lidong Bing},
year={2023},
eprint={2310.06474},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```"
sentence-transformers/parallel-sentences-opensubtitles,"{""language"": [""en"", ""multilingual"", ""ar"", ""bg"", ""ca"", ""cs"", ""da"", ""de"", ""el"", ""es"", ""et"", ""fa"", ""fi"", ""fr"", ""gl"", ""he"", ""hi"", ""hr"", ""hu"", ""hy"", ""id"", ""it"", ""ja"", ""ka"", ""ko"", ""lt"", ""lv"", ""mk"", ""ms"", ""nl"", ""pl"", ""pt"", ""ro"", ""ru"", ""sk"", ""sl"", ""sq"", ""sr"", ""sv"", ""th"", ""tr"", ""uk"", ""ur"", ""vi"", ""zh""], ""size_categories"": [""100MBelow is original dataset card
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
🐋 The OpenOrca Dataset! 🐋

We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## OpenOrca-Platypus2-13B
Our [latest release](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
http://AlignmentLab.ai:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx
Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
# Languages
The language of the data is primarily English.
# Dataset Structure
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
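Since the 'id' field encodes the source submix as a prefix, the submix can be recovered with a few string checks; a minimal sketch, assuming ids begin with one of the four tags listed above (the helper name and the sample id are illustrative):

```python
# Hypothetical helper: recover the FLAN submix from an OpenOrca 'id'.
# Assumes every id begins with one of the four submix tags named above.
def submix_of(sample_id):
    for prefix in ("niv", "t0", "cot", "flan"):
        if sample_id.startswith(prefix):
            return prefix
    return "unknown"

print(submix_of("cot.86217"))  # -> cot
```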
## Data Splits
The data is unsplit.
# Dataset Creation
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This ""reasoning trace"" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
# Dataset Use
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and ""Teknium""},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```"
sentence-transformers/miracl,"{""language"": [""en"", ""ar"", ""bn"", ""es"", ""fa"", ""fi"", ""fr"", ""hi"", ""id"", ""ja"", ""ko"", ""ru"", ""sw"", ""te"", ""th"", ""zh""], ""size_categories"": [""1M>> from datasets import load_dataset
>>> dataset = load_dataset(""Bingsu/laion2b_multi_korean_subset_with_image"", streaming=True, split=""train"")
>>> dataset.features
{'image': Image(decode=True, id=None),
'text': Value(dtype='string', id=None),
'width': Value(dtype='int32', id=None),
'height': Value(dtype='int32', id=None)}
>>> next(iter(dataset))
{'image': ,
'text': '소닉기어 에어폰5 휴대용 스테레오 블루투스 헤드폰',
'width': 256,
'height': 256}
```
### 2. webdataset
This dataset is packaged so that it can be used with [webdataset](https://github.com/webdataset/webdataset). If you process the data as a stream instead of downloading it, this is much faster than method 1.
!! The method below raises an error on Windows.
```python
>>> import webdataset as wds
>>> url = ""https://huggingface.co/datasets/Bingsu/laion2b_multi_korean_subset_with_image/resolve/main/data/{00000..02122}.tar""
>>> dataset = wds.WebDataset(url).shuffle(1000).decode(""pil"").to_tuple(""webp"", ""json"")
```
```python
>>> next(iter(dataset))
...
```
As of this writing (22-10-18), automatic decoding of webp images is not supported ([PR #215](https://github.com/webdataset/webdataset/pull/215)), so you have to decode them yourself.
```python
import io
import webdataset as wds
from PIL import Image
def preprocess(data):
webp, jsn = data
img = Image.open(io.BytesIO(webp))
out = {
""image"": img,
""text"": jsn[""caption""],
""width"": jsn[""width""],
""height"": jsn[""height""]
}
return out
url = ""https://huggingface.co/datasets/Bingsu/laion2b_multi_korean_subset_with_image/resolve/main/data/{00000..02122}.tar""
dataset = wds.WebDataset(url).shuffle(1000).decode(""pil"").to_tuple(""webp"", ""json"").map(preprocess)
```
```python
>>> next(iter(dataset))
{'image': ,
'text': '[따블리에]유아동 미술가운, 미술 전신복',
'width': 427,
'height': 256}
```
## Note

Each tar file is structured as shown above.
Because images that failed to download are skipped, the file names are not fully consecutive.
Each json file looks like the following:
```json
{
""caption"": ""\ub514\uc790\uc778 \uc53d\ud0b9\uacfc \ub514\uc9c0\ud138 \ud2b8\ub79c\uc2a4\ud3ec\uba54\uc774\uc158"",
""url"": ""https://image.samsungsds.com/kr/insights/dt1.jpg?queryString=20210915031642"",
""key"": ""014770069"",
""status"": ""success"",
""error_message"": null,
""width"": 649,
""height"": 256,
""original_width"": 760,
""original_height"": 300,
""exif"": ""{}""
}
```
```
Each txt file contains the ""caption"" value of the corresponding json file."
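The `\uXXXX` sequences in the json above are ordinary JSON string escapes, so any JSON parser restores the original Korean text; a small check (the snippet decodes only the first word of the caption shown above):

```python
import json

# The \uXXXX escapes in the json files are plain JSON string escapes;
# json.loads restores the original Hangul characters.
raw = '{"caption": "\\ub514\\uc790\\uc778"}'  # first word of the caption above
record = json.loads(raw)
print(record["caption"])  # -> 디자인 ("design")
```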
phonemetransformers/CHILDES,"{""configs"": [{""config_name"": ""English"", ""default"": true, ""data_files"": ""Eng-NA/processed.csv""}, {""config_name"": ""EnglishUK"", ""data_files"": ""Eng-UK/processed.csv""}, {""config_name"": ""French"", ""data_files"": ""French/processed.csv""}, {""config_name"": ""German"", ""data_files"": ""German/processed.csv""}, {""config_name"": ""Spanish"", ""data_files"": ""Spanish/processed.csv""}, {""config_name"": ""Dutch"", ""data_files"": ""Dutch/processed.csv""}, {""config_name"": ""Mandarin"", ""data_files"": ""Mandarin/processed.csv""}, {""config_name"": ""Japanese"", ""data_files"": ""Japanese/processed.csv""}, {""config_name"": ""Cantonese"", ""data_files"": ""Cantonese/processed.csv""}, {""config_name"": ""Estonian"", ""data_files"": ""Estonian/processed.csv""}, {""config_name"": ""Croatian"", ""data_files"": ""Croatian/processed.csv""}, {""config_name"": ""Danish"", ""data_files"": ""Danish/processed.csv""}, {""config_name"": ""Basque"", ""data_files"": ""Basque/processed.csv""}, {""config_name"": ""Hungarian"", ""data_files"": ""Hungarian/processed.csv""}, {""config_name"": ""Turkish"", ""data_files"": ""Turkish/processed.csv""}, {""config_name"": ""Farsi"", ""data_files"": ""Farsi/processed.csv""}, {""config_name"": ""Icelandic"", ""data_files"": ""Icelandic/processed.csv""}, {""config_name"": ""Indonesian"", ""data_files"": ""Indonesian/processed.csv""}, {""config_name"": ""Irish"", ""data_files"": ""Irish/processed.csv""}, {""config_name"": ""Welsh"", ""data_files"": ""Welsh/processed.csv""}, {""config_name"": ""Korean"", ""data_files"": ""Korean/processed.csv""}, {""config_name"": ""Swedish"", ""data_files"": ""Swedish/processed.csv""}, {""config_name"": ""Norwegian"", ""data_files"": ""Norwegian/processed.csv""}, {""config_name"": ""Quechua"", ""data_files"": ""Quechua/processed.csv""}, {""config_name"": ""Catalan"", ""data_files"": ""Catalan/processed.csv""}, {""config_name"": ""Italian"", ""data_files"": 
""Italian/processed.csv""}, {""config_name"": ""PortuguesePt"", ""data_files"": ""PortuguesePt/processed.csv""}, {""config_name"": ""PortugueseBr"", ""data_files"": ""PortugueseBr/processed.csv""}, {""config_name"": ""Romanian"", ""data_files"": ""Romanian/processed.csv""}, {""config_name"": ""Serbian"", ""data_files"": ""Serbian/processed.csv""}, {""config_name"": ""Polish"", ""data_files"": ""Polish/processed.csv""}], ""language"": [""en"", ""de"", ""fr"", ""es"", ""nl"", ""cmn"", ""ja"", ""yue"", ""et"", ""hr"", ""da"", ""eu"", ""hu"", ""tr"", ""fa"", ""is"", ""id"", ""ga"", ""cy"", ""ko"", ""sv"", ""nb"", ""qu"", ""ca"", ""it"", ""pt"", ""ro"", ""sv"", ""pl""], ""tags"": [""language modeling"", ""cognitive modeling""], ""pretty_name"": ""Phonemized Child Directed Speech"", ""size_categories"": [""100K
### Original Source?
Around 11 months ago, I downloaded and preprocessed 2.7M rows of text data, but completely forgot the original source of these datasets...
All I remember is that I downloaded datasets from everywhere I could: HuggingFace, research papers, GitHub, Kaggle, SurgeAI, and Google search. I even fetched 20K+ tweets using the Twitter API.
Recently, I came across 6 datasets, so I remembered to credit them below.
Known datasets:
- tomekkorbak/pile-toxicity-balanced2 (HuggingFace)
- datasets/thai_toxicity_tweet (HuggingFace)
- datasets/ethos (HuggingFace)
- inspection-ai/japanese-toxic-dataset (GitHub)
- mathigatti/sexting-dataset (GitHub)
- omar-sharif03/BAD-Bangla-Aggressive-Text-Dataset (GitHub)
I manually collected and wrote 100 rows of data.
### Loading the Dataset
To prevent errors like [row count mismatch](https://huggingface.co/datasets/FredZhang7/toxi-text-3M/discussions/5), please add `verification_mode=""no_checks""` when loading the dataset.
```py
from datasets import load_dataset
ds = load_dataset(""FredZhang7/toxi-text-3M"", verification_mode=""no_checks"")
```
### Limitations
Limitations include:
- All labels were rounded to the nearest integer, so a text classified as 46%-54% toxic may not read as clearly toxic or clearly neutral.
- There were disagreements among moderators on some labels, due to ambiguity and lack of context.
- When the ""text"" column contains only URLs, emojis, or anything else unrecognizable as natural language, the corresponding ""lang"" is ""unknown"".
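A quick illustration of the first limitation: rounding to the nearest integer puts near-identical borderline scores on opposite sides of the 0/1 label boundary.

```python
# Two borderline toxicity scores that read almost the same end up
# with opposite hard labels after nearest-integer rounding.
for score in (0.46, 0.54):
    print(score, "->", round(score))  # 0.46 -> 0, 0.54 -> 1
```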
Have fun modelling!"
sentence-transformers/parallel-sentences-jw300,"{""language"": [""en"", ""multilingual"", ""ar"", ""bg"", ""cs"", ""da"", ""de"", ""el"", ""es"", ""et"", ""fa"", ""fi"", ""fr"", ""gu"", ""he"", ""hi"", ""hr"", ""hu"", ""hy"", ""id"", ""it"", ""ja"", ""ka"", ""ko"", ""lt"", ""lv"", ""mk"", ""mn"", ""mr"", ""my"", ""nl"", ""pl"", ""pt"", ""ro"", ""ru"", ""sk"", ""sl"", ""sq"", ""sv"", ""th"", ""tr"", ""uk"", ""ur"", ""vi""], ""size_categories"": [""10M because there are cases where the responses refer to themselves as ""Open Assistant""
Also removed parts of the Stanford university translated data where phrases like ""no input"" were added to the input field by translation-process errors.
Also removed entries where GPT translation errors occurred, such as stray \ characters.
***
For more natural text, the stanford alpaca data and oig_chip2 were re-preprocessed using ChatGPT 3.5 turbo 16k.
https://github.com/JoJo0217/rlhf_korean_dataset/tree/main
A detailed explanation is available at the link above.
The data is composed as follows.
***
Data composition
|Dataset|Count|URL|
|:---|---:|---:|
|koalpaca v1.1|21155|https://github.com/Beomi/KoAlpaca|
|stanford alpaca|51374|https://huggingface.co/datasets/tatsu-lab/alpaca|
|dolly|15009|https://huggingface.co/datasets/nlpai-lab/databricks-dolly-15k-ko|
|openassistant|9651|https://huggingface.co/datasets/nlpai-lab/openassistant-guanaco-ko|
|oig_chip2|10000|https://huggingface.co/datasets/0-hero/OIG-small-chip2|
|Total|107189||"
Muennighoff/xP3x-sample,"{""annotations_creators"": [""expert-generated"", ""crowdsourced""], ""language"": [""af"", ""ar"", ""az"", ""be"", ""bg"", ""bn"", ""br"", ""bs"", ""ca"", ""ch"", ""cs"", ""cv"", ""cy"", ""da"", ""de"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fo"", ""fr"", ""fy"", ""ga"", ""gd"", ""gl"", ""gn"", ""he"", ""hi"", ""hr"", ""hu"", ""hy"", ""ia"", ""id"", ""ie"", ""io"", ""is"", ""it"", ""ja"", ""jv"", ""ka"", ""kk"", ""km"", ""ko"", ""ku"", ""kw"", ""la"", ""lb"", ""lt"", ""lv"", ""mi"", ""mk"", ""ml"", ""mn"", ""mr"", ""ms"", ""mt"", ""my"", ""nb"", ""nl"", ""nn"", ""no"", ""oc"", ""pl"", ""pt"", ""qu"", ""rn"", ""ro"", ""ru"", ""sh"", ""sl"", ""sq"", ""sr"", ""sv"", ""sw"", ""ta"", ""te"", ""th"", ""tk"", ""tl"", ""tr"", ""tt"", ""ug"", ""uk"", ""ur"", ""uz"", ""vi"", ""vo"", ""yi"", ""zh"", ""ace"", ""acm"", ""acq"", ""aeb"", ""af"", ""ajp"", ""ak"", ""als"", ""am"", ""apc"", ""ar"", ""ars"", ""ary"", ""arz"", ""as"", ""ast"", ""awa"", ""ayr"", ""azb"", ""azj"", ""ba"", ""bm"", ""ban"", ""be"", ""bem"", ""bn"", ""bho"", ""bjn"", ""bo"", ""bs"", ""bug"", ""bg"", ""ca"", ""ceb"", ""cs"", ""cjk"", ""ckb"", ""crh"", ""cy"", ""da"", ""de"", ""dik"", ""dyu"", ""dz"", ""el"", ""en"", ""eo"", ""et"", ""eu"", ""ee"", ""fo"", ""fj"", ""fi"", ""fon"", ""fr"", ""fur"", ""fuv"", ""gaz"", ""gd"", ""ga"", ""gl"", ""gn"", ""gu"", ""ht"", ""ha"", ""he"", ""hi"", ""hne"", ""hr"", ""hu"", ""hy"", ""ig"", ""ilo"", ""id"", ""is"", ""it"", ""jv"", ""ja"", ""kab"", ""kac"", ""kam"", ""kn"", ""ks"", ""ka"", ""kk"", ""kbp"", ""kea"", ""khk"", ""km"", ""ki"", ""rw"", ""ky"", ""kmb"", ""kmr"", ""knc"", ""kg"", ""ko"", ""lo"", ""lij"", ""li"", ""ln"", ""lt"", ""lmo"", ""ltg"", ""lb"", ""lua"", ""lg"", ""luo"", ""lus"", ""lvs"", ""mag"", ""mai"", ""ml"", ""mar"", ""min"", ""mk"", ""mt"", ""mni"", ""mos"", ""mi"", ""my"", ""nl"", ""nn"", ""nb"", ""npi"", ""nso"", ""nus"", ""ny"", ""oc"", ""ory"", ""pag"", ""pa"", ""pap"", ""pbt"", ""pes"", 
""plt"", ""pl"", ""pt"", ""prs"", ""quy"", ""ro"", ""rn"", ""ru"", ""sg"", ""sa"", ""sat"", ""scn"", ""shn"", ""si"", ""sk"", ""sl"", ""sm"", ""sn"", ""sd"", ""so"", ""st"", ""es"", ""sc"", ""sr"", ""ss"", ""su"", ""sv"", ""swh"", ""szl"", ""ta"", ""taq"", ""tt"", ""te"", ""tg"", ""tl"", ""th"", ""ti"", ""tpi"", ""tn"", ""ts"", ""tk"", ""tum"", ""tr"", ""tw"", ""tzm"", ""ug"", ""uk"", ""umb"", ""ur"", ""uzn"", ""vec"", ""vi"", ""war"", ""wo"", ""xh"", ""ydd"", ""yo"", ""yue"", ""zh"", ""zsm"", ""zu""], ""programming_language"": [""Java"", ""Python"", ""Jupyter-Notebook""], ""license"": [""apache-2.0""], ""multilinguality"": [""multilingual""], ""pretty_name"": ""xP3x"", ""size_categories"": [""100M
The columns corresponding to annotations collected from our cultural bias study (i.e. 'required_knowledge', 'time_sensitive', 'reference', 'culture', 'region', 'country') contain a list of values representing annotations from different annotators.
However, to avoid conversion issues to HF dataset, these columns are provided as strings in the final dataset.
You can convert these columns back to list of values for easier manipulation as follows:
```python
import ast
# convert string values to list
gmmlu_lite_test['required_knowledge'] = gmmlu_lite_test['required_knowledge'].apply(ast.literal_eval)
```
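As a quick sanity check, `ast.literal_eval` turns one of these stringified annotation columns back into a real Python list:

```python
import ast

# A stringified annotation list, as stored in the dataset,
# parses back into a real Python list.
votes = ast.literal_eval("['regional', 'regional', 'regional', 'regional']")
print(type(votes).__name__, len(votes))  # -> list 4
```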
## Data Fields
The data fields are the same among all splits. Brief description of each field is provided below.
- `sample_id`: A unique identifier for the question.
- `subject`: The main topic the question falls under.
- `subject_category`: The high-level category the subject falls under i.e. STEM/Humanities/Social Sciences/Medical/Business/Other.
- `question`: translated question from MMLU
- `option_a`: one of the possible option choices
- `option_b`: one of the possible option choices
- `option_c`: one of the possible option choices
- `option_d`: one of the possible option choices
- `answer`: the correct answer (A/B/C/D)
- `required_knowledge`: annotator votes for knowledge needed to answer the question correctly. Possible values include: ""cultural"", ""regional"", ""dialect"" or ""none""
- `time_sensitive`: annotator votes indicating if the question's answer is time-dependent. Possible values include: Yes/No
- `reference`: annotations for which part of the question contains cultural/regional/dialect references. The different items in the list are annotations from different annotators.
- `culture`: annotations for which culture does the question belong to. The different items in the list correspond to annotations from different annotators.
- `region`: Geographic region the question is relevant to. Each item in the list corresponds to annotations from a different annotator.
- `country`: Specific country the question pertains to. Each item in the list corresponds to annotations from a different annotator.
- `cultural_sensitivity_label`: Label to indicate if question is culturally sensitive (CS) or culturally agnostic (CA) based on annotator votes.
- `is_annotated`: True/False flag to indicate if sample contains any annotations from our cultural bias study.
## Data Splits
The following are the splits of the data:
| Split | No. of instances | Language Coverage |
|-------|------------------|-------------------|
| test | 6,000 | 15 |
| dev | 4,275 | 15 |
## Data Instances
An example from `test` set looks as follows:
```json
{'sample_id': 'astronomy/test/58',
'subject': 'astronomy',
'subject_category': 'STEM',
'question': 'When traveling north from the United States into Canada you’ll see the North Star (Polaris) getting _________.',
'option_a': 'Brighter',
'option_b': 'Dimmer',
'option_c': 'Higher in the sky',
'option_d': 'Lower in the sky',
'answer': 'C',
'required_knowledge': ""['regional', 'regional', 'regional', 'regional']"",
'time_sensitive': ""['No', 'No', 'No', 'No']"",
'reference': ""[{'end': 55, 'label': 'Geographic', 'score': None, 'start': 5}, {'end': 43, 'label': 'Geographic', 'score': None, 'start': 30}, {'end': 55, 'label': 'Geographic', 'score': None, 'start': 5}, {'end': 43, 'label': 'Geographic', 'score': None, 'start': 30}]"",
'culture': '[]',
'region': ""['North America', 'North America', 'North America', 'North America']"",
'country': ""['United States of America (USA)', 'United States of America (USA)', 'United States of America (USA)', 'United States of America (USA)']"",
'cultural_sensitivity_label': 'CS',
'is_annotated': True
}
```
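The `cultural_sensitivity_label` above is derived from annotator votes; the exact rule is not spelled out on this card, but a hedged majority-vote sketch (the function name and the threshold are illustrative, not necessarily the authors' procedure) could look like:

```python
from collections import Counter

# Hedged sketch: one way a CS/CA label could be derived from the
# 'required_knowledge' votes. Illustrative only, not the authors' exact rule.
def label_from_votes(required_knowledge):
    votes = Counter(required_knowledge)
    cultural = sum(votes[k] for k in ("cultural", "regional", "dialect"))
    return "CS" if cultural > votes["none"] else "CA"

print(label_from_votes(["regional"] * 4))  # -> CS
```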
## Statistics
### Annotation Types
The following is the breakdown of CS🗽, CA⚖️ and MA📝 samples in the final dataset.
| Type of Annotation | Instances per language | No. of languages | Total instances |
|--------------------|------------------------|------------------|-----------------|
| Culturally Sensitive 🗽 | 200 | 15 | 3,000 |
| Culturally Agnostic ⚖️ | 200 | 15 | 3,000 |
| MMLU Annotated 📝 | 400 | 15 | 6,000 |
### Languages
The dataset covers 15 languages. The following is details about the languages included in the dataset.
| ISO Code | Language | Resources |
|----------|----------|-----------|
| `ar` | Arabic (Standard)| High |
| `bn` | Bengali | Mid |
| `de` | German | High |
| `en` | English | High |
| `fr` | French | High |
| `hi` | Hindi | High |
| `id` | Indonesian | Mid |
| `it` | Italian | High |
| `ja` | Japanese | High |
| `ko` | Korean | Mid |
| `pt` | Portuguese | High |
| `es` | Spanish | High |
| `sw` | Swahili | Low |
| `yo` | Yorùbá | Low |
| `zh` | Chinese (Simplified) | High |
# Known Limitations
A brief overview of the limitations of this dataset is provided below.
- **Language and dialect coverage:** Global-MMLU focuses on 42 languages. However, this is still only a tiny fraction of the world’s linguistic diversity. Future work is needed to continue to improve evaluations beyond these 42 languages and take into account how technology serves different dialects.
- **Uneven distribution of contributions:** The dataset contains translation post-edits from community volunteers, with a 'long tail' of volunteers making only one or two contributions. Similarly, there is a huge gap between languages with the highest number of contributions and ones with the lowest number of contributions.
- **Toxic or offensive speech:** Our annotation process did not focus on flagging toxic, harmful, or offensive speech, so it is possible that Global-MMLU contains some data that could be considered harmful. We believe this is of relatively low risk because of the nature of the original MMLU and the focus on examination material.
- **Region Category Assignment:** For the annotation of geographically sensitive questions, we classified regions into six geographic regions (Africa, Asia, Europe, North America, Oceania, and South America). However, based upon discussions, going forward we would recommend switching to the taxonomy proposed by the World Bank, which is more granular and includes separate designations for Central America and Sub-Saharan Africa.
- **Identifying cultural sensitivity does not guarantee cultural inclusion:** While Global-MMLU highlights important limitations in current datasets by identifying gaps in non-Western cultural representation, it does not by itself remedy them. Future work must prioritize the integration of diverse culturally grounded knowledge to achieve true inclusivity and fairness in multilingual AI evaluation.
# Additional Information
## Provenance
- **Methods Used:** Professional annotations as well as crowd-sourced volunteer annotations.
- **Methodology Details:** We collected cultural bias annotations as well as post-edits of translations for different MMLU questions.
- [Cultural Sensitivity Annotation Platform](https://huggingface.co/spaces/CohereForAI/MMLU-evaluation)
- [Translation Quality Annotation Platform](https://huggingface.co/spaces/CohereForAI/review-mmlu-translations)
- Dates of Collection: May 2024 - Aug 2024
## Dataset Version and Maintenance
- **Maintenance Status:** Actively Maintained
- **Version Details:**
- *Current version:* 1.0
- *Last Update:* 12/2024
- *First Release:* 12/2024
## Authorship
- **Publishing Organization:** [Cohere For AI](https://cohere.com/research)
- **Industry Type:** Not-for-profit - Tech
## Licensing Information
This dataset can be used for any purpose, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.
## Continuous Improvement
If you want to contribute to improving the quality of translations in Global-MMLU-Lite then please contribute using our [annotation UI](https://huggingface.co/spaces/CohereForAI/review-global-mmlu-lite).
You can also help review and edit machine translations in additional languages using our annotation interface to help improve language coverage of Global-MMLU-Lite.
## Additional Details
For any additional details, please check our paper, [Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation](https://arxiv.org/abs/2412.03304).
## Citation Information
```bibtex
@misc{singh2024globalmmluunderstandingaddressing,
title={Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation},
author={Shivalika Singh and Angelika Romanou and Clémentine Fourrier and David I. Adelani and Jian Gang Ngui and Daniel Vila-Suero and Peerat Limkonchotiwat and Kelly Marchisio and Wei Qi Leong and Yosephine Susanto and Raymond Ng and Shayne Longpre and Wei-Yin Ko and Madeline Smith and Antoine Bosselut and Alice Oh and Andre F. T. Martins and Leshem Choshen and Daphne Ippolito and Enzo Ferrante and Marzieh Fadaee and Beyza Ermis and Sara Hooker},
year={2024},
eprint={2412.03304},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.03304},
}
```"
gsarti/iwslt2017_context,"{""annotations_creators"": [""crowdsourced""], ""language"": [""ar"", ""de"", ""en"", ""fr"", ""it"", ""ja"", ""ko"", ""nl"", ""ro"", ""zh""], ""language_creators"": [""expert-generated""], ""license"": [""cc-by-nc-nd-4.0""], ""multilinguality"": [""translation""], ""pretty_name"": ""IWSLT 2017"", ""size_categories"": [""1M>> from datasets import load_dataset
>>> ko_lima = load_dataset('taeshahn/ko-lima', 'plain') # or load_dataset('taeshahn/ko-lima')
>>> ko_lima_vicuna = load_dataset('taeshahn/ko-lima', 'vicuna')
```
```python
>>> ko_lima['train'][1025]
{
'conversations': [
'저는 케냐 출신입니다. 망명을 신청하고 싶은데 비자없이 네덜란드로 망명을 신청하기 위해 여행할 수 있나요? 케냐항공에서 여권을 소지한 경우 스키폴 공항으로 가는 비자없이 비행기에 탑승할 수 있나요?',
'항공사가 탑승을 허용할 가능성은 극히 낮습니다. 네덜란드에 입국하려는 케냐 시민은 비자, 체류 기간에 필요한 충분한 자금 증명, 다음 목적지 입국에 필요한 서류를 소지해야 합니다. 또한 항공사는 케냐에서 출발하는 승객에 대해 특별 조사를 실시해야 합니다:\n\n> 다음 공항에서 네덜란드로 운항하는 항공사:\n\n아부다비(AUH), 아크라(ACC), 바레인(BAH), 베이징 캐피탈(PEK), 보리스필(KBP), 카이로(CAI), 담만(DMM), 다르에스살람(DAR), 두바이(DXB), 엔테베(EBB), 광저우(CAN), 홍콩(HKG), 하마드(DOH), 이스탄불 아타튀르크(IST), 이스탄불 사비하곡첸(SAW), 요하네스버그(JNB), 키갈리(KGL), 킬리만자로(JRO), 쿠알라룸푸르(KUL), 쿠웨이트(KWI), 라고스(LOS), 모스크바 셰레메티예보(SVO), 무스카트(MCT), 나이로비(NB ), 뉴델리(DEL), 상파울루(GRU), 싱가포르(SIN) 및 테헤란(IKA)은 네덜란드 도착 시 적절한 여행 서류가 없어 입국할 수 없는 경우 해당 항공편의 모든 승객의 여행 서류 사본을 제출하셔야 합니다. 사본에는 여권/여행 서류의 데이터 페이지, 비자가 있는 페이지, 출국/통관 스탬프가 있는 페이지(해당되는 경우)가 포함되어야 합니다. 이러한 입국 규정을 준수하지 않을 경우 항공사에 승객 1인당 최대 11,250유로의 벌금이 부과될 수 있습니다.\n\n마지막 문장에서 볼 수 있듯이 항공사는 탑승 전에 여행 서류를 확인해야 하는 강력한 인센티브를 갖게 됩니다.',
'인용된 자료의 출처는 무엇인가요?',
'출처는 항공을 통해 해외로 여행하는 승객의 여권 및 비자 요건과 같은 서류 요건이 포함된 데이터베이스인 Timatic(여행 정보 매뉴얼 자동)에서 가져온 것입니다. 위의 답변은 목적지가 네덜란드이고 귀국/귀국 여행 증빙이 있으며 5일간 체류할 계획이며 케냐 여권의 유효 기간이 약 3년 남은 케냐 시민이 케냐에서 직접 여행하는 시나리오를 기반으로 한 것입니다.'
],
'source': 'multi_turn'
}
```
```python
>>> ko_lima_vicuna['train'][1025]
{
'id': '1026-multi_turn',
'conversations': [
{
'from': '사람',
'value': '저는 케냐 출신입니다. 망명을 신청하고 싶은데 비자없이 네덜란드로 망명을 신청하기 위해 여행할 수 있나요? 케냐항공에서 여권을 소지한 경우 스키폴 공항으로 가는 비자없이 비행기에 탑승할 수 있나요?'
},
{
'from': '어시스턴트',
'value': '항공사가 탑승을 허용할 가능성은 극히 낮습니다. 네덜란드에 입국하려는 케냐 시민은 비자, 체류 기간에 필요한 충분한 자금 증명, 다음 목적지 입국에 필요한 서류를 소지해야 합니다. 또한 항공사는 케냐에서 출발하는 승객에 대해 특별 조사를 실시해야 합니다:\n\n> 다음 공항에서 네덜란드로 운항하는 항공사:\n\n아부다비(AUH), 아크라(ACC), 바레인(BAH), 베이징 캐피탈(PEK), 보리스필(KBP), 카이로(CAI), 담만(DMM), 다르에스살람(DAR), 두바이(DXB), 엔테베(EBB), 광저우(CAN), 홍콩(HKG), 하마드(DOH), 이스탄불 아타튀르크(IST), 이스탄불 사비하곡첸(SAW), 요하네스버그(JNB), 키갈리(KGL), 킬리만자로(JRO), 쿠알라룸푸르(KUL), 쿠웨이트(KWI), 라고스(LOS), 모스크바 셰레메티예보(SVO), 무스카트(MCT), 나이로비(NB ), 뉴델리(DEL), 상파울루(GRU), 싱가포르(SIN) 및 테헤란(IKA)은 네덜란드 도착 시 적절한 여행 서류가 없어 입국할 수 없는 경우 해당 항공편의 모든 승객의 여행 서류 사본을 제출하셔야 합니다. 사본에는 여권/여행 서류의 데이터 페이지, 비자가 있는 페이지, 출국/통관 스탬프가 있는 페이지(해당되는 경우)가 포함되어야 합니다. 이러한 입국 규정을 준수하지 않을 경우 항공사에 승객 1인당 최대 11,250유로의 벌금이 부과될 수 있습니다.\n\n마지막 문장에서 볼 수 있듯이 항공사는 탑승 전에 여행 서류를 확인해야 하는 강력한 인센티브를 갖게 됩니다.'
},
{
'from': '사람',
'value': '인용된 자료의 출처는 무엇인가요?'
},
{
'from': '어시스턴트',
'value': '출처는 항공을 통해 해외로 여행하는 승객의 여권 및 비자 요건과 같은 서류 요건이 포함된 데이터베이스인 Timatic(여행 정보 매뉴얼 자동)에서 가져온 것입니다. 위의 답변은 목적지가 네덜란드이고 귀국/귀국 여행 증빙이 있으며 5일간 체류할 계획이며 케냐 여권의 유효 기간이 약 3년 남은 케냐 시민이 케냐에서 직접 여행하는 시나리오를 기반으로 한 것입니다.'
}
]
}
```
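The two configurations carry the same conversations in different layouts: `plain` stores a flat list of alternating turns, while `vicuna` wraps each turn in a dict tagged `사람` (human) or `어시스턴트` (assistant). A minimal sketch (not part of the dataset tooling) converting one into the other:

```python
# Convert a 'plain' record's conversations list into the 'vicuna' layout
# shown above: turns alternate between '사람' (human) and '어시스턴트' (assistant).
def to_vicuna(sample_id, conversations):
    roles = ('사람', '어시스턴트')
    return {
        'id': sample_id,
        'conversations': [
            {'from': roles[i % 2], 'value': turn}
            for i, turn in enumerate(conversations)
        ],
    }

converted = to_vicuna('1026-multi_turn', ['질문입니다.', '답변입니다.'])
```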
### Citation Information
```
@InProceedings{kolimadataset,
title = {KoLIMA: Korean LIMA Dataset for Efficient Instruction-tuning},
author = {Hahn, Taeseung},
year = {2023}
}
```"
kuotient/gsm8k-ko,"{""dataset_info"": {""features"": [{""name"": ""question"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""question_en"", ""dtype"": ""string""}, {""name"": ""answer_en"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 8792462, ""num_examples"": 7473}, {""name"": ""test"", ""num_bytes"": 1585126, ""num_examples"": 1319}], ""download_size"": 6575639, ""dataset_size"": 10377588}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""test"", ""path"": ""data/test-*""}]}], ""language"": [""ko""], ""pretty_name"": ""g""}","Translated using the `kuotient/Seagull-13B-translate` model.
## How to evaluate
```
git clone https://github.com/kuotient/lm-evaluation-harness.git
cd lm-evaluation-harness
pip install -e .
```
```
lm_eval --model hf \
--model_args pretrained=yanolja/EEVE-Korean-Instruct-2.8B-v1.0 \
--tasks gsm8k-ko \
--device cuda:0 \
--batch_size auto:4
```
Alternatively, with the original lm-evaluation-harness, place the `gsm8k-ko.yaml` file from this dataset into `lm-evaluation-harness/tasks/gsm8k-ko` and use it."
leey4n/KR3,"{""annotations_creators"": [], ""language_creators"": [], ""language"": [""ko""], ""license"": [""cc-by-nc-sa-4.0""], ""multilinguality"": [""monolingual""], ""pretty_name"": ""KR3"", ""size_categories"": [""100K
""num_docs"":
""title"":
""intro"":
""section_name"":
""previous_text"":
""question"":
""gold_section_text"":
""en_gold_section_text"":
""citations"":
}
```
## Licensing and Takedown
MegaWika 1.0 consists in part of documents scraped from across the web (based on citations linked in Wikipedia articles).
We do not own any of the scraped text nor do we claim copyright: text drawn from Wikipedia citations is meant for research use in algorithmic design and model training.
We release this dataset and all its contents under CC-BY-SA-4.0.
### Notice and Takedown Policy:
*NB*: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
And contact the authors.
*Take down*: We will comply with legitimate requests by removing the affected sources from the next release of the dataset.
## Usage
```
from datasets import load_dataset

# all of the dataset (not recommended)
dataset = load_dataset(""hltcoe/megawika-report-generation"")
# just the `all` section data (all splits)
dataset = load_dataset(""hltcoe/megawika-report-generation"", data_dir=""all"")
# just the `all` English test set (can replace with ""validation"" or ""train"", or other langs)
dataset = load_dataset(""hltcoe/megawika-report-generation"", data_dir=""all/en"", split=""test"")
```
### Dataset Curators
Released and maintained by the Johns Hopkins University Human Language Technology Center of Excellence (JHU/HLTCOE).
You can contact one of the MegaWika authors, including [Samuel Barham](mailto:samuel.barham@jhuapl.edu), [Orion Weller](mailto:oweller2@jhu.edu),
and [Ben van Durme](mailto:vandurme@jhu.edu) with questions.
### Licensing Information
Released under the [Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) license.
### Citation Information
```
@misc{barham2023megawika,
title={MegaWika: Millions of reports and their sources across 50 diverse languages},
author={Samuel Barham and Orion Weller and Michelle Yuan and Kenton Murray and Mahsa Yarmohammadi and Zhengping Jiang and Siddharth Vashishtha and Alexander Martin and Anqi Liu and Aaron Steven White and Jordan Boyd-Graber and Benjamin Van Durme},
year={2023},
eprint={2307.07049},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```"
joonhok-exo-ai/korean_law_open_data_precedents,"{""language"": [""ko""], ""tags"": [""legal""], ""size_categories"": [""10K,
# text containing w words (one per language) separated by underscores
'text': 'σπιτάκι πουλιών_ドーム_प्रयोगशाला कोट_мавпа-павук_gown',
# target word class name in English (key in translations.json)
'cls': 'dome',
# class ID from translations.json (0 to 999)
'cls_id': 538,
# target word (class name in the language of the audio)
'target_text': 'ドーム'
}
```
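Since the `text` field joins one word per language with underscores, the per-language words and the target's position can be recovered with a plain split. A small sketch using the sample above:

```python
# The 'text' field concatenates one word per language with underscores;
# split it apart and locate the target word.
text = 'σπιτάκι πουλιών_ドーム_प्रयोगशाला कोट_мавпа-павук_gown'
target_text = 'ドーム'

words = text.split('_')
target_index = words.index(target_text)
print(len(words), target_index)  # 5 1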
The dataset includes a `translations.json` file that maps ImageNet class names across all supported languages. Each entry contains:
- The English class name as the key
- Translations for all supported languages (`ar`, `el`, `en`, `hi`, `ja`, `ko`, `te`, `th`, `uk`, `zh-CN`)
- The ImageNet synset ID
- A unique class ID (0-999)
Example structure:
```json
{
""tench"": {
""synset_id"": ""n01440764"",
""cls_id"": 0,
""ar"": ""سمك البنش"",
""el"": ""είδος κυπρίνου"",
""en"": ""tench"",
""hi"": ""टेंच"",
""ja"": ""テンチ"",
""ko"": ""텐치"",
""te"": ""టెంచ్"",
""th"": ""ปลาเทนช์"",
""uk"": ""линь"",
""zh-CN"": ""丁鱥""
}
}
```
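Because every entry in `translations.json` keys on the English class name, going from a translated word back to its class requires a reverse lookup. A minimal sketch (not part of the dataset tooling), using the structure shown above:

```python
# Map a word in a given language back to its English class name and class ID.
translations = {
    'tench': {
        'synset_id': 'n01440764',
        'cls_id': 0,
        'en': 'tench',
        'ja': 'テンチ',
        'uk': 'линь',
    },
}

def lookup(word, lang):
    # Scan entries for one whose translation in `lang` matches `word`.
    for cls_name, entry in translations.items():
        if entry.get(lang) == word:
            return cls_name, entry['cls_id']
    return None

print(lookup('テンチ', 'ja'))  # ('tench', 0)
```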
## Dataset Variants
We release three variants of the dataset:
- Symile-M3-2 with 2 languages: English (`en`) and Greek (`el`).
- Symile-M3-5 with 5 languages: English (`en`), Greek (`el`), Hindi (`hi`), Japanese (`ja`), and Ukrainian (`uk`).
- Symile-M3-10 with 10 languages: Arabic (`ar`), Greek (`el`), English (`en`), Hindi (`hi`), Japanese (`ja`), Korean (`ko`), Telugu (`te`), Thai (`th`), Ukrainian (`uk`), and Chinese (`zh-CN`).
Each variant is available in four sizes:
- Large (`l`): 10M training samples, 500K validation samples, 500K test samples
- Medium (`m`): 5M training samples, 250K validation samples, 250K test samples
- Small (`s`): 1M training samples, 50K validation samples, 50K test samples
- Extra Small (`xs`): 500K training samples, 25K validation samples, 25K test samples
## Usage
Before using the dataset, ensure you have the required audio and image processing libraries installed:
```bash
pip install librosa soundfile pillow
```
To load a specific version of Symile-M3, use a configuration name following the pattern `symile-m3-{num_langs}-{size}` where:
- `num_langs` is `2`, `5`, or `10`
- `size` is `xs`, `s`, `m`, or `l`
For example, to load the `xs` version of Symile-M3-5:
```python
from datasets import load_dataset
dataset = load_dataset(""arsaporta/symile-m3"", ""symile-m3-5-xs"")
print(dataset['train'][0]) # access first train sample
print(len(dataset['train'])) # get number of train samples
```
To process the dataset without loading it entirely into memory, use streaming mode to load samples one at a time:
```python
from datasets import load_dataset
dataset = load_dataset(""arsaporta/symile-m3"", ""symile-m3-5-xs"", streaming=True)
print(next(iter(dataset['train'])))
```
To download the dataset for offline use:
```python
import os
from datasets import load_dataset
from huggingface_hub import snapshot_download
local_dir = ""./symile-m3-5-xs"" # where to save
# download parquet files
snapshot_download(
repo_id=""arsaporta/symile-m3"",
repo_type=""dataset"",
local_dir=local_dir,
allow_patterns=[""symile-m3-5-xs/*""] # which configuration to download
)
# load the downloaded parquet files
dataset = load_dataset(
""parquet"",
data_files={
""train"": os.path.join(data_dir, ""train-*.parquet""),
""validation"": os.path.join(data_dir, ""val-*.parquet""),
""test"": os.path.join(data_dir, ""test-*.parquet"")
}
)
```
## Working with Raw Data
To work directly with the source images (jpeg) and audio (mp3):
1. Download the source data:
- **ImageNet:** Get the training data from [Kaggle's ImageNet Challenge](https://www.kaggle.com/c/imagenet-object-localization-challenge/data?select=ILSVRC)
- **Common Voice:** Download your needed languages from [Common Voice](https://commonvoice.mozilla.org/en/datasets):
* All languages use Common Voice v16.0, except English which uses v14.0
* Required languages vary by configuration:
- Symile-M3-2: English (`en`), Greek (`el`)
- Symile-M3-5: English, Greek, Hindi (`hi`), Japanese (`ja`), Ukrainian (`uk`)
- Symile-M3-10: All of the above plus Arabic (`ar`), Korean (`ko`), Telugu (`te`), Thai (`th`), Chinese (`zh-CN`)
2. Access the dataset CSV files:
- Find them in the `.csv_files` directory, organized by configuration (e.g., `symile-m3-2-xs`, `symile-m3-10-l`)
- Each configuration contains `train.csv`, `val.csv`, and `test.csv`
- CSV paths match the default extraction paths of ImageNet (`ILSVRC/Data/CLS-LOC/train/...`) and Common Voice (`cv/{lang}/clips/...`)
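Since the CSV paths are relative to the default extraction layouts, resolving them to local files is just a matter of joining them onto wherever you extracted the two archives. A hypothetical sketch (the roots below are placeholders, and you should check the actual CSV column names):

```python
import os

# Roots where ImageNet and Common Voice were extracted, chosen so that the
# CSV-relative paths (ILSVRC/... and cv/{lang}/clips/...) resolve beneath them.
IMAGENET_ROOT = '/data'
CV_ROOT = '/data'

def resolve(image_rel, audio_rel):
    # Return absolute paths for one CSV row's image and audio files.
    return (os.path.join(IMAGENET_ROOT, image_rel),
            os.path.join(CV_ROOT, audio_rel))

img, aud = resolve('ILSVRC/Data/CLS-LOC/train/n01440764/x.JPEG',
                   'cv/ja/clips/sample.mp3')
```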
## Citation
```
@inproceedings{saporta2024symile,
title = {Contrasting with Symile: Simple Model-Agnostic Representation Learning for Unlimited Modalities},
author = {Saporta, Adriel and Puli, Aahlad and Goldstein, Mark and Ranganath, Rajesh},
booktitle = {Advances in Neural Information Processing Systems},
year = {2024}
}
```"
bltlab/lr-sum,"{""license"": ""cc-by-4.0"", ""task_categories"": [""summarization"", ""text-generation""], ""annotations_creators"": [""found""], ""language_creators"": [""found""], ""language"": [""am"", ""az"", ""bn"", ""bo"", ""bs"", ""ku"", ""zh"", ""el"", ""en"", ""fa"", ""fr"", ""ht"", ""ha"", ""hy"", ""id"", ""ka"", ""km"", ""rw"", ""ko"", ""lo"", ""mk"", ""my"", ""nd"", ""pt"", ""ps"", ""ru"", ""sn"", ""so"", ""es"", ""sq"", ""sr"", ""sw"", ""th"", ""ti"", ""tr"", ""uk"", ""ur"", ""uz"", ""vi""], ""pretty_name"": ""LR-Sum"", ""size_categories"": [""100K
# HALvest: Open Scientific Papers Harvested from HAL (Unfiltered)
## Dataset Description
- **Repository:** [GitHub](https://github.com/Madjakul/HALvesting/tree/main)
## Dataset Summary
### Overview
This is the unfiltered version of [HALvest](https://huggingface.co/datasets/Madjakul/HALvest), comprising the full text of open papers found on [Hyper Articles en Ligne (HAL)](https://hal.science/), with extra fields for potential filtering. Our dump is mostly English/French but gathers papers written in 56 languages across 13 domains.
You can download the dataset using Hugging Face datasets:
```py
from datasets import load_dataset
ds = load_dataset(""almanach/HALvest"", ""en"")
```
### Details
Building the dataset is a three steps process: data fetching from HAL, data merging and data enriching.
1. We first request [HAL's API](https://api.archives-ouvertes.fr/docs) in order to gather open research papers and parse the responses -- effectively sorting papers by language. Then, we download the PDFs of the fetched data.
2. Using [GROBID](https://github.com/kermitt2/grobid), we convert each PDF to an `xml-tei` format in order to have structured data. We convert each `xml-tei` file to a `txt` format before concatenating it with the paper's.
3. Finally, we compute some statistics about each document.
### Languages
Please, note that the number of tokens is highly inflated in the raw version of the dataset because of badly encoded PDFs, translating to gibberish documents/texts.
ISO-639|Language|# Documents|# mT5 Tokens
-------|--------|-----------|--------
en|English|464,679|8,158,933,235
fr|French|199,216|9,018,529,985
es|Spanish|2,975|69,221,667
it|Italian|1,172|48,747,986
pt|Portuguese|934|32,918,832
de|German|652|12,225,960
ru|Russian|245|5,763,532
zh|Chinese|160|2,861,585
eu|Basque|113|2,297,485
ar|Arabic|92|2,167,431
ja|Japanese|92|547,861
el|Greek|54|1,738,878
pl|Polish|43|987,878
ro|Romanian|39|1,298,901
uk|Ukrainian|34|837,793
vi|Vietnamese|29|436,660
ca|Catalan|28|975,078
da|Danish|27|961,955
oc|Occitan|26|285,334
br|Breton|24|998,088
sr|Serbian|24|336,878
ko|Korean|17|226,268
fa|Persian|17|213,903
tr|Turkish|17|149,718
hu|Hungarian|14|577,568
eo|Esperanto|14|105,286
hy|Armenian|10|127,988
cs|Czech|9|712,263
bg|Bulgarian|9|208,763
sq|Albanian|9|98,009
id|Indonesian|9|53,075
he|Hebrew|8|61,283
hr|Croatian|8|40,621
et|Estonian|7|20,405
sv|Swedish|6|270,642
no|Norwegian|6|62,767
az|Azerbaijani|5|52,762
fi|Finnish|4|60,507
tet|Tetum|4|18,485
lt|Lithuanian|3|16,572
mr|Marathi|3|16,386
hi|Hindi|3|3,490
ie|Interlingue|2|140,383
ta|Tamil|2|77,087
sw|Swahili|2|73,921
tl|Tagalog|2|35,962
gl|Galician|2|29,688
mk|Macedonian|2|14,654
th|Thai|1|70,909
tk|Turkmen|1|66,104
bs|Bosnian|1|63,018
kk|Kazakh|1|41,839
sl|Slovenian|1|22,844
sk|Slovak|1|12,997
co|Corsican|1|9,083
gn|Guarani|1|1,566
bo|Tibetan|1|579
### Domains
Please, note that the number of tokens is highly inflated in the raw version of the dataset because of badly encoded PDFs, translating to gibberish documents/texts.
Domain|Code|# Documents|# mT5 Tokens
------|----|-----------|------------
Humanities and Social Sciences|shs|156,566|5,614,423,171
Computer Science|info|148,316|2,573,673,455
Life Sciences|sdv|115,744|3,145,323,780
Engineering Sciences|spi|102,751|2,254,653,825
Physics|phys|65,991|1,503,190,749
Mathematics|math|62,921|1,638,500,361
Chemical Science|chim|40,012|899,507,319
Environmental Science|sde|31,575|579,076,669
Sciences of the Universe|sdu|23,557|682,356,264
Cognitive science|scco|11,772|227,487,096
Statistics|stat|10,579|184,678,350
Quantitative Finance|qfin|3,451|68,518,636
Nonlinear Sciences|nlin|1,972|30,694,088
You can browse through every domain and sub-domain here: https://hal.science/browse/domain.
## Considerations for Using the Data
The corpus is extracted from the [HAL's open archive](https://hal.science/) which distributes scientific publications following open access principles. The corpus is made up of both creative commons licensed and copyrighted documents (distribution authorized on HAL by the publisher). This must be considered prior to using this dataset for any purpose, other than training deep learning models, data mining etc. We do not own any of the text from which these data has been extracted.
## Citation
```bib
@misc{kulumba2024harvestingtextualstructureddata,
title={Harvesting Textual and Structured Data from the HAL Publication Repository},
author={Francis Kulumba and Wissam Antoun and Guillaume Vimont and Laurent Romary},
year={2024},
eprint={2407.20595},
archivePrefix={arXiv},
primaryClass={cs.DL},
url={https://arxiv.org/abs/2407.20595},
}
```
## Dataset Copyright
The license terms for HALvest strictly follow those of HAL. Please refer to the license below when using this dataset.
- [HAL license](https://doc.archives-ouvertes.fr/en/legal-aspects/)"
davidkim205/ko_hellaswag,"{""language"": [""ko""]}","# Korean HellaSwag
The English hellaswag dataset translated into Korean.
https://huggingface.co/datasets/Rowan/hellaswag
## Structure
```jsonl
{
""ind"": 24,
""activity_label"": ""지붕 슁글 제거"",
""ctx_a"": ""한 남자가 지붕 위에 앉아 있다."",
""ctx_b"": ""그"",
""ctx"": ""한 남자가 지붕 위에 앉아 있다. 그"",
""endings"": [
""스키 한 켤레를 감싸기 위해 랩을 사용하고 있습니다."",
""레벨 타일을 뜯어내고 있습니다."",
""루빅스 큐브를 들고 있습니다."",
""지붕에 지붕을 올리기 시작합니다.""
],
""source_id"": ""activitynet~v_-JhWjGDPHMY"",
""split"": ""val"",
""split_type"": ""indomain"",
""label"": ""3""
}
{...}
```"
squarelike/OpenOrca-gugugo-ko,"{""language"": [""ko""], ""license"": ""mit"", ""task_categories"": [""conversational"", ""text-classification"", ""token-classification"", ""table-question-answering"", ""question-answering"", ""zero-shot-classification"", ""summarization"", ""feature-extraction"", ""text-generation"", ""text2text-generation""], ""pretty_name"": ""OpenOrca"", ""size_categories"": [""10M🐋 The OpenOrca Dataset! 🐋

We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## Mistral-7B-OpenOrca
Our [latest model](https://huggingface.co/spaces/Open-Orca/Mistral-7B-OpenOrca), the first 7B to score better overall than all previous models below 30B.
98% of Llama2-70b-chat's performance, in a completely open 7B!
## OpenOrca-Platypus2-13B
Our [third model](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
http://AlignmentLab.ai:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx
Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
# Languages
The language of the data is primarily English.
# Dataset Structure
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
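Because the `id` field embeds its source submix tag, the provenance of a datapoint can be recovered directly from it. A minimal sketch (not official tooling), assuming ids look like `niv.242684` (tag, a separator, then a number; adjust the separator to match the actual data):

```python
# Recover the FLAN submix tag ('niv', 't0', 'cot', or 'flan') embedded
# at the start of an OpenOrca 'id' value.
SUBMIXES = ('niv', 't0', 'cot', 'flan')

def submix_of(sample_id):
    # Return the submix tag the id starts with, or None if unrecognized.
    for tag in SUBMIXES:
        if sample_id.startswith(tag + '.'):
            return tag
    return None

print(submix_of('niv.242684'))  # niv
```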
## Data Splits
The data is unsplit.
# Dataset Creation
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This ""reasoning trace"" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
# Dataset Use
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and ""Teknium""},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{touvron2023llama1,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```"
devngho/culturax-mini-nonshuffled,"{""dataset_info"": [{""config_name"": ""af"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 47914226, ""num_examples"": 8265}], ""download_size"": 29299096, ""dataset_size"": 47914226}, {""config_name"": ""als"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 48403, ""num_examples"": 69}], ""download_size"": 37780, ""dataset_size"": 48403}, {""config_name"": ""am"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 21830519, ""num_examples"": 2433}], ""download_size"": 10734167, ""dataset_size"": 21830519}, {""config_name"": ""an"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 16821, ""num_examples"": 27}], ""download_size"": 8251, ""dataset_size"": 16821}, {""config_name"": ""ar"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 4151742409, ""num_examples"": 740280}], ""download_size"": 2037845118, ""dataset_size"": 4151742409}, {""config_name"": ""arz"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": 
""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 553243, ""num_examples"": 716}], ""download_size"": 242572, ""dataset_size"": 553243}, {""config_name"": ""as"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 5152044, ""num_examples"": 526}], ""download_size"": 1989351, ""dataset_size"": 5152044}, {""config_name"": ""ast"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 35333, ""num_examples"": 90}], ""download_size"": 12091, ""dataset_size"": 35333}, {""config_name"": ""av"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 5927, ""num_examples"": 4}], ""download_size"": 13686, ""dataset_size"": 5927}, {""config_name"": ""az"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 178968096, ""num_examples"": 50845}], ""download_size"": 102873990, ""dataset_size"": 178968096}, {""config_name"": ""azb"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", 
""num_bytes"": 289670, ""num_examples"": 298}], ""download_size"": 102227, ""dataset_size"": 289670}, {""config_name"": ""ba"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3869656, ""num_examples"": 720}], ""download_size"": 1866021, ""dataset_size"": 3869656}, {""config_name"": ""be"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 118923540, ""num_examples"": 16435}], ""download_size"": 60378152, ""dataset_size"": 118923540}, {""config_name"": ""bg"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1568660631, ""num_examples"": 241318}], ""download_size"": 762579995, ""dataset_size"": 1568660631}, {""config_name"": ""bh"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1328, ""num_examples"": 3}], ""download_size"": 4781, ""dataset_size"": 1328}, {""config_name"": ""bn"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 822984361, ""num_examples"": 124366}], ""download_size"": 308588716, ""dataset_size"": 822984361}, {""config_name"": ""bo"", ""features"": 
[{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 9132545, ""num_examples"": 542}], ""download_size"": 2605240, ""dataset_size"": 9132545}, {""config_name"": ""bpy"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 245749, ""num_examples"": 51}], ""download_size"": 78424, ""dataset_size"": 245749}, {""config_name"": ""br"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 641444, ""num_examples"": 438}], ""download_size"": 366282, ""dataset_size"": 641444}, {""config_name"": ""bs"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 4982, ""num_examples"": 12}], ""download_size"": 10971, ""dataset_size"": 4982}, {""config_name"": ""bxr"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 340, ""num_examples"": 1}], ""download_size"": 4080, ""dataset_size"": 340}, {""config_name"": ""ca"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], 
""splits"": [{""name"": ""train"", ""num_bytes"": 537336240, ""num_examples"": 155318}], ""download_size"": 334120761, ""dataset_size"": 537336240}, {""config_name"": ""ce"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 158946, ""num_examples"": 173}], ""download_size"": 64453, ""dataset_size"": 158946}, {""config_name"": ""ceb"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 7978160, ""num_examples"": 2639}], ""download_size"": 4018429, ""dataset_size"": 7978160}, {""config_name"": ""ckb"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 14520157, ""num_examples"": 1720}], ""download_size"": 6663902, ""dataset_size"": 14520157}, {""config_name"": ""cs"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2250377897, ""num_examples"": 653506}], ""download_size"": 1493825235, ""dataset_size"": 2250377897}, {""config_name"": ""cv"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1267176, ""num_examples"": 226}], ""download_size"": 624139, ""dataset_size"": 
1267176}, {""config_name"": ""cy"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 21180962, ""num_examples"": 5500}], ""download_size"": 13063947, ""dataset_size"": 21180962}, {""config_name"": ""da"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 966271184, ""num_examples"": 254298}], ""download_size"": 582888859, ""dataset_size"": 966271184}, {""config_name"": ""de"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 15649594701, ""num_examples"": 4200175}], ""download_size"": 9734776977, ""dataset_size"": 15649594701}, {""config_name"": ""dsb"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 303, ""num_examples"": 1}], ""download_size"": 3858, ""dataset_size"": 303}, {""config_name"": ""dv"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3735234, ""num_examples"": 667}], ""download_size"": 1492894, ""dataset_size"": 3735234}, {""config_name"": ""el"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": 
""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2712448803, ""num_examples"": 514302}], ""download_size"": 1345073560, ""dataset_size"": 2712448803}, {""config_name"": ""eml"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 401, ""num_examples"": 1}], ""download_size"": 4541, ""dataset_size"": 401}, {""config_name"": ""en"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 118589793215, ""num_examples"": 32410657}], ""download_size"": 73211369011, ""dataset_size"": 118589793215}, {""config_name"": ""eo"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 35738000, ""num_examples"": 4601}], ""download_size"": 21992261, ""dataset_size"": 35738000}, {""config_name"": ""es"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 16463142543, ""num_examples"": 4509376}], ""download_size"": 10151881762, ""dataset_size"": 16463142543}, {""config_name"": ""et"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": 
[{""name"": ""train"", ""num_bytes"": 380391945, ""num_examples"": 80048}], ""download_size"": 243102579, ""dataset_size"": 380391945}, {""config_name"": ""eu"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 58524898, ""num_examples"": 15988}], ""download_size"": 35529943, ""dataset_size"": 58524898}, {""config_name"": ""fa"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3143144959, ""num_examples"": 595311}], ""download_size"": 1463543689, ""dataset_size"": 3143144959}, {""config_name"": ""fi"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1214683096, ""num_examples"": 304677}], ""download_size"": 769607927, ""dataset_size"": 1214683096}, {""config_name"": ""fr"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 13071329986, ""num_examples"": 3637543}], ""download_size"": 7968623946, ""dataset_size"": 13071329986}, {""config_name"": ""fy"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 6154765, ""num_examples"": 2233}], ""download_size"": 3810893, 
""dataset_size"": 6154765}, {""config_name"": ""ga"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 15239433, ""num_examples"": 3043}], ""download_size"": 8910658, ""dataset_size"": 15239433}, {""config_name"": ""gd"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 272932, ""num_examples"": 84}], ""download_size"": 165365, ""dataset_size"": 272932}, {""config_name"": ""gl"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 67394305, ""num_examples"": 17860}], ""download_size"": 41760462, ""dataset_size"": 67394305}, {""config_name"": ""gn"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 293, ""num_examples"": 1}], ""download_size"": 3758, ""dataset_size"": 293}, {""config_name"": ""gom"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 79389, ""num_examples"": 7}], ""download_size"": 31671, ""dataset_size"": 79389}, {""config_name"": ""gu"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": 
""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 86897373, ""num_examples"": 11629}], ""download_size"": 33664792, ""dataset_size"": 86897373}, {""config_name"": ""he"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 425728060, ""num_examples"": 46540}], ""download_size"": 215527218, ""dataset_size"": 425728060}, {""config_name"": ""hi"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1476484284, ""num_examples"": 196654}], ""download_size"": 565966884, ""dataset_size"": 1476484284}, {""config_name"": ""hr"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1657545, ""num_examples"": 4607}], ""download_size"": 1061804, ""dataset_size"": 1657545}, {""config_name"": ""hsb"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 27010, ""num_examples"": 42}], ""download_size"": 18470, ""dataset_size"": 27010}, {""config_name"": ""hu"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 
1819937397, ""num_examples"": 441322}], ""download_size"": 1160423578, ""dataset_size"": 1819937397}, {""config_name"": ""hy"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 159683892, ""num_examples"": 29645}], ""download_size"": 74782480, ""dataset_size"": 159683892}, {""config_name"": ""ia"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 6598, ""num_examples"": 6}], ""download_size"": 5808, ""dataset_size"": 6598}, {""config_name"": ""id"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 629329515, ""num_examples"": 232514}], ""download_size"": 352896945, ""dataset_size"": 629329515}, {""config_name"": ""ilo"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 5183, ""num_examples"": 23}], ""download_size"": 5038, ""dataset_size"": 5183}, {""config_name"": ""io"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2637, ""num_examples"": 11}], ""download_size"": 4792, ""dataset_size"": 2637}, {""config_name"": ""is"", ""features"": [{""name"": ""text"", 
""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 104256268, ""num_examples"": 23736}], ""download_size"": 63804435, ""dataset_size"": 104256268}, {""config_name"": ""it"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 7149380555, ""num_examples"": 2113099}], ""download_size"": 4514678157, ""dataset_size"": 7149380555}, {""config_name"": ""ja"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 5175409022, ""num_examples"": 1111885}], ""download_size"": 2918380663, ""dataset_size"": 5175409022}, {""config_name"": ""jbo"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 22447, ""num_examples"": 13}], ""download_size"": 26662, ""dataset_size"": 22447}, {""config_name"": ""jv"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 248112, ""num_examples"": 21}], ""download_size"": 143652, ""dataset_size"": 248112}, {""config_name"": ""ka"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": 
""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 260720432, ""num_examples"": 31203}], ""download_size"": 94582642, ""dataset_size"": 260720432}, {""config_name"": ""kk"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 205739783, ""num_examples"": 27340}], ""download_size"": 96662550, ""dataset_size"": 205739783}, {""config_name"": ""km"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 69303374, ""num_examples"": 10132}], ""download_size"": 25063681, ""dataset_size"": 69303374}, {""config_name"": ""kn"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 109114779, ""num_examples"": 13521}], ""download_size"": 41841503, ""dataset_size"": 109114779}, {""config_name"": ""ko"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1080491739, ""num_examples"": 205573}], ""download_size"": 640352203, ""dataset_size"": 1080491739}, {""config_name"": ""krc"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 5213, ""num_examples"": 17}], 
""download_size"": 4418, ""dataset_size"": 5213}, {""config_name"": ""ku"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 12150347, ""num_examples"": 2953}], ""download_size"": 7283637, ""dataset_size"": 12150347}, {""config_name"": ""kv"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 4479, ""num_examples"": 14}], ""download_size"": 5159, ""dataset_size"": 4479}, {""config_name"": ""kw"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 201, ""num_examples"": 1}], ""download_size"": 3138, ""dataset_size"": 201}, {""config_name"": ""ky"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 37686163, ""num_examples"": 5709}], ""download_size"": 18097022, ""dataset_size"": 37686163}, {""config_name"": ""la"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 4471637, ""num_examples"": 490}], ""download_size"": 2569958, ""dataset_size"": 4471637}, {""config_name"": ""lb"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": 
""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 6975194, ""num_examples"": 1659}], ""download_size"": 4287815, ""dataset_size"": 6975194}, {""config_name"": ""lez"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 11712, ""num_examples"": 18}], ""download_size"": 7371, ""dataset_size"": 11712}, {""config_name"": ""li"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 478, ""num_examples"": 2}], ""download_size"": 3584, ""dataset_size"": 478}, {""config_name"": ""lmo"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 15093, ""num_examples"": 35}], ""download_size"": 10219, ""dataset_size"": 15093}, {""config_name"": ""lo"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 16677427, ""num_examples"": 2178}], ""download_size"": 6323357, ""dataset_size"": 16677427}, {""config_name"": ""lt"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 594627488, 
""num_examples"": 133398}], ""download_size"": 375349095, ""dataset_size"": 594627488}, {""config_name"": ""lv"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 333697262, ""num_examples"": 71366}], ""download_size"": 206347782, ""dataset_size"": 333697262}, {""config_name"": ""mai"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 459, ""num_examples"": 1}], ""download_size"": 4860, ""dataset_size"": 459}, {""config_name"": ""mg"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 5497530, ""num_examples"": 1159}], ""download_size"": 3005490, ""dataset_size"": 5497530}, {""config_name"": ""mhr"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 438800, ""num_examples"": 79}], ""download_size"": 212268, ""dataset_size"": 438800}, {""config_name"": ""min"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 40467, ""num_examples"": 14}], ""download_size"": 22942, ""dataset_size"": 40467}, {""config_name"": ""mk"", ""features"": [{""name"": ""text"", ""dtype"": 
""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 149674631, ""num_examples"": 27628}], ""download_size"": 70500031, ""dataset_size"": 149674631}, {""config_name"": ""ml"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 213710028, ""num_examples"": 26931}], ""download_size"": 78315697, ""dataset_size"": 213710028}, {""config_name"": ""mn"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 118599039, ""num_examples"": 19288}], ""download_size"": 56664207, ""dataset_size"": 118599039}, {""config_name"": ""mr"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 183531742, ""num_examples"": 22666}], ""download_size"": 69818044, ""dataset_size"": 183531742}, {""config_name"": ""mrj"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 15670, ""num_examples"": 11}], ""download_size"": 12574, ""dataset_size"": 15670}, {""config_name"": ""ms"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", 
""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3568236, ""num_examples"": 2382}], ""download_size"": 1955952, ""dataset_size"": 3568236}, {""config_name"": ""mt"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 8906317, ""num_examples"": 1513}], ""download_size"": 4931205, ""dataset_size"": 8906317}, {""config_name"": ""my"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 79209425, ""num_examples"": 8656}], ""download_size"": 27359509, ""dataset_size"": 79209425}, {""config_name"": ""mzn"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 8611, ""num_examples"": 19}], ""download_size"": 7329, ""dataset_size"": 8611}, {""config_name"": ""nah"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 208, ""num_examples"": 1}], ""download_size"": 3179, ""dataset_size"": 208}, {""config_name"": ""nds"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 536401, ""num_examples"": 151}], ""download_size"": 328368, ""dataset_size"": 536401}, 
{""config_name"": ""ne"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 218530089, ""num_examples"": 31240}], ""download_size"": 81244964, ""dataset_size"": 218530089}, {""config_name"": ""new"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 146180, ""num_examples"": 43}], ""download_size"": 68225, ""dataset_size"": 146180}, {""config_name"": ""nl"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3423958390, ""num_examples"": 1173927}], ""download_size"": 2104260605, ""dataset_size"": 3423958390}, {""config_name"": ""nn"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 573924, ""num_examples"": 1261}], ""download_size"": 376802, ""dataset_size"": 573924}, {""config_name"": ""no"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 886293156, ""num_examples"": 189073}], ""download_size"": 544754812, ""dataset_size"": 886293156}, {""config_name"": ""oc"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, 
{""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 56775, ""num_examples"": 106}], ""download_size"": 37657, ""dataset_size"": 56775}, {""config_name"": ""or"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 10376851, ""num_examples"": 1535}], ""download_size"": 4026133, ""dataset_size"": 10376851}, {""config_name"": ""os"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 969628, ""num_examples"": 86}], ""download_size"": 366851, ""dataset_size"": 969628}, {""config_name"": ""pa"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 53247320, ""num_examples"": 6470}], ""download_size"": 20258193, ""dataset_size"": 53247320}, {""config_name"": ""pl"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 4797322841, ""num_examples"": 1421672}], ""download_size"": 3144888097, ""dataset_size"": 4797322841}, {""config_name"": ""pms"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 
193689, ""num_examples"": 76}], ""download_size"": 99115, ""dataset_size"": 193689}, {""config_name"": ""pnb"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2324564, ""num_examples"": 156}], ""download_size"": 1081639, ""dataset_size"": 2324564}, {""config_name"": ""ps"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 21838581, ""num_examples"": 3769}], ""download_size"": 10449929, ""dataset_size"": 21838581}, {""config_name"": ""pt"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 6030569925, ""num_examples"": 1902897}], ""download_size"": 3753416975, ""dataset_size"": 6030569925}, {""config_name"": ""qu"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3248, ""num_examples"": 12}], ""download_size"": 4834, ""dataset_size"": 3248}, {""config_name"": ""ro"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1647222938, ""num_examples"": 403254}], ""download_size"": 1029473728, ""dataset_size"": 1647222938}, {""config_name"": ""ru"", ""features"": [{""name"": 
""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 51932917568, ""num_examples"": 7993109}], ""download_size"": 25902408245, ""dataset_size"": 51932917568}, {""config_name"": ""sa"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 353271, ""num_examples"": 163}], ""download_size"": 109390, ""dataset_size"": 353271}, {""config_name"": ""sah"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1626716, ""num_examples"": 221}], ""download_size"": 751737, ""dataset_size"": 1626716}, {""config_name"": ""sd"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 8935394, ""num_examples"": 1092}], ""download_size"": 4342313, ""dataset_size"": 8935394}, {""config_name"": ""sh"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 119845, ""num_examples"": 456}], ""download_size"": 28082, ""dataset_size"": 119845}, {""config_name"": ""si"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": 
""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 79363360, ""num_examples"": 7537}], ""download_size"": 31632414, ""dataset_size"": 79363360}, {""config_name"": ""sk"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 656920389, ""num_examples"": 185825}], ""download_size"": 435390243, ""dataset_size"": 656920389}, {""config_name"": ""sl"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 345866750, ""num_examples"": 73354}], ""download_size"": 222614687, ""dataset_size"": 345866750}, {""config_name"": ""sq"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 150747645, ""num_examples"": 52056}], ""download_size"": 90031594, ""dataset_size"": 150747645}, {""config_name"": ""sr"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 330933007, ""num_examples"": 40532}], ""download_size"": 164474120, ""dataset_size"": 330933007}, {""config_name"": ""su"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 71933, ""num_examples"": 16}], 
""download_size"": 50712, ""dataset_size"": 71933}, {""config_name"": ""sv"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1658749441, ""num_examples"": 497092}], ""download_size"": 1026016237, ""dataset_size"": 1658749441}, {""config_name"": ""sw"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2120663, ""num_examples"": 665}], ""download_size"": 1172807, ""dataset_size"": 2120663}, {""config_name"": ""ta"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 433993378, ""num_examples"": 47285}], ""download_size"": 153834177, ""dataset_size"": 433993378}, {""config_name"": ""te"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 142492032, ""num_examples"": 18229}], ""download_size"": 55247684, ""dataset_size"": 142492032}, {""config_name"": ""tg"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 36581062, ""num_examples"": 4838}], ""download_size"": 17091868, ""dataset_size"": 36581062}, {""config_name"": ""th"", ""features"": [{""name"": ""text"", ""dtype"": 
""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1565290526, ""num_examples"": 209606}], ""download_size"": 608699456, ""dataset_size"": 1565290526}, {""config_name"": ""tk"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 143460, ""num_examples"": 144}], ""download_size"": 82943, ""dataset_size"": 143460}, {""config_name"": ""tl"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 22766411, ""num_examples"": 3485}], ""download_size"": 12970450, ""dataset_size"": 22766411}, {""config_name"": ""tr"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2979296527, ""num_examples"": 942075}], ""download_size"": 1803862992, ""dataset_size"": 2979296527}, {""config_name"": ""tt"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 15703273, ""num_examples"": 2181}], ""download_size"": 7741613, ""dataset_size"": 15703273}, {""config_name"": ""ug"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", 
""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 5337670, ""num_examples"": 470}], ""download_size"": 2355545, ""dataset_size"": 5337670}, {""config_name"": ""uk"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2613621381, ""num_examples"": 447405}], ""download_size"": 1303094462, ""dataset_size"": 2613621381}, {""config_name"": ""ur"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 178978059, ""num_examples"": 27573}], ""download_size"": 85756317, ""dataset_size"": 178978059}, {""config_name"": ""uz"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3642685, ""num_examples"": 872}], ""download_size"": 2153925, ""dataset_size"": 3642685}, {""config_name"": ""vec"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 266, ""num_examples"": 1}], ""download_size"": 3581, ""dataset_size"": 266}, {""config_name"": ""vi"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2709971576, ""num_examples"": 576063}], ""download_size"": 1439675245, 
""dataset_size"": 2709971576}, {""config_name"": ""vo"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 39947, ""num_examples"": 66}], ""download_size"": 15487, ""dataset_size"": 39947}, {""config_name"": ""wa"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3609, ""num_examples"": 14}], ""download_size"": 4680, ""dataset_size"": 3609}, {""config_name"": ""war"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 56875, ""num_examples"": 237}], ""download_size"": 27169, ""dataset_size"": 56875}, {""config_name"": ""wuu"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 897, ""num_examples"": 2}], ""download_size"": 4869, ""dataset_size"": 897}, {""config_name"": ""xal"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 532, ""num_examples"": 1}], ""download_size"": 5361, ""dataset_size"": 532}, {""config_name"": ""xmf"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": 
""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 71001, ""num_examples"": 97}], ""download_size"": 21953, ""dataset_size"": 71001}, {""config_name"": ""yi"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 13129826, ""num_examples"": 1412}], ""download_size"": 5978777, ""dataset_size"": 13129826}, {""config_name"": ""yo"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 844, ""num_examples"": 2}], ""download_size"": 4927, ""dataset_size"": 844}, {""config_name"": ""zh"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""timestamp"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 9206626122, ""num_examples"": 2186246}], ""download_size"": 6424105929, ""dataset_size"": 9206626122}], ""configs"": [{""config_name"": ""af"", ""data_files"": [{""split"": ""train"", ""path"": ""af/train-*""}]}, {""config_name"": ""als"", ""data_files"": [{""split"": ""train"", ""path"": ""als/train-*""}]}, {""config_name"": ""am"", ""data_files"": [{""split"": ""train"", ""path"": ""am/train-*""}]}, {""config_name"": ""an"", ""data_files"": [{""split"": ""train"", ""path"": ""an/train-*""}]}, {""config_name"": ""ar"", ""data_files"": [{""split"": ""train"", ""path"": ""ar/train-*""}]}, {""config_name"": ""arz"", ""data_files"": [{""split"": ""train"", ""path"": ""arz/train-*""}]}, {""config_name"": ""as"", ""data_files"": [{""split"": ""train"", ""path"": ""as/train-*""}]}, 
{""config_name"": ""ast"", ""data_files"": [{""split"": ""train"", ""path"": ""ast/train-*""}]}, {""config_name"": ""av"", ""data_files"": [{""split"": ""train"", ""path"": ""av/train-*""}]}, {""config_name"": ""az"", ""data_files"": [{""split"": ""train"", ""path"": ""az/train-*""}]}, {""config_name"": ""azb"", ""data_files"": [{""split"": ""train"", ""path"": ""azb/train-*""}]}, {""config_name"": ""ba"", ""data_files"": [{""split"": ""train"", ""path"": ""ba/train-*""}]}, {""config_name"": ""be"", ""data_files"": [{""split"": ""train"", ""path"": ""be/train-*""}]}, {""config_name"": ""bg"", ""data_files"": [{""split"": ""train"", ""path"": ""bg/train-*""}]}, {""config_name"": ""bh"", ""data_files"": [{""split"": ""train"", ""path"": ""bh/train-*""}]}, {""config_name"": ""bn"", ""data_files"": [{""split"": ""train"", ""path"": ""bn/train-*""}]}, {""config_name"": ""bo"", ""data_files"": [{""split"": ""train"", ""path"": ""bo/train-*""}]}, {""config_name"": ""bpy"", ""data_files"": [{""split"": ""train"", ""path"": ""bpy/train-*""}]}, {""config_name"": ""br"", ""data_files"": [{""split"": ""train"", ""path"": ""br/train-*""}]}, {""config_name"": ""bs"", ""data_files"": [{""split"": ""train"", ""path"": ""bs/train-*""}]}, {""config_name"": ""bxr"", ""data_files"": [{""split"": ""train"", ""path"": ""bxr/train-*""}]}, {""config_name"": ""ca"", ""data_files"": [{""split"": ""train"", ""path"": ""ca/train-*""}]}, {""config_name"": ""ce"", ""data_files"": [{""split"": ""train"", ""path"": ""ce/train-*""}]}, {""config_name"": ""ceb"", ""data_files"": [{""split"": ""train"", ""path"": ""ceb/train-*""}]}, {""config_name"": ""ckb"", ""data_files"": [{""split"": ""train"", ""path"": ""ckb/train-*""}]}, {""config_name"": ""cs"", ""data_files"": [{""split"": ""train"", ""path"": ""cs/train-*""}]}, {""config_name"": ""cv"", ""data_files"": [{""split"": ""train"", ""path"": ""cv/train-*""}]}, {""config_name"": ""cy"", ""data_files"": [{""split"": ""train"", ""path"": 
""cy/train-*""}]}, {""config_name"": ""da"", ""data_files"": [{""split"": ""train"", ""path"": ""da/train-*""}]}, {""config_name"": ""de"", ""data_files"": [{""split"": ""train"", ""path"": ""de/train-*""}]}, {""config_name"": ""dsb"", ""data_files"": [{""split"": ""train"", ""path"": ""dsb/train-*""}]}, {""config_name"": ""dv"", ""data_files"": [{""split"": ""train"", ""path"": ""dv/train-*""}]}, {""config_name"": ""el"", ""data_files"": [{""split"": ""train"", ""path"": ""el/train-*""}]}, {""config_name"": ""eml"", ""data_files"": [{""split"": ""train"", ""path"": ""eml/train-*""}]}, {""config_name"": ""en"", ""data_files"": [{""split"": ""train"", ""path"": ""en/train-*""}]}, {""config_name"": ""eo"", ""data_files"": [{""split"": ""train"", ""path"": ""eo/train-*""}]}, {""config_name"": ""es"", ""data_files"": [{""split"": ""train"", ""path"": ""es/train-*""}]}, {""config_name"": ""et"", ""data_files"": [{""split"": ""train"", ""path"": ""et/train-*""}]}, {""config_name"": ""eu"", ""data_files"": [{""split"": ""train"", ""path"": ""eu/train-*""}]}, {""config_name"": ""fa"", ""data_files"": [{""split"": ""train"", ""path"": ""fa/train-*""}]}, {""config_name"": ""fi"", ""data_files"": [{""split"": ""train"", ""path"": ""fi/train-*""}]}, {""config_name"": ""fr"", ""data_files"": [{""split"": ""train"", ""path"": ""fr/train-*""}]}, {""config_name"": ""fy"", ""data_files"": [{""split"": ""train"", ""path"": ""fy/train-*""}]}, {""config_name"": ""ga"", ""data_files"": [{""split"": ""train"", ""path"": ""ga/train-*""}]}, {""config_name"": ""gd"", ""data_files"": [{""split"": ""train"", ""path"": ""gd/train-*""}]}, {""config_name"": ""gl"", ""data_files"": [{""split"": ""train"", ""path"": ""gl/train-*""}]}, {""config_name"": ""gn"", ""data_files"": [{""split"": ""train"", ""path"": ""gn/train-*""}]}, {""config_name"": ""gom"", ""data_files"": [{""split"": ""train"", ""path"": ""gom/train-*""}]}, {""config_name"": ""gu"", ""data_files"": [{""split"": ""train"", 
""path"": ""gu/train-*""}]}, {""config_name"": ""he"", ""data_files"": [{""split"": ""train"", ""path"": ""he/train-*""}]}, {""config_name"": ""hi"", ""data_files"": [{""split"": ""train"", ""path"": ""hi/train-*""}]}, {""config_name"": ""hr"", ""data_files"": [{""split"": ""train"", ""path"": ""hr/train-*""}]}, {""config_name"": ""hsb"", ""data_files"": [{""split"": ""train"", ""path"": ""hsb/train-*""}]}, {""config_name"": ""hu"", ""data_files"": [{""split"": ""train"", ""path"": ""hu/train-*""}]}, {""config_name"": ""hy"", ""data_files"": [{""split"": ""train"", ""path"": ""hy/train-*""}]}, {""config_name"": ""ia"", ""data_files"": [{""split"": ""train"", ""path"": ""ia/train-*""}]}, {""config_name"": ""id"", ""data_files"": [{""split"": ""train"", ""path"": ""id/train-*""}]}, {""config_name"": ""ilo"", ""data_files"": [{""split"": ""train"", ""path"": ""ilo/train-*""}]}, {""config_name"": ""io"", ""data_files"": [{""split"": ""train"", ""path"": ""io/train-*""}]}, {""config_name"": ""is"", ""data_files"": [{""split"": ""train"", ""path"": ""is/train-*""}]}, {""config_name"": ""it"", ""data_files"": [{""split"": ""train"", ""path"": ""it/train-*""}]}, {""config_name"": ""ja"", ""data_files"": [{""split"": ""train"", ""path"": ""ja/train-*""}]}, {""config_name"": ""jbo"", ""data_files"": [{""split"": ""train"", ""path"": ""jbo/train-*""}]}, {""config_name"": ""jv"", ""data_files"": [{""split"": ""train"", ""path"": ""jv/train-*""}]}, {""config_name"": ""ka"", ""data_files"": [{""split"": ""train"", ""path"": ""ka/train-*""}]}, {""config_name"": ""kk"", ""data_files"": [{""split"": ""train"", ""path"": ""kk/train-*""}]}, {""config_name"": ""km"", ""data_files"": [{""split"": ""train"", ""path"": ""km/train-*""}]}, {""config_name"": ""kn"", ""data_files"": [{""split"": ""train"", ""path"": ""kn/train-*""}]}, {""config_name"": ""ko"", ""data_files"": [{""split"": ""train"", ""path"": ""ko/train-*""}]}, {""config_name"": ""krc"", ""data_files"": [{""split"": 
""train"", ""path"": ""krc/train-*""}]}, {""config_name"": ""ku"", ""data_files"": [{""split"": ""train"", ""path"": ""ku/train-*""}]}, {""config_name"": ""kv"", ""data_files"": [{""split"": ""train"", ""path"": ""kv/train-*""}]}, {""config_name"": ""kw"", ""data_files"": [{""split"": ""train"", ""path"": ""kw/train-*""}]}, {""config_name"": ""ky"", ""data_files"": [{""split"": ""train"", ""path"": ""ky/train-*""}]}, {""config_name"": ""la"", ""data_files"": [{""split"": ""train"", ""path"": ""la/train-*""}]}, {""config_name"": ""lb"", ""data_files"": [{""split"": ""train"", ""path"": ""lb/train-*""}]}, {""config_name"": ""lez"", ""data_files"": [{""split"": ""train"", ""path"": ""lez/train-*""}]}, {""config_name"": ""li"", ""data_files"": [{""split"": ""train"", ""path"": ""li/train-*""}]}, {""config_name"": ""lmo"", ""data_files"": [{""split"": ""train"", ""path"": ""lmo/train-*""}]}, {""config_name"": ""lo"", ""data_files"": [{""split"": ""train"", ""path"": ""lo/train-*""}]}, {""config_name"": ""lt"", ""data_files"": [{""split"": ""train"", ""path"": ""lt/train-*""}]}, {""config_name"": ""lv"", ""data_files"": [{""split"": ""train"", ""path"": ""lv/train-*""}]}, {""config_name"": ""mai"", ""data_files"": [{""split"": ""train"", ""path"": ""mai/train-*""}]}, {""config_name"": ""mg"", ""data_files"": [{""split"": ""train"", ""path"": ""mg/train-*""}]}, {""config_name"": ""mhr"", ""data_files"": [{""split"": ""train"", ""path"": ""mhr/train-*""}]}, {""config_name"": ""min"", ""data_files"": [{""split"": ""train"", ""path"": ""min/train-*""}]}, {""config_name"": ""mk"", ""data_files"": [{""split"": ""train"", ""path"": ""mk/train-*""}]}, {""config_name"": ""ml"", ""data_files"": [{""split"": ""train"", ""path"": ""ml/train-*""}]}, {""config_name"": ""mn"", ""data_files"": [{""split"": ""train"", ""path"": ""mn/train-*""}]}, {""config_name"": ""mr"", ""data_files"": [{""split"": ""train"", ""path"": ""mr/train-*""}]}, {""config_name"": ""mrj"", ""data_files"": 
[{""split"": ""train"", ""path"": ""mrj/train-*""}]}, {""config_name"": ""ms"", ""data_files"": [{""split"": ""train"", ""path"": ""ms/train-*""}]}, {""config_name"": ""mt"", ""data_files"": [{""split"": ""train"", ""path"": ""mt/train-*""}]}, {""config_name"": ""my"", ""data_files"": [{""split"": ""train"", ""path"": ""my/train-*""}]}, {""config_name"": ""mzn"", ""data_files"": [{""split"": ""train"", ""path"": ""mzn/train-*""}]}, {""config_name"": ""nah"", ""data_files"": [{""split"": ""train"", ""path"": ""nah/train-*""}]}, {""config_name"": ""nds"", ""data_files"": [{""split"": ""train"", ""path"": ""nds/train-*""}]}, {""config_name"": ""ne"", ""data_files"": [{""split"": ""train"", ""path"": ""ne/train-*""}]}, {""config_name"": ""new"", ""data_files"": [{""split"": ""train"", ""path"": ""new/train-*""}]}, {""config_name"": ""nl"", ""data_files"": [{""split"": ""train"", ""path"": ""nl/train-*""}]}, {""config_name"": ""nn"", ""data_files"": [{""split"": ""train"", ""path"": ""nn/train-*""}]}, {""config_name"": ""no"", ""data_files"": [{""split"": ""train"", ""path"": ""no/train-*""}]}, {""config_name"": ""oc"", ""data_files"": [{""split"": ""train"", ""path"": ""oc/train-*""}]}, {""config_name"": ""or"", ""data_files"": [{""split"": ""train"", ""path"": ""or/train-*""}]}, {""config_name"": ""os"", ""data_files"": [{""split"": ""train"", ""path"": ""os/train-*""}]}, {""config_name"": ""pa"", ""data_files"": [{""split"": ""train"", ""path"": ""pa/train-*""}]}, {""config_name"": ""pl"", ""data_files"": [{""split"": ""train"", ""path"": ""pl/train-*""}]}, {""config_name"": ""pms"", ""data_files"": [{""split"": ""train"", ""path"": ""pms/train-*""}]}, {""config_name"": ""pnb"", ""data_files"": [{""split"": ""train"", ""path"": ""pnb/train-*""}]}, {""config_name"": ""ps"", ""data_files"": [{""split"": ""train"", ""path"": ""ps/train-*""}]}, {""config_name"": ""pt"", ""data_files"": [{""split"": ""train"", ""path"": ""pt/train-*""}]}, {""config_name"": ""qu"", 
""data_files"": [{""split"": ""train"", ""path"": ""qu/train-*""}]}, {""config_name"": ""ro"", ""data_files"": [{""split"": ""train"", ""path"": ""ro/train-*""}]}, {""config_name"": ""ru"", ""data_files"": [{""split"": ""train"", ""path"": ""ru/train-*""}]}, {""config_name"": ""sa"", ""data_files"": [{""split"": ""train"", ""path"": ""sa/train-*""}]}, {""config_name"": ""sah"", ""data_files"": [{""split"": ""train"", ""path"": ""sah/train-*""}]}, {""config_name"": ""sd"", ""data_files"": [{""split"": ""train"", ""path"": ""sd/train-*""}]}, {""config_name"": ""sh"", ""data_files"": [{""split"": ""train"", ""path"": ""sh/train-*""}]}, {""config_name"": ""si"", ""data_files"": [{""split"": ""train"", ""path"": ""si/train-*""}]}, {""config_name"": ""sk"", ""data_files"": [{""split"": ""train"", ""path"": ""sk/train-*""}]}, {""config_name"": ""sl"", ""data_files"": [{""split"": ""train"", ""path"": ""sl/train-*""}]}, {""config_name"": ""sq"", ""data_files"": [{""split"": ""train"", ""path"": ""sq/train-*""}]}, {""config_name"": ""sr"", ""data_files"": [{""split"": ""train"", ""path"": ""sr/train-*""}]}, {""config_name"": ""su"", ""data_files"": [{""split"": ""train"", ""path"": ""su/train-*""}]}, {""config_name"": ""sv"", ""data_files"": [{""split"": ""train"", ""path"": ""sv/train-*""}]}, {""config_name"": ""sw"", ""data_files"": [{""split"": ""train"", ""path"": ""sw/train-*""}]}, {""config_name"": ""ta"", ""data_files"": [{""split"": ""train"", ""path"": ""ta/train-*""}]}, {""config_name"": ""te"", ""data_files"": [{""split"": ""train"", ""path"": ""te/train-*""}]}, {""config_name"": ""tg"", ""data_files"": [{""split"": ""train"", ""path"": ""tg/train-*""}]}, {""config_name"": ""th"", ""data_files"": [{""split"": ""train"", ""path"": ""th/train-*""}]}, {""config_name"": ""tk"", ""data_files"": [{""split"": ""train"", ""path"": ""tk/train-*""}]}, {""config_name"": ""tl"", ""data_files"": [{""split"": ""train"", ""path"": ""tl/train-*""}]}, {""config_name"": ""tr"", 
""data_files"": [{""split"": ""train"", ""path"": ""tr/train-*""}]}, {""config_name"": ""tt"", ""data_files"": [{""split"": ""train"", ""path"": ""tt/train-*""}]}, {""config_name"": ""ug"", ""data_files"": [{""split"": ""train"", ""path"": ""ug/train-*""}]}, {""config_name"": ""uk"", ""data_files"": [{""split"": ""train"", ""path"": ""uk/train-*""}]}, {""config_name"": ""ur"", ""data_files"": [{""split"": ""train"", ""path"": ""ur/train-*""}]}, {""config_name"": ""uz"", ""data_files"": [{""split"": ""train"", ""path"": ""uz/train-*""}]}, {""config_name"": ""vec"", ""data_files"": [{""split"": ""train"", ""path"": ""vec/train-*""}]}, {""config_name"": ""vi"", ""data_files"": [{""split"": ""train"", ""path"": ""vi/train-*""}]}, {""config_name"": ""vo"", ""data_files"": [{""split"": ""train"", ""path"": ""vo/train-*""}]}, {""config_name"": ""wa"", ""data_files"": [{""split"": ""train"", ""path"": ""wa/train-*""}]}, {""config_name"": ""war"", ""data_files"": [{""split"": ""train"", ""path"": ""war/train-*""}]}, {""config_name"": ""wuu"", ""data_files"": [{""split"": ""train"", ""path"": ""wuu/train-*""}]}, {""config_name"": ""xal"", ""data_files"": [{""split"": ""train"", ""path"": ""xal/train-*""}]}, {""config_name"": ""xmf"", ""data_files"": [{""split"": ""train"", ""path"": ""xmf/train-*""}]}, {""config_name"": ""yi"", ""data_files"": [{""split"": ""train"", ""path"": ""yi/train-*""}]}, {""config_name"": ""yo"", ""data_files"": [{""split"": ""train"", ""path"": ""yo/train-*""}]}, {""config_name"": ""zh"", ""data_files"": [{""split"": ""train"", ""path"": ""zh/train-*""}]}], ""source_datasets"": [""uonlp/CulturaX""], ""task_categories"": [""text-generation"", ""fill-mask""], ""task_ids"": [""language-modeling"", ""masked-language-modeling""], ""multilinguality"": [""multilingual""], ""language"": [""af"", ""als"", ""am"", ""an"", ""ar"", ""arz"", ""as"", ""ast"", ""av"", ""az"", ""azb"", ""ba"", ""be"", ""bg"", ""bh"", ""bn"", ""bo"", ""bpy"", ""br"", ""bs"", 
""bxr"", ""ca"", ""ce"", ""ceb"", ""ckb"", ""cs"", ""cv"", ""cy"", ""da"", ""de"", ""dsb"", ""dv"", ""el"", ""eml"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fr"", ""fy"", ""ga"", ""gd"", ""gl"", ""gn"", ""gom"", ""gu"", ""he"", ""hi"", ""hr"", ""hsb"", ""hu"", ""hy"", ""ia"", ""id"", ""ilo"", ""io"", ""is"", ""it"", ""ja"", ""jbo"", ""jv"", ""ka"", ""kk"", ""km"", ""kn"", ""ko"", ""krc"", ""ku"", ""kv"", ""kw"", ""ky"", ""la"", ""lb"", ""lez"", ""li"", ""lmo"", ""lo"", ""lt"", ""lv"", ""mai"", ""mg"", ""mhr"", ""min"", ""mk"", ""ml"", ""mn"", ""mr"", ""mrj"", ""ms"", ""mt"", ""my"", ""mzn"", ""nah"", ""nds"", ""ne"", ""new"", ""nl"", ""nn"", ""no"", ""oc"", ""or"", ""os"", ""pa"", ""pl"", ""pms"", ""pnb"", ""ps"", ""pt"", ""qu"", ""ro"", ""ru"", ""sa"", ""sah"", ""sd"", ""sh"", ""si"", ""sk"", ""sl"", ""sq"", ""sr"", ""su"", ""sv"", ""sw"", ""ta"", ""te"", ""tg"", ""th"", ""tk"", ""tl"", ""tr"", ""tt"", ""ug"", ""uk"", ""ur"", ""uz"", ""vec"", ""vi"", ""vo"", ""wa"", ""war"", ""wuu"", ""xal"", ""xmf"", ""yi"", ""yo"", ""zh""]}","This repo contains 1% of each language of uonlp/CulturaX.
```python
from datasets import load_dataset

load_dataset('devngho/culturax-mini-nonshuffled', '[lang]', split='train')  # read the specified language
load_dataset('devngho/culturax-mini-nonshuffled', data_files=""*/*"", split='train')  # read all languages
```"
jp1924/KsponSpeech,{},"---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: id
dtype: string
splits:
- name: dev
num_bytes: 453996265.875
num_examples: 2545
- name: eval_clean
num_bytes: 304987608
num_examples: 3000
- name: eval_other
num_bytes: 438544274
num_examples: 3000
- name: train
num_bytes: 111286133042
num_examples: 620000
download_size: 105060754027
dataset_size: 112483661189.875
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: eval_clean
path: data/eval_clean-*
- split: eval_other
path: data/eval_other-*
- split: train
path: data/train-*
task_categories:
- automatic-speech-recognition
language:
- ko
tags:
- STT
- Audio
size_categories:
- 100B
|              |  KO  |  JA  |  IT  |  RU  |  DE  |  FR  |  TH |  AR  |  VI  | Total |
|--------------|------|------|------|------|------|------|-----|------|------|-------|
| Train Images |  580 | 1039 |  622 |  635 |  984 |  792 | 319 |  568 | 1139 |  6678 |
| Test Images  |  250 |  250 |  250 |  250 |  250 |  250 | 116 |  250 |  250 |  2116 |
| Train QA     | 1280 | 3332 | 2168 | 1835 | 4238 | 2743 | 625 | 1597 | 4011 | 21829 |
| Test QA      |  558 |  828 |  884 |  756 | 1048 |  886 | 231 |  703 |  884 |  6778 |
## - Leaderboard
| Models | AR | DE | FR | IT | JA | KO | RU | TH | VI | Average |
|--------|----|----|----|----|----|----|----|----|----|---------|
| GPT-4O | 20.2 | 34.2 | 41.2 | 32.7 | 20.0 | 33.9 | 11.5 | 22.5 | 34.2 | 27.8 |
| Claude3 Opus | 15.1 | 33.4 | 40.6 | 34.4 | 19.4 | 27.2 | 13.0 | 19.5 | 29.1 | 25.7 |
| Gemini Ultra | 14.7 | 32.3 | 40.0 | 31.8 | 12.3 | 17.2 | 11.8 | 20.3 | 28.6 | 23.2 |
| GPT-4V | 11.5 | 31.5 | 40.4 | 32.3 | 11.5 | 16.7 | 10.3 | 15.0 | 28.9 | 22.0 |
| QwenVL Max | 7.7 | 31.4 | 37.6 | 30.2 | 18.6 | 25.4 | 10.4 | 4.8 | 23.5 | 21.1 |
| Claude3 Sonnet | 10.5 | 28.9 | 35.6 | 31.8 | 13.9 | 22.2 | 11.0 | 15.2 | 20.8 | 21.1 |
| QwenVL Plus | 4.8 | 28.8 | 33.7 | 27.1 | 12.8 | 19.9 | 9.4 | 5.6 | 18.1 | 17.8 |
| MiniCPM-Llama3-V-2_5 | 6.1 | 29.6 | 35.7 | 26.0 | 12.1 | 13.1 | 5.7 | 12.6 | 15.3 | 17.3 |
| InternVL-V1.5 | 3.4 | 27.1 | 31.4 | 27.1 | 9.9 | 9.0 | 4.9 | 8.7 | 12.4 | 14.9 |
| GLM4V | 0.3 | 30.0 | 34.1 | 30.1 | 3.4 | 5.7 | 3.0 | 3.5 | 12.3 | 13.6 |
| TextSquare | 3.7 | 27.0 | 30.8 | 26.7 | 3.2 | 7.2 | 6.7 | 5.2 | 12.4 | 13.6 |
| Mini-Gemini-HD-34B | 2.2 | 25.0 | 29.2 | 25.5 | 6.1 | 8.6 | 4.1 | 4.3 | 11.8 | 13.0 |
| InternLM-Xcomposer2-4KHD | 2.0 | 20.6 | 23.2 | 21.6 | 5.6 | 7.7 | 4.1 | 6.1 | 10.1 | 11.2 |
| Llava-Next-34B | 3.3 | 24.0 | 28.0 | 22.3 | 3.6 | 6.1 | 2.6 | 0.4 | 9.8 | 11.1 |
| TextMonkey | 2.0 | 18.1 | 19.9 | 22.1 | 4.6 | 7.2 | 3.2 | 0.9 | 11.1 | 9.9 |
| MiniCPM-V-2 | 1.3 | 12.7 | 14.9 | 17.0 | 3.7 | 5.6 | 2.2 | 2.2 | 6.8 | 7.4 |
| mPLUG-DocOwl 1.5 | 1.0 | 13.9 | 14.9 | 18.2 | 2.9 | 5.0 | 2.0 | 0.9 | 6.4 | 7.2 |
| YI-VL-34B | 1.7 | 13.5 | 15.7 | 12.1 | 4.8 | 5.2 | 0.8 | 3.5 | 4.1 | 6.8 |
| DeepSeek-VL | 0.6 | 14.2 | 15.3 | 15.2 | 2.9 | 3.8 | 1.6 | 0.9 | 5.2 | 6.6 |
## - Direct usage
The data is designed to evaluate and enhance the multilingual text-centric VQA capabilities of multimodal models, in the hope of facilitating the understanding of multilingual images and enabling AI to reach more people around the world.
### -- Huggingface dataloader
```python
from datasets import load_dataset
dataset = load_dataset(""ByteDance/MTVQA"")
```
## - Out-of-Scope usage
Academic use only; commercial usage is not supported.
## - Ethics Assessment
Both GPT-4V and manual assessment are employed to filter out unethical question-answer pairs.
## - Bias, Risks, and Limitations
Your access to and use of this dataset are at your own risk. We do not guarantee the accuracy of this dataset. The dataset is provided “as is” and we make no warranty or representation to you with respect to it and we expressly disclaim, and hereby expressly waive, all warranties, express, implied, statutory or otherwise. This includes, without limitation, warranties of quality, performance, merchantability or fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. In no event will we be liable to you on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this public license or use of the licensed material. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
## - Citation
```
@misc{tang2024mtvqa,
title={MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering},
author={Jingqun Tang and Qi Liu and Yongjie Ye and Jinghui Lu and Shu Wei and Chunhui Lin and Wanqing Li and Mohamad Fitri Faiz Bin Mahmood and Hao Feng and Zhen Zhao and Yanjie Wang and Yuliang Liu and Hao Liu and Xiang Bai and Can Huang},
year={2024},
eprint={2405.11985},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```"
Bingsu/ko_alpaca_data,"{""dataset_info"": {""features"": [{""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""input"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 13791136, ""num_examples"": 49620}], ""download_size"": 8491044, ""dataset_size"": 13791136}, ""license"": ""cc-by-nc-4.0"", ""language"": [""ko""], ""pretty_name"": ""ko-alpaca-data"", ""size_categories"": [""10K
huggingface: [beomi/KoAlpaca](https://huggingface.co/beomi/KoAlpaca)
1. Translate dataset
We translated 'instruction' and 'input' in the dataset via the DeepL API, but did not translate 'output', because it is the output of OpenAI's `text-davinci-003` model.
2. Generate output data
Then, using the translated instruction and input, we generated output data via the OpenAI ChatGPT API (gpt-3.5-turbo).
Below is the prompt we used to generate the answers.
```python
PROMPT = """"""\
다양한 작업에 대한 답변을 생성해주세요. 이러한 작업 지침은 ChatGPT 모델에 주어지며, ChatGPT 모델이 지침을 완료하는지 평가합니다.
요구 사항은 다음과 같습니다:
1. 다양성을 극대화하기 위해 각 지시에 대해 동사를 반복하지 않도록 하세요.
2. 지시에 사용되는 언어도 다양해야 합니다. 예를 들어, 질문과 명령형 지시를 결합해야 합니다.
3. 지시 사항의 유형이 다양해야 합니다. 목록에는 개방형 생성, 분류, 편집 등과 같은 다양한 유형의 작업이 포함되어야 합니다.
2. GPT 언어 모델은 지시를 완료할 수 있어야 합니다. 예를 들어 어시스턴트에게 시각적 또는 오디오 출력을 생성하도록 요청하지 마세요. 또 다른 예로, 어시스턴트가 어떤 작업도 수행할 수 없으므로 오후 5시에 깨우거나 미리 알림을 설정하도록 요청하지 마세요.
3. 답변은 한국어로 작성해야 합니다.
4. 답변을 1~2문장으로 작성하세요. 명령문이나 질문도 허용됩니다.
5. 지시 사항에 대한 적절한 입력을 생성해야 합니다. 입력 필드에는 지시에 대한 구체적인 예가 포함되어야 합니다. 실제 데이터를 포함해야 하며 단순한 자리 표시자를 포함해서는 안 됩니다. 입력은 지시 사항을 어렵게 만들 수 있는 상당한 내용을 제공해야 하지만 100단어를 넘지 않는 것이 이상적입니다.
6. 일부 지시사항은 추가 입력이 있고, 일부 지시에는 입력 필드가 비어있습니다. 예를 들어 ""세계에서 가장 높은 봉우리는 무엇인가?""라는 일반적인 정보를 묻는 지시의 경우 구체적인 맥락을 제공할 필요가 없어, 입력 필드가 비어있을 수 있습니다.
7. 출력은 명령어와 입력에 대한 적절한 응답이어야 합니다.
아래에 10개의 명령어와 입력(옵션)에 따라 적절한 응답을 생성하세요.
응답은 아래와 같은 형식으로 10가지를 0번 부터 9번 까지, 번호에 따라 해당 번호의 명령어와 입력에 알맞게 작성하세요.
각 응답 사이는 ### 으로 내용을 분리해주세요.
응답0: 첫 번째 응답내용###
응답1: 두 번째 응답내용###
...
응답9: 마지막 응답내용""""""
```
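The batching scheme described above (ten numbered instruction/input pairs in, a `###`-separated list of numbered answers out) is not shown in code; below is a minimal, hypothetical sketch of the two helpers such a pipeline would need. The `명령어N`/`입력N` labels and helper names are assumptions, not the actual script.

```python
import re

def build_batch_message(pairs):
    """Format (instruction, input) pairs into a numbered list for the PROMPT above.
    The '명령어N:'/'입력N:' labels are illustrative, not the exact labels used."""
    lines = []
    for i, (instruction, inp) in enumerate(pairs):
        lines.append(f"명령어{i}: {instruction}")
        lines.append(f"입력{i}: {inp if inp else '(없음)'}")
    return "\n".join(lines)

def parse_batch_response(text):
    """Split a '###'-separated response ('응답0: ...###응답1: ...###') into outputs."""
    outputs = []
    for chunk in text.split("###"):
        chunk = chunk.strip()
        if not chunk:
            continue
        # Drop the leading '응답N:' label the prompt asks the model to emit.
        chunk = re.sub(r"^응답\d+\s*:\s*", "", chunk)
        outputs.append(chunk)
    return outputs
```

The actual generation step would send `build_batch_message(...)` as the user message alongside the PROMPT, then run the model's reply through `parse_batch_response`.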
### License
CC-BY-NC-4.0
### Data Splits
| | train |
| --------- | -------- |
| # of data | 49620 |
\# Note that the number is not the same as in the original data (52,002)
```python
>>> from datasets import load_dataset
>>> ds = load_dataset(""Bingsu/ko_alpaca_data"", split=""train"")
>>> ds
Dataset({
features: ['instruction', 'input', 'output'],
num_rows: 49620
})
```
```python
>>> ds[0]
{'instruction': '건강을 유지하기 위한 세 가지 팁을 알려주세요.',
'input': '',
'output': '세 가지 팁은 아침식사를 꼭 챙기며, 충분한 수면을 취하고, 적극적으로 운동을 하는 것입니다.'}
```"
haoranxu/X-ALMA-Parallel-Data,"{""dataset_info"": [{""config_name"": ""af-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""af"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 803353, ""num_examples"": 2994}], ""download_size"": 520887, ""dataset_size"": 803353}, {""config_name"": ""ar-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""ar"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1017470, ""num_examples"": 2994}], ""download_size"": 587244, ""dataset_size"": 1017470}, {""config_name"": ""az-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""az"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 868767, ""num_examples"": 2994}], ""download_size"": 548812, ""dataset_size"": 868767}, {""config_name"": ""bg-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""bg"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1123254, ""num_examples"": 2994}], ""download_size"": 624175, ""dataset_size"": 1123254}, {""config_name"": ""ca-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""ca"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 827496, ""num_examples"": 2994}], ""download_size"": 538392, ""dataset_size"": 827496}, {""config_name"": ""cs-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""cs"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1550880, ""num_examples"": 6479}], ""download_size"": 1044916, ""dataset_size"": 1550880}, {""config_name"": ""da-en"", ""features"": [{""name"": 
""translation"", ""struct"": [{""name"": ""da"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 786316, ""num_examples"": 2994}], ""download_size"": 514286, ""dataset_size"": 786316}, {""config_name"": ""de-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""de"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1694313, ""num_examples"": 7015}], ""download_size"": 1097168, ""dataset_size"": 1694313}, {""config_name"": ""el-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""el"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1211278, ""num_examples"": 2994}], ""download_size"": 672762, ""dataset_size"": 1211278}, {""config_name"": ""es-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""es"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 844431, ""num_examples"": 2994}], ""download_size"": 545686, ""dataset_size"": 844431}, {""config_name"": ""et-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""et"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1269025, ""num_examples"": 4994}], ""download_size"": 844040, ""dataset_size"": 1269025}, {""config_name"": ""fa-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""fa"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1043334, ""num_examples"": 2994}], ""download_size"": 587273, ""dataset_size"": 1043334}, {""config_name"": ""fi-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""fi"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], 
""splits"": [{""name"": ""train"", ""num_bytes"": 1767639, ""num_examples"": 6987}], ""download_size"": 1151622, ""dataset_size"": 1767639}, {""config_name"": ""fr-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""fr"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1161017, ""num_examples"": 4494}], ""download_size"": 755975, ""dataset_size"": 1161017}, {""config_name"": ""gl-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""gl"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 817189, ""num_examples"": 2994}], ""download_size"": 534093, ""dataset_size"": 817189}, {""config_name"": ""gu-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""gu"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2105747, ""num_examples"": 5008}], ""download_size"": 1022173, ""dataset_size"": 2105747}, {""config_name"": ""he-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""he"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 931335, ""num_examples"": 2994}], ""download_size"": 548830, ""dataset_size"": 931335}, {""config_name"": ""hi-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""hi"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1389945, ""num_examples"": 2994}], ""download_size"": 658112, ""dataset_size"": 1389945}, {""config_name"": ""hu-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""hu"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 848293, ""num_examples"": 2994}], ""download_size"": 560248, 
""dataset_size"": 848293}, {""config_name"": ""id-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 821134, ""num_examples"": 2994}], ""download_size"": 514539, ""dataset_size"": 821134}, {""config_name"": ""is-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""is"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1384495, ""num_examples"": 4994}], ""download_size"": 884198, ""dataset_size"": 1384495}, {""config_name"": ""it-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""it"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 838865, ""num_examples"": 2994}], ""download_size"": 543944, ""dataset_size"": 838865}, {""config_name"": ""ja-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""ja"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1715595, ""num_examples"": 7039}], ""download_size"": 1075528, ""dataset_size"": 1715595}, {""config_name"": ""ka-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""ka"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1483680, ""num_examples"": 2994}], ""download_size"": 674194, ""dataset_size"": 1483680}, {""config_name"": ""kk-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""kk"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1790056, ""num_examples"": 4992}], ""download_size"": 978776, ""dataset_size"": 1790056}, {""config_name"": ""ko-en"", ""features"": [{""name"": ""translation"", ""struct"": 
[{""name"": ""ko"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 864688, ""num_examples"": 2994}], ""download_size"": 551253, ""dataset_size"": 864688}, {""config_name"": ""ky-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""ky"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1093521, ""num_examples"": 2994}], ""download_size"": 611728, ""dataset_size"": 1093521}, {""config_name"": ""lt-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""lt"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1360363, ""num_examples"": 4992}], ""download_size"": 892348, ""dataset_size"": 1360363}, {""config_name"": ""lv-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""lv"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1364674, ""num_examples"": 4995}], ""download_size"": 892646, ""dataset_size"": 1364674}, {""config_name"": ""mg-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""mg"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 885246, ""num_examples"": 2994}], ""download_size"": 534161, ""dataset_size"": 885246}, {""config_name"": ""mk-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""mk"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1122169, ""num_examples"": 2994}], ""download_size"": 613172, ""dataset_size"": 1122169}, {""config_name"": ""mr-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""mr"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": 
""train"", ""num_bytes"": 1430599, ""num_examples"": 2994}], ""download_size"": 679329, ""dataset_size"": 1430599}, {""config_name"": ""ms-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""ms"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 816233, ""num_examples"": 2994}], ""download_size"": 509468, ""dataset_size"": 816233}, {""config_name"": ""ne-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""ne"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1375637, ""num_examples"": 2994}], ""download_size"": 660276, ""dataset_size"": 1375637}, {""config_name"": ""nl-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""nl"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 829405, ""num_examples"": 2994}], ""download_size"": 535906, ""dataset_size"": 829405}, {""config_name"": ""no-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""no"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 784676, ""num_examples"": 2994}], ""download_size"": 515255, ""dataset_size"": 784676}, {""config_name"": ""pl-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""pl"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1378794, ""num_examples"": 4995}], ""download_size"": 910980, ""dataset_size"": 1378794}, {""config_name"": ""pt-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""pt"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 821085, ""num_examples"": 2994}], ""download_size"": 535046, ""dataset_size"": 821085}, 
{""config_name"": ""ro-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""ro"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1395115, ""num_examples"": 4993}], ""download_size"": 893942, ""dataset_size"": 1395115}, {""config_name"": ""ru-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""ru"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2328746, ""num_examples"": 7047}], ""download_size"": 1322470, ""dataset_size"": 2328746}, {""config_name"": ""sr-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""sr"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1063210, ""num_examples"": 2994}], ""download_size"": 610495, ""dataset_size"": 1063210}, {""config_name"": ""sv-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""sv"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 791631, ""num_examples"": 2994}], ""download_size"": 517584, ""dataset_size"": 791631}, {""config_name"": ""th-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""th"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1398077, ""num_examples"": 2994}], ""download_size"": 676121, ""dataset_size"": 1398077}, {""config_name"": ""tr-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""tr"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1596930, ""num_examples"": 5994}], ""download_size"": 1029093, ""dataset_size"": 1596930}, {""config_name"": ""uk-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""uk"", ""dtype"": 
""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2152390, ""num_examples"": 7049}], ""download_size"": 1233350, ""dataset_size"": 2152390}, {""config_name"": ""ur-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""ur"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1060022, ""num_examples"": 2994}], ""download_size"": 596439, ""dataset_size"": 1060022}, {""config_name"": ""uz-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""uz"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 842293, ""num_examples"": 2994}], ""download_size"": 537748, ""dataset_size"": 842293}, {""config_name"": ""vi-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""vi"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 927691, ""num_examples"": 2994}], ""download_size"": 552852, ""dataset_size"": 927691}, {""config_name"": ""zh-en"", ""features"": [{""name"": ""translation"", ""struct"": [{""name"": ""zh"", ""dtype"": ""string""}, {""name"": ""en"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1661642, ""num_examples"": 6906}], ""download_size"": 1107090, ""dataset_size"": 1661642}], ""configs"": [{""config_name"": ""af-en"", ""data_files"": [{""split"": ""train"", ""path"": ""af-en/train-*""}]}, {""config_name"": ""ar-en"", ""data_files"": [{""split"": ""train"", ""path"": ""ar-en/train-*""}]}, {""config_name"": ""az-en"", ""data_files"": [{""split"": ""train"", ""path"": ""az-en/train-*""}]}, {""config_name"": ""bg-en"", ""data_files"": [{""split"": ""train"", ""path"": ""bg-en/train-*""}]}, {""config_name"": ""ca-en"", ""data_files"": [{""split"": ""train"", ""path"": ""ca-en/train-*""}]}, {""config_name"": 
""cs-en"", ""data_files"": [{""split"": ""train"", ""path"": ""cs-en/train-*""}]}, {""config_name"": ""da-en"", ""data_files"": [{""split"": ""train"", ""path"": ""da-en/train-*""}]}, {""config_name"": ""de-en"", ""data_files"": [{""split"": ""train"", ""path"": ""de-en/train-*""}]}, {""config_name"": ""el-en"", ""data_files"": [{""split"": ""train"", ""path"": ""el-en/train-*""}]}, {""config_name"": ""es-en"", ""data_files"": [{""split"": ""train"", ""path"": ""es-en/train-*""}]}, {""config_name"": ""et-en"", ""data_files"": [{""split"": ""train"", ""path"": ""et-en/train-*""}]}, {""config_name"": ""fa-en"", ""data_files"": [{""split"": ""train"", ""path"": ""fa-en/train-*""}]}, {""config_name"": ""fi-en"", ""data_files"": [{""split"": ""train"", ""path"": ""fi-en/train-*""}]}, {""config_name"": ""fr-en"", ""data_files"": [{""split"": ""train"", ""path"": ""fr-en/train-*""}]}, {""config_name"": ""gl-en"", ""data_files"": [{""split"": ""train"", ""path"": ""gl-en/train-*""}]}, {""config_name"": ""gu-en"", ""data_files"": [{""split"": ""train"", ""path"": ""gu-en/train-*""}]}, {""config_name"": ""he-en"", ""data_files"": [{""split"": ""train"", ""path"": ""he-en/train-*""}]}, {""config_name"": ""hi-en"", ""data_files"": [{""split"": ""train"", ""path"": ""hi-en/train-*""}]}, {""config_name"": ""hu-en"", ""data_files"": [{""split"": ""train"", ""path"": ""hu-en/train-*""}]}, {""config_name"": ""id-en"", ""data_files"": [{""split"": ""train"", ""path"": ""id-en/train-*""}]}, {""config_name"": ""is-en"", ""data_files"": [{""split"": ""train"", ""path"": ""is-en/train-*""}]}, {""config_name"": ""it-en"", ""data_files"": [{""split"": ""train"", ""path"": ""it-en/train-*""}]}, {""config_name"": ""ja-en"", ""data_files"": [{""split"": ""train"", ""path"": ""ja-en/train-*""}]}, {""config_name"": ""ka-en"", ""data_files"": [{""split"": ""train"", ""path"": ""ka-en/train-*""}]}, {""config_name"": ""kk-en"", ""data_files"": [{""split"": ""train"", ""path"": 
""kk-en/train-*""}]}, {""config_name"": ""ko-en"", ""data_files"": [{""split"": ""train"", ""path"": ""ko-en/train-*""}]}, {""config_name"": ""ky-en"", ""data_files"": [{""split"": ""train"", ""path"": ""ky-en/train-*""}]}, {""config_name"": ""lt-en"", ""data_files"": [{""split"": ""train"", ""path"": ""lt-en/train-*""}]}, {""config_name"": ""lv-en"", ""data_files"": [{""split"": ""train"", ""path"": ""lv-en/train-*""}]}, {""config_name"": ""mg-en"", ""data_files"": [{""split"": ""train"", ""path"": ""mg-en/train-*""}]}, {""config_name"": ""mk-en"", ""data_files"": [{""split"": ""train"", ""path"": ""mk-en/train-*""}]}, {""config_name"": ""mr-en"", ""data_files"": [{""split"": ""train"", ""path"": ""mr-en/train-*""}]}, {""config_name"": ""ms-en"", ""data_files"": [{""split"": ""train"", ""path"": ""ms-en/train-*""}]}, {""config_name"": ""ne-en"", ""data_files"": [{""split"": ""train"", ""path"": ""ne-en/train-*""}]}, {""config_name"": ""nl-en"", ""data_files"": [{""split"": ""train"", ""path"": ""nl-en/train-*""}]}, {""config_name"": ""no-en"", ""data_files"": [{""split"": ""train"", ""path"": ""no-en/train-*""}]}, {""config_name"": ""pl-en"", ""data_files"": [{""split"": ""train"", ""path"": ""pl-en/train-*""}]}, {""config_name"": ""pt-en"", ""data_files"": [{""split"": ""train"", ""path"": ""pt-en/train-*""}]}, {""config_name"": ""ro-en"", ""data_files"": [{""split"": ""train"", ""path"": ""ro-en/train-*""}]}, {""config_name"": ""ru-en"", ""data_files"": [{""split"": ""train"", ""path"": ""ru-en/train-*""}]}, {""config_name"": ""sr-en"", ""data_files"": [{""split"": ""train"", ""path"": ""sr-en/train-*""}]}, {""config_name"": ""sv-en"", ""data_files"": [{""split"": ""train"", ""path"": ""sv-en/train-*""}]}, {""config_name"": ""th-en"", ""data_files"": [{""split"": ""train"", ""path"": ""th-en/train-*""}]}, {""config_name"": ""tr-en"", ""data_files"": [{""split"": ""train"", ""path"": ""tr-en/train-*""}]}, {""config_name"": ""uk-en"", ""data_files"": [{""split"": 
""train"", ""path"": ""uk-en/train-*""}]}, {""config_name"": ""ur-en"", ""data_files"": [{""split"": ""train"", ""path"": ""ur-en/train-*""}]}, {""config_name"": ""uz-en"", ""data_files"": [{""split"": ""train"", ""path"": ""uz-en/train-*""}]}, {""config_name"": ""vi-en"", ""data_files"": [{""split"": ""train"", ""path"": ""vi-en/train-*""}]}, {""config_name"": ""zh-en"", ""data_files"": [{""split"": ""train"", ""path"": ""zh-en/train-*""}]}], ""language"": [""en"", ""da"", ""nl"", ""de"", ""is"", ""no"", ""sc"", ""af"", ""ca"", ""ro"", ""gl"", ""it"", ""pt"", ""es"", ""bg"", ""mk"", ""sr"", ""uk"", ""ru"", ""id"", ""ms"", ""th"", ""vi"", ""mg"", ""fr"", ""hu"", ""el"", ""cs"", ""pl"", ""lt"", ""lv"", ""ka"", ""zh"", ""ja"", ""ko"", ""fi"", ""et"", ""gu"", ""hi"", ""mr"", ""ne"", ""ur"", ""az"", ""kk"", ""ky"", ""tr"", ""uz"", ""ar"", ""he"", ""fa""]}","---
This is the parallel translation dataset used by [X-ALMA](https://arxiv.org/pdf/2410.03115).
```
@misc{xu2024xalmaplugplay,
title={X-ALMA: Plug & Play Modules and Adaptive Rejection for Quality Translation at Scale},
author={Haoran Xu and Kenton Murray and Philipp Koehn and Hieu Hoang and Akiko Eriguchi and Huda Khayrallah},
year={2024},
eprint={2410.03115},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.03115},
}
```"
changpt/ko-lima-vicuna,"{""license"": ""cc-by-2.0"", ""task_categories"": [""text-generation""], ""language"": [""ko""], ""size_categories"": [""n<1K""], ""pretty_name"": ""KoLima(vicuna)"", ""tags"": [""KoLima""]}","# Ko Lima Vicuna Dataset
GPT4 API를 사용하여 [lima_vicuna_format 데이터](https://huggingface.co/datasets/64bits/lima_vicuna_format)를 한국어로 재생성한 데이터셋입니다.
GPT4 사용시 프롬프트는 ""단순 번역이 아닌, 원문 내용을 참조하여 데이터를 재생성하라""는 내용으로 작성하였습니다.
원본 데이터셋 1,030건 중 866건의 데이터가 재생성되었습니다.
*데이터셋 제작은 웹 크롤링 전문 기업, **[해시스크래퍼(HASHSCRAPER)](https://www.hashscraper.com/)** 에서 진행되었습니다.*
*This is the lima_vicuna_format data regenerated into Korean using the GPT4 API.*
*When using GPT4, the prompt was written to read ""Regenerate the data by referring to the original content, not just a simple translation"".*
*Out of the 1,030 original datasets, 866 data points were regenerated.*
*The dataset was created by [HASHSCRAPER](https://www.hashscraper.com/), a company specializing in web scraping.*
---
# How to use
```python
>>> from datasets import load_dataset
>>> data = load_dataset(""changpt/ko-lima-vicuna"", data_files=""ko_lima_vicuna.json"", split=""train"")
>>> data
Dataset({
features: ['id', 'conversations'],
num_rows: 866
})
```
```python
>>> data[0]
{'id': '7-stackexchange',
'conversations': [{'from': 'human',
'value': '나는 ```ffmpeg```로 비디오를 GIF 파일로 변환하고 있어:\n```ffmpeg \\\n -i input.flv \\\n -ss 00:00:00.000 \\\n -pix_fmt rgb24 \\\n -r 10 \\\n -s 320x240 \\\n -t 00:00:10.000 \\\n output.gif\n```\n잘 작동하는데, 출력된 gif 파일의 품질이 너무 낮아. 변환된 gif의 품질을 어떻게 개선할 수 있을까?'},
{'from': 'gpt',
'value': '중간 이미지 파일을 피하고 싶다면, LordNeckBeard가 제공한 명령어를 ```ffmpeg```와 ImageMagick의 ```convert``` 사이에 파이프로 연결할 수 있습니다. 이렇게 하면 중간 파일이 필요하지 않습니다:\n```ffmpeg -i input.flv -vf scale=320:-1 -r 10 -f image2pipe -vcodec ppm - | convert -delay 10 -loop 0 - output.gif\n```\n```-f image2pipe```는 ffmpeg에게 비디오를 이미지로 분할하고 파이프 출력에 적합하게 만들라고 지시하며, ```-vcodec ppm```은 출력 형식을 ppm으로 지정합니다(png 형식인 경우, ```convert```가 파이프에서 모든 이미지를 읽지 않거나 ffmpeg가 모두 출력하지 않는 것 같습니다). 두 명령어의 ```-```는 각각 출력과 입력에 파이프를 사용하도록 지정합니다.\n파일을 저장하지 않고 결과를 최적화하려면, ```convert```의 출력을 두 번째 ```convert``` 명령어로 파이프할 수 있습니다:\n```ffmpeg -i input.flv -vf scale=320:-1 -r 10 -f image2pipe -vcodec ppm - | convert -delay 10 -loop 0 - gif:- | convert -layers Optimize - output.gif\n```\n```gif:-```는 ```convert```에게 출력을 gif 형식의 데이터로 파이프하도록 지시하고, ```-layers Optimize```는 두 번째 ```convert```에게 ```optimize-frame``` 및 ```optimize-transparancy``` 방법을 수행하도록 지시합니다(ImageMagick 애니메이션 최적화 소개 참조). ```-layers Optimize```의 출력이 항상 더 작은 파일 크기를 제공하지는 않으므로, 먼저 최적화 없이 gif로 변환해 보는 것이 좋습니다.\n이 과정에서 모든 것이 메모리에 있으므로 이미지가 매우 큰 경우 충분한 메모리가 필요할 수 있습니다.'}]}
```
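Each record's `conversations` field is a ShareGPT-style list of alternating `human`/`gpt` turns. A minimal sketch (function name is illustrative) of splitting such a list into prompt/response pairs:

```python
def to_pairs(conversations):
    # Pair each human turn with the gpt turn that follows it.
    pairs = []
    for human, gpt in zip(conversations[::2], conversations[1::2]):
        assert human["from"] == "human" and gpt["from"] == "gpt"
        pairs.append((human["value"], gpt["value"]))
    return pairs

sample = [
    {"from": "human", "value": "질문입니다"},
    {"from": "gpt", "value": "답변입니다"},
]
print(to_pairs(sample))  # [('질문입니다', '답변입니다')]
```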
---
# License
[CC BY 2.0 KR](https://creativecommons.org/licenses/by/2.0/kr/)
[Open AI](https://openai.com/policies/terms-of-use)"
CohereForAI/include-base-44,"{""language"": [""sq"", ""ar"", ""hy"", ""az"", ""be"", ""bn"", ""eu"", ""bg"", ""tr"", ""hr"", ""nl"", ""fa"", ""es"", ""et"", ""fi"", ""fr"", ""de"", ""el"", ""ka"", ""he"", ""hi"", ""hu"", ""id"", ""it"", ""ja"", ""kk"", ""ko"", ""lt"", ""ml"", ""ms"", ""ne"", ""pl"", ""pt"", ""ru"", ""ta"", ""tl"", ""te"", ""uk"", ""ur"", ""uz"", ""vi"", ""zh"", ""sr"", ""mk""], ""license"": ""apache-2.0"", ""size_categories"": [""100K
- **Paper**: http://arxiv.org/abs/2411.19799
### Dataset Summary
INCLUDE is a comprehensive knowledge- and reasoning-centric benchmark across **44 languages** that evaluates multilingual LLMs for performance in the actual language environments where they would be deployed.
It contains 22,637 4-option multiple-choice questions (MCQs) extracted from academic and professional exams, covering 57 topics, including regional knowledge.
For a quicker evaluation, you can use [include-lite-44](https://huggingface.co/datasets/CohereForAI/include-lite-44), which is a subset of `include-base-44`, covering the same 44 languages.
### Languages
Albanian, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Chinese, Croatian, Dutch, Estonian, Finnish, French, Georgian, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Kazakh, Korean, Lithuanian, Malay, Malayalam, Nepali, Macedonian, Persian, Polish, Portuguese, Russian, Serbian, Spanish, Tagalog, Tamil, Telugu, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese
### Topics
- **Academic**:
Accounting, Agriculture, Anthropology, Architecture and Design, Arts & Humanities, Biology, Business administration, Business ethics, Business, Chemistry, Computer Science, Culturology, Earth science, Economics, Education, Engineering, Environmental studies and forestry, Family and consumer science, Finance, Geography, Health, History, Human physical performance and recreation, Industrial and labor relations, International trade, Journalism, media studies, and communication, Language, Law, Library and museum studies, Literature, Logic, Management, Marketing, Math, Medicine, Military Sciences, Multiple exams, Performing arts, Philosophy, Physics, Political sciences, Psychology, Public Administration, Public Policy, Qualimetry, Religious studies, Risk management and insurance, Social Work, Social work, Sociology, STEM, Transportation, Visual Arts
- **Licenses**:
Driving License, Marine License, Medical License, Professional Certifications
### Data schema
An example French Law question looks as follows:
```
{
""language"": ""French"",
""country"": ""France"",
""level"": ""Academic"",
""domain"": ""Arts & Humanities"",
""subject"": ""Law"",
""regional_feature"": ""region explicit"",
""question"": ""Que permet l'article 49-3 de la Constitution ?"",
""choices"": [""de recourir au référendum"", ""au Parlement de contrôler l'action du Gouvernement"", ""l'adoption sans vote d'une loi"", ""de prononcer la dissolution de l'Assemblée nationale""],
""answer"": 2
}
```
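The `answer` field is a 0-based index into `choices`; a minimal sketch of resolving it to the answer text, using the record above:

```python
def answer_text(record: dict) -> str:
    # `answer` indexes into `choices` (0-based).
    return record["choices"][record["answer"]]

record = {
    "question": "Que permet l'article 49-3 de la Constitution ?",
    "choices": [
        "de recourir au référendum",
        "au Parlement de contrôler l'action du Gouvernement",
        "l'adoption sans vote d'une loi",
        "de prononcer la dissolution de l'Assemblée nationale",
    ],
    "answer": 2,
}
print(answer_text(record))  # l'adoption sans vote d'une loi
```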
### Model Performance
Model performance on **INCLUDE** using the Harness-eval framework.
| **Model** | **Original Language instructions** | **English instructions** |
|------------------------------------|:--------------------------:|:--------------------:|
| Llama3.1-70B-Instruct | 70.6 | 70.9 |
| Qwen2.5-14B | 62.3 | 62.6 |
| Aya-expanse-32b | 59.1 | 59.5 |
| Qwen2.5-7B | 55.0 | 55.5 |
| Qwen2.5-7B-Instruct | 54.8 | 54.8 |
| Llama-3.1-8B-Instruct | 53.5 | 54.4 |
| Gemma-7B | 53.5 | 53.2 |
| Llama-3.1-8B | 51.2 | 51.9 |
| Aya-expanse-8b | 47.2 | 47.8 |
| Mistral-7B | 44.1 | 44.6 |
| Mistral-7B-Instruct | 44.2 | 44.3 |
| Gemma-7B-Instruct | 38.6 | 39.3 |
## Citation
```
@article{romanou2024include,
title={INCLUDE: Evaluating Multilingual Language Understanding with Regional Knowledge},
author={Romanou, Angelika and Foroutan, Negar and Sotnikova, Anna and Chen, Zeming and Nelaturu, Sree Harsha and Singh, Shivalika and Maheshwary, Rishabh and Altomare, Micol and Haggag, Mohamed A and Amayuelas, Alfonso and others},
journal={arXiv preprint arXiv:2411.19799},
year={2024}
}
```"
1-800-SHARED-TASKS/xlsum-subset,"{""annotations_creators"": [""found""], ""language_creators"": [""found""], ""language"": [""am"", ""ar"", ""az"", ""bn"", ""my"", ""zh"", ""en"", ""fr"", ""gu"", ""ha"", ""hi"", ""ig"", ""id"", ""ja"", ""rn"", ""ko"", ""ky"", ""mr"", ""ne"", ""om"", ""ps"", ""fa"", ""pcm"", ""pt"", ""pa"", ""ru"", ""gd"", ""sr"", ""si"", ""so"", ""es"", ""sw"", ""ta"", ""te"", ""th"", ""ti"", ""tr"", ""uk"", ""ur"", ""uz"", ""vi"", ""cy"", ""yo""], ""license"": [""cc-by-nc-sa-4.0""], ""multilinguality"": [""multilingual""], ""size_categories"": [""1M`, using your real name.
This can be done easily using the `-s` flag on the `git commit`.
Please see the [Contribution guidelines](https://oldi.org/guidelines) for further information.
### How to add a pull request
1. Go to https://huggingface.co/datasets/openlanguagedata/flores_plus/discussions, press ""New pull request"".
2. In the popup window, enter a branch name and press ""Create branch"".
3. On your computer, do `git clone https://huggingface.co/datasets/openlanguagedata/flores_plus`.
4. Check out your newly created branch (e.g. `cd flores_plus && git fetch origin refs/pr/4:pr/4 && git checkout pr/4`).
5. Check that you are logged in to the HF CLI tool (`huggingface-cli whoami`). If not, please log into it (`huggingface-cli login` and enter your token).
6. Modify a file (for adding new languages, see the instructions below) and add the changes to git (e.g. `git add dev/rus_Cyrl.parquet`).
7. Commit with the `-s` flag (e.g. `git commit -s -m ""fix a few typos in the Russian dev set""`).
8. Push (e.g. `git push --set-upstream origin pr/4`).
9. Go to the pull request page and see if it reflects your changes.
10. When your pull request is ready, press the ""Publish"" button in its web interface.
### Testing your changes
After contributing new translations or modifying existing ones, you can check that the data format is OK.
Assuming that you have the Python packages `pytest` and `datasets` installed, you can type
```
pytest
```
in your console (in the `flores_plus` directory), and the tests will run.
If any of them fails, please inspect the translations, following the hints in the test output.
## Changelog
See [CHANGELOG.md](CHANGELOG.md) for information about the latest changes.
## Language Coverage
| Code | Script | Glottocode | Name | Notes |
|-------|--------|------------|-------------------------------------|------------------------------------------------------------|
| `ace` | `Arab` | `achi1257` | Acehnese (Jawi script) | |
| `ace` | `Latn` | `achi1257` | Acehnese (Latin script) | |
| `acm` | `Arab` | `meso1252` | Mesopotamian Arabic | |
| `acq` | `Arab` | `taiz1242` | Taʽizzi-Adeni Arabic | |
| `aeb` | `Arab` | `tuni1259` | Tunisian Arabic | |
| `afr` | `Latn` | `afri1274` | Afrikaans | |
| `als` | `Latn` | `tosk1239` | Albanian (Tosk) | |
| `amh` | `Ethi` | `amha1245` | Amharic | |
| `apc` | `Arab` | `nort3139` | Levantine Arabic (North) | |
| `apc` | `Arab` | `sout3123` | Levantine Arabic (South) | |
| `arb` | `Arab` | `stan1318` | Modern Standard Arabic | |
| `arb` | `Latn` | `stan1318` | Modern Standard Arabic (Romanized) | |
| `arg` | `Latn` | `arag1245` | Aragonese | |
| `ars` | `Arab` | `najd1235` | Najdi Arabic | |
| `ary` | `Arab` | `moro1292` | Moroccan Arabic | |
| `arz` | `Arab` | `egyp1253` | Egyptian Arabic | |
| `asm` | `Beng` | `assa1263` | Assamese | |
| `ast` | `Latn` | `astu1245` | Asturian | |
| `awa` | `Deva` | `awad1243` | Awadhi | |
| `ayr` | `Latn` | `cent2142` | Central Aymara | |
| `azb` | `Arab` | `sout2697` | South Azerbaijani | |
| `azj` | `Latn` | `nort2697` | North Azerbaijani | |
| `bak` | `Cyrl` | `bash1264` | Bashkir | |
| `bam` | `Latn` | `bamb1269` | Bambara | |
| `ban` | `Latn` | `bali1278` | Balinese | |
| `bel` | `Cyrl` | `bela1254` | Belarusian | |
| `bem` | `Latn` | `bemb1257` | Bemba | |
| `ben` | `Beng` | `beng1280` | Bengali | |
| `bho` | `Deva` | `bhoj1244` | Bhojpuri | |
| `bjn` | `Arab` | `banj1239` | Banjar (Jawi script) | |
| `bjn` | `Latn` | `banj1239` | Banjar (Latin script) | |
| `bod` | `Tibt` | `utsa1239` | Lhasa Tibetan | |
| `bos` | `Latn` | `bosn1245` | Bosnian | |
| `brx` | `Deva` | `bodo1269` | Bodo | `dev` only |
| `bug` | `Latn` | `bugi1244` | Buginese | |
| `bul` | `Cyrl` | `bulg1262` | Bulgarian | |
| `cat` | `Latn` | `stan1289` | Catalan | |
| `cat` | `Latn` | `vale1252` | Valencian | |
| `ceb` | `Latn` | `cebu1242` | Cebuano | |
| `ces` | `Latn` | `czec1258` | Czech | |
| `chv` | `Cyrl` | `chuv1255` | Chuvash | |
| `cjk` | `Latn` | `chok1245` | Chokwe | |
| `ckb` | `Arab` | `cent1972` | Central Kurdish | |
| `cmn` | `Hans` | `beij1234` | Mandarin Chinese (Standard Beijing) | |
| `cmn` | `Hant` | `taib1240` | Mandarin Chinese (Taiwanese) | |
| `crh` | `Latn` | `crim1257` | Crimean Tatar | |
| `cym` | `Latn` | `wels1247` | Welsh | |
| `dan` | `Latn` | `dani1285` | Danish | |
| `dar` | `Cyrl` | `darg1241` | Dargwa | `dev` only |
| `deu` | `Latn` | `stan1295` | German | |
| `dgo` | `Deva` | `dogr1250` | Dogri | `dev` only |
| `dik` | `Latn` | `sout2832` | Southwestern Dinka | |
| `dyu` | `Latn` | `dyul1238` | Dyula | |
| `dzo` | `Tibt` | `dzon1239` | Dzongkha | |
| `ekk` | `Latn` | `esto1258` | Estonian | |
| `ell` | `Grek` | `mode1248` | Greek | |
| `eng` | `Latn` | `stan1293` | English | |
| `epo` | `Latn` | `espe1235` | Esperanto | |
| `eus` | `Latn` | `basq1248` | Basque | |
| `ewe` | `Latn` | `ewee1241` | Ewe | |
| `fao` | `Latn` | `faro1244` | Faroese | |
| `fij` | `Latn` | `fiji1243` | Fijian | |
| `fil` | `Latn` | `fili1244` | Filipino | |
| `fin` | `Latn` | `finn1318` | Finnish | |
| `fon` | `Latn` | `fonn1241` | Fon | |
| `fra` | `Latn` | `stan1290` | French | |
| `fur` | `Latn` | `east2271` | Friulian | |
| `fuv` | `Latn` | `nige1253` | Nigerian Fulfulde | |
| `gaz` | `Latn` | `west2721` | West Central Oromo | |
| `gla` | `Latn` | `scot1245` | Scottish Gaelic | |
| `gle` | `Latn` | `iris1253` | Irish | |
| `glg` | `Latn` | `gali1258` | Galician | |
| `gom` | `Deva` | `goan1235` | Goan Konkani | |
| `gug` | `Latn` | `para1311` | Paraguayan Guaraní | |
| `guj` | `Gujr` | `guja1252` | Gujarati | |
| `hat` | `Latn` | `hait1244` | Haitian Creole | |
| `hau` | `Latn` | `haus1257` | Hausa | |
| `heb` | `Hebr` | `hebr1245` | Hebrew | |
| `hin` | `Deva` | `hind1269` | Hindi | |
| `hne` | `Deva` | `chha1249` | Chhattisgarhi | |
| `hrv` | `Latn` | `croa1245` | Croatian | |
| `hun` | `Latn` | `hung1274` | Hungarian | |
| `hye` | `Armn` | `nucl1235` | Armenian | |
| `ibo` | `Latn` | `nucl1417` | Igbo | |
| `ilo` | `Latn` | `ilok1237` | Ilocano | |
| `ind` | `Latn` | `indo1316` | Indonesian | |
| `isl` | `Latn` | `icel1247` | Icelandic | |
| `ita` | `Latn` | `ital1282` | Italian | |
| `jav` | `Latn` | `java1254` | Javanese | |
| `jpn` | `Jpan` | `nucl1643` | Japanese | |
| `kaa` | `Latn` | `kara1467` | Karakalpak | `devtest` only |
| `kab` | `Latn` | `kaby1243` | Kabyle | |
| `kac` | `Latn` | `kach1280` | Jingpho | |
| `kam` | `Latn` | `kamb1297` | Kamba | |
| `kan` | `Knda` | `nucl1305` | Kannada | |
| `kas` | `Arab` | `kash1277` | Kashmiri (Arabic script) | |
| `kas` | `Deva` | `kash1277` | Kashmiri (Devanagari script) | |
| `kat` | `Geor` | `nucl1302` | Georgian | |
| `kaz` | `Cyrl` | `kaza1248` | Kazakh | |
| `kbp` | `Latn` | `kabi1261` | Kabiyè | |
| `kea` | `Latn` | `kabu1256` | Kabuverdianu | |
| `khk` | `Cyrl` | `halh1238` | Halh Mongolian | |
| `khm` | `Khmr` | `cent1989` | Khmer (Central) | |
| `kik` | `Latn` | `kiku1240` | Kikuyu | |
| `kin` | `Latn` | `kiny1244` | Kinyarwanda | |
| `kir` | `Cyrl` | `kirg1245` | Kyrgyz | |
| `kmb` | `Latn` | `kimb1241` | Kimbundu | |
| `kmr` | `Latn` | `nort2641` | Northern Kurdish | |
| `knc` | `Arab` | `cent2050` | Central Kanuri (Arabic script) | |
| `knc` | `Latn` | `cent2050` | Central Kanuri (Latin script) | |
| `kor` | `Hang` | `kore1280` | Korean | |
| `ktu` | `Latn` | `kitu1246` | Kituba (DRC) | |
| `lao` | `Laoo` | `laoo1244` | Lao | |
| `lij` | `Latn` | `geno1240` | Ligurian (Genoese) | |
| `lim` | `Latn` | `limb1263` | Limburgish | |
| `lin` | `Latn` | `ling1263` | Lingala | |
| `lit` | `Latn` | `lith1251` | Lithuanian | |
| `lmo` | `Latn` | `lomb1257` | Lombard | [[1]](https://github.com/openlanguagedata/flores/issues/5) |
| `ltg` | `Latn` | `east2282` | Latgalian | |
| `ltz` | `Latn` | `luxe1241` | Luxembourgish | |
| `lua` | `Latn` | `luba1249` | Luba-Kasai | |
| `lug` | `Latn` | `gand1255` | Ganda | |
| `luo` | `Latn` | `luok1236` | Luo | |
| `lus` | `Latn` | `lush1249` | Mizo | |
| `lvs` | `Latn` | `stan1325` | Standard Latvian | |
| `mag` | `Deva` | `maga1260` | Magahi | |
| `mai` | `Deva` | `mait1250` | Maithili | |
| `mal` | `Mlym` | `mala1464` | Malayalam | |
| `mar` | `Deva` | `mara1378` | Marathi | |
| `mhr` | `Cyrl` | `gras1239` | Meadow Mari | `dev` only |
| `min` | `Arab` | `mina1268` | Minangkabau (Jawi script) | |
| `min` | `Latn` | `mina1268` | Minangkabau (Latin script) | |
| `mkd` | `Cyrl` | `mace1250` | Macedonian | |
| `mlt` | `Latn` | `malt1254` | Maltese | |
| `mni` | `Beng` | `mani1292` | Meitei (Manipuri, Bengali script) | |
| `mni` | `Mtei` | `mani1292` | Meitei (Manipuri, Meitei script) | `dev` only |
| `mos` | `Latn` | `moss1236` | Mossi | |
| `mri` | `Latn` | `maor1246` | Maori | |
| `mya` | `Mymr` | `nucl1310` | Burmese | |
| `myv` | `Cyrl` | `erzy1239` | Erzya | |
| `nld` | `Latn` | `dutc1256` | Dutch | |
| `nno` | `Latn` | `norw1262` | Norwegian Nynorsk | |
| `nob` | `Latn` | `norw1259` | Norwegian Bokmål | |
| `npi` | `Deva` | `nepa1254` | Nepali | |
| `nqo` | `Nkoo` | `nkoa1234` | Nko | |
| `nso` | `Latn` | `pedi1238` | Northern Sotho | |
| `nus` | `Latn` | `nuer1246` | Nuer | |
| `nya` | `Latn` | `nyan1308` | Nyanja | |
| `oci` | `Latn` | `occi1239` | Occitan | |
| `oci` | `Latn` | `aran1260` | Aranese | |
| `ory` | `Orya` | `oriy1255` | Odia | |
| `pag` | `Latn` | `pang1290` | Pangasinan | |
| `pan` | `Guru` | `panj1256` | Eastern Panjabi | |
| `pap` | `Latn` | `papi1253` | Papiamento | |
| `pbt` | `Arab` | `sout2649` | Southern Pashto | |
| `pes` | `Arab` | `west2369` | Western Persian | |
| `plt` | `Latn` | `plat1254` | Plateau Malagasy | |
| `pol` | `Latn` | `poli1260` | Polish | |
| `por` | `Latn` | `braz1246` | Portuguese (Brazilian) | |
| `prs` | `Arab` | `dari1249` | Dari | |
| `quy` | `Latn` | `ayac1239` | Ayacucho Quechua | |
| `ron` | `Latn` | `roma1327` | Romanian | |
| `run` | `Latn` | `rund1242` | Rundi | |
| `rus` | `Cyrl` | `russ1263` | Russian | |
| `sag` | `Latn` | `sang1328` | Sango | |
| `san` | `Deva` | `sans1269` | Sanskrit | |
| `sat` | `Olck` | `sant1410` | Santali | |
| `scn` | `Latn` | `sici1248` | Sicilian | |
| `shn` | `Mymr` | `shan1277` | Shan | |
| `sin` | `Sinh` | `sinh1246` | Sinhala | |
| `slk` | `Latn` | `slov1269` | Slovak | |
| `slv` | `Latn` | `slov1268` | Slovenian | |
| `smo` | `Latn` | `samo1305` | Samoan | |
| `sna` | `Latn` | `shon1251` | Shona | |
| `snd` | `Arab` | `sind1272` | Sindhi (Arabic script) | |
| `snd` | `Deva` | `sind1272` | Sindhi (Devanagari script) | `dev` only |
| `som` | `Latn` | `soma1255` | Somali | |
| `sot` | `Latn` | `sout2807` | Southern Sotho | |
| `spa` | `Latn` | `amer1254` | Spanish (Latin American) | |
| `srd` | `Latn` | `sard1257` | Sardinian | [[1]](https://github.com/openlanguagedata/flores/issues/6) |
| `srp` | `Cyrl` | `serb1264` | Serbian | |
| `ssw` | `Latn` | `swat1243` | Swati | |
| `sun` | `Latn` | `sund1252` | Sundanese | |
| `swe` | `Latn` | `swed1254` | Swedish | |
| `swh` | `Latn` | `swah1253` | Swahili | |
| `szl` | `Latn` | `sile1253` | Silesian | |
| `tam` | `Taml` | `tami1289` | Tamil | |
| `taq` | `Latn` | `tama1365` | Tamasheq (Latin script) | |
| `taq` | `Tfng` | `tama1365` | Tamasheq (Tifinagh script) | |
| `tat` | `Cyrl` | `tata1255` | Tatar | |
| `tel` | `Telu` | `telu1262` | Telugu | |
| `tgk` | `Cyrl` | `taji1245` | Tajik | |
| `tha` | `Thai` | `thai1261` | Thai | |
| `tir` | `Ethi` | `tigr1271` | Tigrinya | |
| `tpi` | `Latn` | `tokp1240` | Tok Pisin | |
| `tsn` | `Latn` | `tswa1253` | Tswana | |
| `tso` | `Latn` | `tson1249` | Tsonga | |
| `tuk` | `Latn` | `turk1304` | Turkmen | |
| `tum` | `Latn` | `tumb1250` | Tumbuka | |
| `tur` | `Latn` | `nucl1301` | Turkish | |
| `twi` | `Latn` | `akua1239` | Akuapem Twi | |
| `twi` | `Latn` | `asan1239` | Asante Twi | |
| `tyv` | `Cyrl` | `tuvi1240` | Tuvan | |
| `uig` | `Arab` | `uigh1240` | Uyghur | |
| `ukr` | `Cyrl` | `ukra1253` | Ukrainian | |
| `umb` | `Latn` | `umbu1257` | Umbundu | |
| `urd` | `Arab` | `urdu1245` | Urdu | |
| `uzn` | `Latn` | `nort2690` | Northern Uzbek | |
| `vec` | `Latn` | `vene1259` | Venetian | |
| `vie` | `Latn` | `viet1252` | Vietnamese | |
| `vmw` | `Latn` | `cent2033` | Emakhuwa (Central) | |
| `war` | `Latn` | `wara1300` | Waray | |
| `wol` | `Latn` | `nucl1347` | Wolof | |
| `wuu` | `Hans` | `suhu1238` | Wu Chinese | `dev` only |
| `xho` | `Latn` | `xhos1239` | Xhosa | |
| `ydd` | `Hebr` | `east2295` | Eastern Yiddish | |
| `yor` | `Latn` | `yoru1245` | Yoruba | |
| `yue` | `Hant` | `xian1255` | Yue Chinese (Hong Kong Cantonese) | |
| `zgh` | `Tfng` | `stan1324` | Standard Moroccan Tamazight | |
| `zsm` | `Latn` | `stan1306` | Standard Malay | |
| `zul` | `Latn` | `zulu1248` | Zulu | |"
heegyu/namuwiki,"{""license"": ""cc-by-nc-sa-2.0"", ""language"": [""ko""], ""language_creators"": [""other""], ""multilinguality"": [""monolingual""], ""size_categories"": [""100K
- 867024 rows
- download size: 3GB
## Usage
```bash
pip install datasets
```
```python
from datasets import load_dataset
dataset = load_dataset(""heegyu/namuwiki"")
print(dataset[""train""][0])
```
```
{'title': '!!아앗!!',
'text': '\n[목차]\n\n\'\'\'{{{+1 !!ああっと!!}}}\'\'\'\n\n== 개요 ==\n[[파일:3444050440.jpg|width=60%]]\n▲[[신 세계수의 미궁 2 파프니르기사|신 세계수의 미궁 2]]에서 뜬 !!아앗!!\n\n[[세계수의 미궁 시리즈]]에 전통으로 등장하는 대사. [[세계수의 미궁 2 제왕의 성배|2편]]부터 등장했으며 훌륭한 [[사망 플래그]]의 예시이다.\n\n세계수의 모험가들이 탐험하는 던전인 수해의 구석구석에는 채취/벌채/채굴 포인트가 있으며, 이를 위한 채집 스킬에 투자하면 제한된 채집 기회에서 보다 큰 이득을 챙길 수 있다. 그러나 분배할 수 있는 스킬 포인트는 한정되어 있기 때문에 채집 스킬에 투자하는 만큼 전투 스킬 레벨은 낮아지게 된다.[* 다만 채집 시스템은 신 세계수 시리즈의 그리모어 복제, 복합 채집 스킬인 야생의 감, 5편의 종족 특유 스킬, 크로스의 1레벨이 만렙인 채집 스킬 등으로 편의성이 점차 나아져서 채집 스킬 때문에 스킬 트리가 내려가는 일은 점점 줄어들었다.] !!아앗!!이 발생하는 과정을 요약하면 다음과 같다.\n\n 1. 채집용 캐릭터들로 이루어진 약한 파티(ex: [[레인저(세계수의 미궁 2)|레인저]] 5명)가 수해에 입장한다.\n 1. 필드 전투를 피해 채집 포인트에 도착한 후 열심히 아이템을 캐는 중에...\n 1. \'\'\'!!아앗!!\'\'\' ~~라플레시아가 나타났다!~~\n 이때 등장하는 것은 [[FOE(세계수의 미궁 시리즈)|FOE]]는 아니지만 \'\'\'훨씬 위층에 등장하는 강력한 필드 몬스터이며 선제 공격을 당하게 된다!\'\'\'\n 1. \'\'\'으앙 죽음\'\'\'(hage)\n\n여담으로 !!아앗!!의 유래는 1인칭 던전 크롤러의 원조 [[위저드리]]에서 함정을 건드렸을 때 나오는 대사 Oops!(おおっと!)라고 한다.\n\n== 각 작품에서의 모습 ==\n=== [[세계수의 미궁 2 제왕의 성배]] ===\n!!아앗!!의 악랄함은 첫 등장한 작품이자 시리즈 중에서도 불친절하기로 정평이 난 2편이 절정이었다. 그야말로 위의 !!아앗!! 시퀀스 그대로, 묻지도 따지지도 않고 채집할 때마다 일정 확률로 \'\'\'강제로\'\'\' 전투에 돌입해야 했다. 게다가 이럴 때 쓰라고 있는 레인저의 스킬 \'위험 감지(중간 확률로 적의 선제 공격을 무효화)\'는 정작 작동하지 않는다!\n\n참고로 2편에서 채집 도중 !!아앗!!이 뜰 확률은 [[http://www.atlusnet.jp/topic/detail/910|고작 1%다.]] [[던파확률의 법칙|낮아 보이는 확률이어도 플레이 중 한 번이라도 일어나는 것]]을 경험하는 체감 확률을 고려하여 확률을 설정한다고.\n\n=== [[세계수의 미궁 3 성해의 내방자]] ===\n다행히 채집 중 낮은 확률로 ""좋은 아이템을 얻을 수 있을 것 같지만... 주변에서 몬스터들의 기척이 느껴진다.""는 메시지가 뜨고 이때 운이 좋으면 레어 아이템을 얻을 수 있지만 반대의 경우 적과 싸우게 되는 것으로 조정되었다.\n\n=== [[세계수의 미궁 4 전승의 거신]] ===\n기본적인 것은 3편과 같지만, 4편에서는 움직이지 않고 채집할 때도 턴이 경과하도록 조정되었기 때문에 주변에 있는 FOE를 잊고 채집에 몰두하다가 FOE와 부딪히면 FOE 버전 !!아앗!!이 뜬다. 그리고 난이도 CASUAL로 플레이시, FOE로 인한 !!아앗!!을 제외하면 절대로 발생하지 않는다.\n\n=== [[신 세계수의 미궁 밀레니엄의 소녀|신 세계수의]] [[신 세계수의 미궁 2 파프니르기사|미궁 시리즈]] ===\n채집 방식이 한 턴으로 끝나는 구조[* 채집으로 한 번 아이템을 획득하면 ""다시, (채집 스킬)에 의해...""가 뜨면서 한꺼번에 획득되는 구조.]로 바뀐 덕분인지 강제 조우로 다시 회귀해버렸다(...). 그나마 위험 감지 먹통과 같은 버그성 난점들은 수정되었다. 
그 이후에 나온 [[세계수의 미궁 5 오랜 신화의 끝]]과 시리즈의 집대성 작품이자 3DS 마지막 작품인 [[세계수의 미궁 X]]도 마찬가지.\n\n=== [[세계수의 미궁 X]] ===\n본작의 채집은 신 세계수 시리즈와 같은 매커니즘이라 굳이 언급할 필요는 없으나, 퀘스트중에 2편의 !!아앗!! 시퀀스를 재현하면서 \'\'\'라플레시아\'\'\'가 등장하는 퀘스트가 존재한다.(...) 깨알같이 시스템 메세지 창이 아니라 대화창을 이용해서 완벽 재현한 것이 포인트.\n\n=== [[페르소나 Q 섀도우 오브 더 래버린스]] ===\n세계수 시스템을 기반으로 한 [[페르소나 시리즈]]와의 콜라보 작품인 페르소나 Q에서도 등장한다. 3, 4편과 같이 파워 스폿에서 채집 도중 메시지가 뜨며, 실패하면 파티에 참가하고 있는 멤버 중 한 명의 [[http://nico.ms/sm25683358|!!아앗!! 하는 음성]] ~~또는 [[코로마루|개소리]]~~과 함께 그 던전의 \'강적\'인 거대 [[섀도(페르소나 시리즈)|섀도우]]가 나타난다.\n\n그러나 내비 전용 스킬인 뱀눈 노려보기(위험 감지와 같은 효과)와 채집 보조 스킬은 파티의 전투력에 전혀 지장을 주지 않으며, \'대안심\'을 달면 거의 볼 일이 없어져서 초중반 이후에는 존재감이 급격히 줄어든다.\n[[분류:세계수의 미궁 시리즈]]',
'contributors': '110.46.34.123,kirby10,max0243,218.54.117.149,ruby3141,121.165.63.239,iviyuki,1.229.200.194,anatra95,kiri47,175.127.134.2,nickchaos71,chkong1998,kiwitree2,namubot,huwieblusnow',
'namespace': ''}
```"
mteb/NTREX,"{""annotations_creators"": [""expert-generated""], ""language_creators"": [""expert-generated""], ""language"": [""af"", ""am"", ""ar"", ""az"", ""ba"", ""be"", ""bg"", ""bn"", ""bo"", ""bs"", ""ca"", ""cs"", ""cy"", ""da"", ""de"", ""dv"", ""dz"", ""ee"", ""el"", ""et"", ""eu"", ""fa"", ""fa"", ""fi"", ""fil"", ""fj"", ""fj"", ""fo"", ""fr"", ""gd"", ""gu"", ""ha"", ""he"", ""hi"", ""hmn"", ""hr"", ""hu"", ""hy"", ""id"", ""ig"", ""is"", ""it"", ""ja"", ""kk"", ""km"", ""kn"", ""ko"", ""ku"", ""ku"", ""ky"", ""lb"", ""lo"", ""lt"", ""lv"", ""mi"", ""mk"", ""mn"", ""mr"", ""ms"", ""ms"", ""mt"", ""my"", ""nb"", ""nd"", ""ne"", ""nl"", ""nn"", ""ny"", ""om"", ""oy"", ""pa"", ""ps"", ""pt"", ""ro"", ""ru"", ""rw"", ""sd"", ""sh"", ""shi"", ""si"", ""sk"", ""sl"", ""sm"", ""sn"", ""so"", ""sq"", ""sr"", ""ss"", ""st"", ""sv"", ""sw"", ""ta"", ""te"", ""tg"", ""th"", ""tk"", ""tn"", ""to"", ""tr"", ""tt"", ""ty"", ""uk"", ""ur"", ""uz"", ""ve"", ""vi"", ""wo"", ""xh"", ""yo"", ""zh"", ""zh"", ""zu""], ""license"": [""cc-by-sa-4.0""], ""multilinguality"": [""translation""], ""task_categories"": [""translation""], ""size_categories"": [""1997""], ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""test"", ""path"": ""test.parquet""}]}]}","## Dataset Description
NTREX -- News Test References for MT Evaluation from English into a total of 128 target languages. See [original GitHub repo](https://github.com/MicrosoftTranslator/NTREX/tree/main) for full details.
Example of loading:
```python
from datasets import load_dataset

dataset = load_dataset(""davidstap/NTREX"", ""rus_Cyrl"", trust_remote_code=True)
```
## Languages
The following languages are available:
| Language Code | Language Name |
|-----------------|-----------------------------|
| `afr_Latn` | Afrikaans |
| `amh_Ethi` | Amharic |
| `arb_Arab` | Arabic |
| `aze_Latn` | Azerbaijani |
| `bak_Cyrl` | Bashkir |
| `bel_Cyrl` | Belarusian |
| `bem_Latn` | Bemba |
| `ben_Beng` | Bengali |
| `bod_Tibt` | Tibetan |
| `bos_Latn` | Bosnian |
| `bul_Cyrl` | Bulgarian |
| `cat_Latn` | Catalan |
| `ces_Latn` | Czech |
| `ckb_Arab` | Sorani Kurdish |
| `cym_Latn` | Welsh |
| `dan_Latn` | Danish |
| `deu_Latn` | German |
| `div_Thaa` | Dhivehi |
| `dzo_Tibt` | Dzongkha |
| `ell_Grek` | Greek |
| `eng-GB_Latn` | English (Great Britain) |
| `eng-IN_Latn` | English (India) |
| `eng-US_Latn` | English (United States) |
| `eng_Latn` | English |
| `est_Latn` | Estonian |
| `eus_Latn` | Basque |
| `ewe_Latn` | Ewe |
| `fao_Latn` | Faroese |
| `fas_Arab` | Persian |
| `fij_Latn` | Fijian |
| `fil_Latn` | Filipino |
| `fin_Latn` | Finnish |
| `fra-CA_Latn` | French (Canada) |
| `fra_Latn` | French |
| `fuc_Latn` | Pulaar |
| `gle_Latn` | Irish |
| `glg_Latn` | Galician |
| `guj_Gujr` | Gujarati |
| `hau_Latn` | Hausa |
| `heb_Hebr` | Hebrew |
| `hin_Deva` | Hindi |
| `hmn_Latn` | Hmong |
| `hrv_Latn` | Croatian |
| `hun_Latn` | Hungarian |
| `hye_Armn` | Armenian |
| `ibo_Latn` | Igbo |
| `ind_Latn` | Indonesian |
| `isl_Latn` | Icelandic |
| `ita_Latn` | Italian |
| `jpn_Jpan` | Japanese |
| `kan_Knda` | Kannada |
| `kat_Geor` | Georgian |
| `kaz_Cyrl` | Kazakh |
| `khm_Khmr` | Khmer |
| `kin_Latn` | Kinyarwanda |
| `kir_Cyrl` | Kyrgyz |
| `kmr_Latn` | Northern Kurdish |
| `kor_Hang` | Korean |
| `lao_Laoo` | Lao |
| `lav_Latn` | Latvian |
| `lit_Latn` | Lithuanian |
| `ltz_Latn` | Luxembourgish |
| `mal_Mlym` | Malayalam |
| `mar_Deva` | Marathi |
| `mey_Arab` | Hassaniya Arabic |
| `mkd_Cyrl` | Macedonian |
| `mlg_Latn` | Malagasy |
| `mlt_Latn` | Maltese |
| `mon_Mong` | Mongolian |
| `mri_Latn` | Maori |
| `msa_Latn` | Malay |
| `mya_Mymr` | Burmese |
| `nde_Latn` | Ndebele |
| `nep_Deva` | Nepali |
| `nld_Latn` | Dutch |
| `nno_Latn` | Norwegian Nynorsk |
| `nob_Latn` | Norwegian Bokmål |
| `nso_Latn` | Northern Sotho |
| `nya_Latn` | Chichewa |
| `orm_Ethi` | Oromo |
| `pan_Guru` | Punjabi (Gurmukhi) |
| `pol_Latn` | Polish |
| `por-BR_Latn` | Portuguese (Brazil) |
| `por_Latn` | Portuguese |
| `prs_Arab` | Dari |
| `pus_Arab` | Pashto |
| `ron_Latn` | Romanian |
| `rus_Cyrl` | Russian |
| `shi_Arab` | Tachelhit |
| `sin_Sinh` | Sinhala |
| `slk_Latn` | Slovak |
| `slv_Latn` | Slovenian |
| `smo_Latn` | Samoan |
| `sna_Latn` | Shona |
| `snd_Arab` | Sindhi |
| `som_Latn` | Somali |
| `spa-MX_Latn` | Spanish (Mexico) |
| `spa_Latn` | Spanish |
| `sqi_Latn` | Albanian |
| `srp_Cyrl` | Serbian (Cyrillic) |
| `srp_Latn` | Serbian (Latin) |
| `ssw_Latn` | Swati |
| `swa_Latn` | Swahili |
| `swe_Latn` | Swedish |
| `tah_Latn` | Tahitian |
| `tam_Taml` | Tamil |
| `tat_Cyrl` | Tatar |
| `tel_Telu` | Telugu |
| `tgk_Cyrl` | Tajik |
| `tha_Thai` | Thai |
| `tir_Ethi` | Tigrinya |
| `ton_Latn` | Tongan |
| `tsn_Latn` | Tswana |
| `tuk_Latn` | Turkmen |
| `tur_Latn` | Turkish |
| `uig_Arab` | Uighur |
| `ukr_Cyrl` | Ukrainian |
| `urd_Arab` | Urdu |
| `uzb_Latn` | Uzbek |
| `ven_Latn` | Venda |
| `vie_Latn` | Vietnamese |
| `wol_Latn` | Wolof |
| `xho_Latn` | Xhosa |
| `yor_Latn` | Yoruba |
| `yue_Hant` | Cantonese |
| `zho_Hans` | Chinese (Simplified) |
| `zho_Hant` | Chinese (Traditional) |
| `zul_Latn` | Zulu |
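Each config name combines an ISO 639-3 language code (optionally with a region subtag) and an ISO 15924 script code; a minimal sketch of splitting them apart:

```python
def parse_code(code: str):
    # "eng-GB_Latn" -> ("eng", "GB", "Latn"); "rus_Cyrl" -> ("rus", None, "Cyrl")
    lang, _, script = code.rpartition("_")
    lang, _, region = lang.partition("-")
    return lang, region or None, script

print(parse_code("eng-GB_Latn"))  # ('eng', 'GB', 'Latn')
```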
### Citation Information
For the original NTREX-128 dataset, please cite:
```
@inproceedings{federmann-etal-2022-ntrex,
title = ""{NTREX}-128 {--} News Test References for {MT} Evaluation of 128 Languages"",
author = ""Federmann, Christian and Kocmi, Tom and Xin, Ying"",
booktitle = ""Proceedings of the First Workshop on Scaling Up Multilingual Evaluation"",
month = ""nov"",
year = ""2022"",
address = ""Online"",
publisher = ""Association for Computational Linguistics"",
url = ""https://aclanthology.org/2022.sumeval-1.4"",
pages = ""21--24"",
}
```
as well as the WMT 2019 paper that provided the English source data NTREX-128 is based on:
```
@inproceedings{barrault-etal-2019-findings,
title = ""Findings of the 2019 Conference on Machine Translation ({WMT}19)"",
author = {Barrault, Lo{\""\i}c and
Bojar, Ond{\v{r}}ej and
Costa-juss{\`a}, Marta R. and
Federmann, Christian and
Fishel, Mark and
Graham, Yvette and
Haddow, Barry and
Huck, Matthias and
Koehn, Philipp and
Malmasi, Shervin and
Monz, Christof and
M{\""u}ller, Mathias and
Pal, Santanu and
Post, Matt and
Zampieri, Marcos},
editor = ""Bojar, Ond{\v{r}}ej and
Chatterjee, Rajen and
Federmann, Christian and
Fishel, Mark and
Graham, Yvette and
Haddow, Barry and
Huck, Matthias and
Yepes, Antonio Jimeno and
Koehn, Philipp and
Martins, Andr{\'e} and
Monz, Christof and
Negri, Matteo and
N{\'e}v{\'e}ol, Aur{\'e}lie and
Neves, Mariana and
Post, Matt and
Turchi, Marco and
Verspoor, Karin"",
booktitle = ""Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)"",
month = aug,
year = ""2019"",
address = ""Florence, Italy"",
publisher = ""Association for Computational Linguistics"",
url = ""https://aclanthology.org/W19-5301"",
doi = ""10.18653/v1/W19-5301"",
pages = ""1--61"",
}
```"
nayeon212/BLEnD,"{""license"": ""cc-by-sa-4.0"", ""task_categories"": [""question-answering""], ""language"": [""en"", ""zh"", ""es"", ""id"", ""ko"", ""el"", ""fa"", ""ar"", ""az"", ""su"", ""as"", ""ha"", ""am""], ""size_categories"": [""10K= 2.19.2
pandas >= 2.1.4
```
## Dataset
All the data samples for short-answer questions, including the human-annotated answers, can be found in the `data/` directory.
Specifically, the annotations from each country are included in the `annotations` split, and each country/region's data can be accessed by **[country codes](https://huggingface.co/datasets/nayeon212/BLEnD#countryregion-codes)**.
```Python
from datasets import load_dataset
annotations = load_dataset(""nayeon212/BLEnD"",'annotations')
# To access data from Assam:
assam_annotations = annotations['AS']
```
Each file includes a JSON variable with question IDs, questions in the local language and English, the human annotations both in the local language and English, and their respective vote counts as values. The same dataset for South Korea is shown below:
```JSON
[{
""ID"": ""Al-en-06"",
""question"": ""대한민국 학교 급식에서 흔히 볼 수 있는 음식은 무엇인가요?"",
""en_question"": ""What is a common school cafeteria food in your country?"",
""annotations"": [
{
""answers"": [
""김치""
],
""en_answers"": [
""kimchi""
],
""count"": 4
},
{
""answers"": [
""밥"",
""쌀밥"",
""쌀""
],
""en_answers"": [
""rice""
],
""count"": 3
},
...
],
""idks"": {
""idk"": 0,
""no-answer"": 0,
""not-applicable"": 0
}
}],
```
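As a small sketch of working with this structure, the top-voted English answers for a question can be tallied like this (the helper name is ours; the fields and values are copied from the example above):

```python
# Sketch: tally English answers for one BLEnD-style question entry.
# The 'entry' literal mirrors the JSON structure shown above.
entry = {
    'ID': 'Al-en-06',
    'annotations': [
        {'answers': ['김치'], 'en_answers': ['kimchi'], 'count': 4},
        {'answers': ['밥', '쌀밥', '쌀'], 'en_answers': ['rice'], 'count': 3},
    ],
}

def top_en_answers(entry):
    # Pair each annotation's first English answer with its vote count,
    # sorted by votes in descending order.
    pairs = [(ann['en_answers'][0], ann['count']) for ann in entry['annotations']]
    return sorted(pairs, key=lambda p: p[1], reverse=True)

print(top_en_answers(entry))  # [('kimchi', 4), ('rice', 3)]
```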
The topics and source language for each question can be found in the `short-answer-questions` split.
Questions for each country in their local languages and English can be accessed by **[country codes](https://huggingface.co/datasets/nayeon212/BLEnD#countryregion-codes)**.
Each CSV file contains the question ID, topic, source language, and the question in both English and the local language (in the `Translation` column) for all questions.
```Python
from datasets import load_dataset
questions = load_dataset(""nayeon212/BLEnD"",'short-answer-questions')
# To access data from Assam:
assam_questions = questions['AS']
```
The current set of multiple-choice questions and their answers can be found in the `multiple-choice-questions` split.
```Python
from datasets import load_dataset
mcq = load_dataset(""nayeon212/BLEnD"",'multiple-choice-questions')
```
### Country/Region Codes
| **Country/Region** | **Code** | **Language** | **Code**|
|:--------:|:--------------:|:------------:|:------------:|
| United States | US | English | en
| United Kingdom | GB | English |en
| China | CN | Chinese | zh
| Spain | ES | Spanish | es
| Mexico | MX |Spanish|es
| Indonesia | ID | Indonesian | id
| South Korea | KR | Korean | ko
| North Korea | KP | Korean |ko
| Greece | GR | Greek | el
| Iran | IR | Persian | fa
| Algeria | DZ | Arabic | ar
| Azerbaijan | AZ | Azerbaijani | az
| West Java | JB | Sundanese | su
| Assam | AS | Assamese | as
| Northern Nigeria | NG | Hausa | ha
| Ethiopia | ET | Amharic | am"
MarkrAI/KOpen-HQ-Hermes-2.5-60K,"{""language"": [""ko""], ""license"": ""mit"", ""task_categories"": [""question-answering"", ""text-generation""], ""dataset_info"": {""features"": [{""name"": ""input"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 93015065, ""num_examples"": 60061}], ""download_size"": 48634325, ""dataset_size"": 93015065}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}]}","# MarkrAI/KOpen-HQ-Hermes-2.5-60K
The **`KOpen-HQ-Hermes-2.5-60K`** dataset has been released!
Anyone can use it under the MIT license, so feel free to take advantage of this **high-quality dataset**.
In this dataset, we relied on automated processing rather than human effort as much as possible, so there may be some translation errors.
Please keep this in mind when cooking with it.
## Dataset Info
- Creator: Markr AI
- Developer: Seungyoo Lee, Kyujin Han
- Data generation:
We used the Near Dedup algorithm on the [Open Hermes dataset](https://huggingface.co/datasets/teknium/OpenHermes-2.5) to remove highly similar data (criterion: **Jaccard similarity `>= 0.8`**), and then performed the translation with the DeepL API using 8 multiprocessing threads.
Afterward, we used SOTA LLMs (GPT-4 Turbo, Gemini, WizardLM, Llama 3.1 405B) to score the data with Alpaca-style prompts.
We then evaluated the appropriateness of each sample and published the data with high scores.
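As a rough illustration of that dedup criterion (not the exact pipeline used here; the character-shingle choice below is ours), a Jaccard-similarity filter can look like this:

```python
def shingles(text, n=3):
    # Character n-gram shingle set used for the Jaccard comparison.
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

def near_dedup(texts, threshold=0.8):
    # Keep a text only if every previously kept text is below the threshold.
    kept = []
    for t in texts:
        if all(jaccard(t, k) < threshold for k in kept):
            kept.append(t)
    return kept

docs = ['the quick brown fox', 'the quick brown fox!', 'a different sentence']
print(near_dedup(docs))  # drops the near-duplicate second string
```

Production pipelines typically approximate this with MinHash rather than exact pairwise Jaccard, but the filtering criterion is the same.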
## Dataset's purpose
Our Markr AI research guild aims to make a small contribution to the Korean open-source community.
Through this effort, we hope to invigorate the existing Korean LLM models and their ecosystem, fostering the growth of many excellent Korean language models within the expanding community.
The license for this work is the MIT license, and you are welcome to use it. However, our small wish is that instead of merely using and benefiting from this culture of community activation and sharing, all members contribute to its development and, in doing so, help it evolve further.
Lastly, if you start cooking with this dataset, please press the like button to show your support.
nlpai-lab/databricks-dolly-15k-ko,"{""license"": ""cc-by-sa-3.0"", ""task_categories"": [""question-answering"", ""summarization""], ""language"": [""ko""], ""size_categories"": [""10K
### Some preprocessing algorithms
- [spam_assassin.js](./spam_assassin.js), followed by [spam_assassin.py](./spam_assassin.py)
- [enron_spam.py](./enron_spam.py)
### Data composition

### Description
To make the text format consistent between SMS messages and emails, email subjects and content are separated by two newlines:
```python
text = email.subject + ""\n\n"" + email.content
```
### Suggestions
- If you plan to train a model based on this dataset alone, I recommend adding **some** rows with `is_toxic=0` from `FredZhang7/toxi-text-3M`. Make sure the rows aren't spam.
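A minimal sketch of that mixing step on plain dicts (the `is_toxic` field follows the suggestion above; the rows themselves are made up):

```python
# Sketch: combine spam rows with non-toxic (is_toxic == 0) rows,
# relabeling the latter as non-spam negatives.
spam_rows = [{'text': 'win a prize now!!!', 'is_spam': 1}]
toxi_rows = [
    {'text': 'have a nice day', 'is_toxic': 0},
    {'text': 'some toxic line', 'is_toxic': 1},
]

negatives = [
    {'text': r['text'], 'is_spam': 0}
    for r in toxi_rows
    if r['is_toxic'] == 0
]
combined = spam_rows + negatives
print(len(combined))  # 2
```

Remember the card's caveat: the added rows should also be checked so they aren't themselves spam.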
### Other Sources
- https://huggingface.co/datasets/sms_spam
- https://github.com/MWiechmann/enron_spam_data
- https://github.com/stdlib-js/datasets-spam-assassin
- https://repository.ortolang.fr/api/content/comere/v3.3/cmr-simuligne.html"
simon3000/starrail-voice,"{""language"": [""zh"", ""en"", ""ja"", ""ko""], ""task_categories"": [""audio-classification"", ""automatic-speech-recognition"", ""text-to-speech""], ""pretty_name"": ""StarRail Voice"", ""dataset_info"": {""features"": [{""name"": ""audio"", ""dtype"": ""audio""}, {""name"": ""ingame_filename"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""language"", ""dtype"": ""string""}, {""name"": ""speaker"", ""dtype"": ""string""}, {""name"": ""voice_type"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 124647844822.266, ""num_examples"": 185511}], ""download_size"": 88624726158, ""dataset_size"": 124647844822.266}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}]}","# StarRail Voice
StarRail Voice is a dataset of voice lines from the popular game [Honkai: Star Rail](https://hsr.hoyoverse.com/).
Hugging Face 🤗 [StarRail-Voice](https://huggingface.co/datasets/simon3000/starrail-voice)
Last update at `2024-08-30`
`185511` wavs
`49325` without speaker (27%)
`49409` without transcription (27%)
`41142` without inGameFilename (22%)
## Dataset Details
### Dataset Description
The dataset contains voice lines from the game's characters in multiple languages, including Chinese, English, Japanese, and Korean.
The voice lines are spoken by the characters in the game and cover a wide range of topics, including greetings, combat, and story dialogue.
- **Language(s) (NLP):** Chinese, English, Japanese, Korean
## Dataset Creation
### Source Data
The data was obtained by unpacking the [Honkai: Star Rail](https://hsr.hoyoverse.com/) game.
#### Data Collection and Processing
Please refer to [StarRail-Voice](https://github.com/simon300000/starrail-voice) and [bnnm/wwiser-utils#15](https://github.com/bnnm/wwiser-utils/pull/15#issuecomment-1962182022) for more information on how the data was processed.
#### Who are the source data producers?
The source data producers are the developers of the game, HoYoverse.
### Annotations
The dataset contains official annotations from the game, including language, speaker name, and transcription.
## Bias, Risks, and Limitations
Annotations are incomplete. Some voice lines are missing speaker names and transcriptions.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset.
Speaker names can be partially inferred from the ingame filenames.
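As a sketch, such inference might look like the following; the filename pattern here is entirely hypothetical and must be replaced with the game's real naming scheme:

```python
import re

def infer_speaker(ingame_filename):
    # Hypothetical pattern 'vo_<speaker>_...'; inspect the actual
    # in-game filenames before relying on anything like this.
    m = re.search(r'vo_([a-z0-9]+)_', ingame_filename)
    return m.group(1) if m else None

print(infer_speaker('vo_march7th_battle_01.wav'))  # march7th
```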
## Licensing Information
Copyright © COGNOSPHERE. All Rights Reserved.
## More Information
I can upload wav files on demand."
hac541309/open-lid-dataset,"{""language"": [""en"", ""ko"", ""fr"", ""aa"", ""hi""], ""license"": ""gpl-3.0"", ""size_categories"": [""100M>> from datasets import load_dataset
>>> dataset = load_dataset(""Bingsu/KSS_Dataset"")
>>> dataset[""train""].features
{'audio': Audio(sampling_rate=44100, mono=True, decode=True, id=None),
'original_script': Value(dtype='string', id=None),
'expanded_script': Value(dtype='string', id=None),
'decomposed_script': Value(dtype='string', id=None),
'duration': Value(dtype='float32', id=None),
'english_translation': Value(dtype='string', id=None)}
```
```python
>>> dataset[""train""][0]
{'audio': {'path': None,
'array': array([ 0.00000000e+00, 3.05175781e-05, -4.57763672e-05, ...,
0.00000000e+00, -3.05175781e-05, -3.05175781e-05]),
'sampling_rate': 44100},
'original_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
'expanded_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
'decomposed_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
'duration': 3.5,
'english_translation': 'He seemed to be pretending to be okay.'}
```
### Data Splits
| | train |
|---------------|------:|
| # of examples | 12853 |"
afaji/cvqa,"{""language"": [""id"", ""su"", ""ja"", ""jv"", ""min"", ""br"", ""ga"", ""es"", ""pt"", ""no"", ""mn"", ""ms"", ""zh"", ""ko"", ""ta"", ""ben"", ""si"", ""bg"", ""ro"", ""ru"", ""am"", ""orm"", ""ar"", ""ig"", ""hi"", ""mr""], ""size_categories"": [""10K,
'ID': '5919991144272485961_0',
'Subset': ""('Japanese', 'Japan')"",
'Question': '写真に写っているキャラクターの名前は? ',
'Translated Question': 'What is the name of the object in the picture? ',
'Options': ['コスモ星丸', 'ミャクミャク', ' フリービー ', 'ハイバオ'],
'Translated Options': ['Cosmo Hoshimaru','MYAKU-MYAKU','Freebie ','Haibao'],
'Label': -1,
'Category': 'Objects / materials / clothing',
'Image Type': 'Self',
'Image Source': 'Self-open',
'License': 'CC BY-SA'
}
```
## Data Fields
The data fields are:
- `image`: The image referenced by the question.
- `ID`: A unique ID for the given sample.
- `Subset`: A Language-Country pair
- `Question`: The question elicited in the local language.
- `Translated Question`: The question elicited in the English language.
- `Options`: A list of possible answers to the question in the Local Language.
- `Translated Options`: A list of possible answers to the question in the English Language.
- `Label`: Will always be -1. Please refer to our leaderboard to get your performance.
- `Category`: A specific category for the given sample.
- `Image Type`: `Self` or `External`, meaning if the image is self-taken from the annotator or comes from the internet.
- `Image Source`: If the image type is Self, this can be `Self-open` or `Self-research_only`, meaning that the image can be used for commercial purposes or only for research purposes. If the image type is External, this will be the link to the external source.
- `License`: The corresponding license for the image.
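For instance, one could separate research-only images using the fields above; a minimal sketch (the rows are illustrative stand-ins for dataset entries):

```python
# Sketch: find samples whose images are restricted to research use,
# based on the 'Image Type' / 'Image Source' fields described above.
samples = [
    {'ID': 'a', 'Image Type': 'Self', 'Image Source': 'Self-open'},
    {'ID': 'b', 'Image Type': 'Self', 'Image Source': 'Self-research_only'},
    {'ID': 'c', 'Image Type': 'External', 'Image Source': 'https://example.com/img.png'},
]

research_only = [
    s for s in samples
    if s['Image Type'] == 'Self' and s['Image Source'] == 'Self-research_only'
]
print([s['ID'] for s in research_only])  # ['b']
```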
# Dataset Creation
## Source Data
The images in CVQA can either be based on existing external images or from the contributor's own images. You can see this information from the 'Image Type' and 'Image Source' columns. Images based on external sources will retain their original licensing, whereas images from contributors will be licensed based on each contributor's decision.
All the questions are hand-crafted by annotators.
## Data Annotation
Data creation follows two general steps: question formulation and validation.
During question formulation, annotators are asked to write a question, with one correct answer and three distractors.
Questions must be culturally nuanced and relevant to the image. Annotators are asked to mask sensitive information and text that can easily give away the answers.
During data validation, another annotator is asked to check and validate whether the images and questions adhere to the guidelines.
You can learn more about our annotation protocol and guidelines in our paper.
## Annotators
Annotators needed to be fluent speakers of the language in question and be accustomed to the cultures of the locations for which they provided data. Our annotators are predominantly native speakers, with around 89% residing in the respective country for over 16 years.
## Licensing Information
Note that each question has its own license. All data here is free to use for research purposes, but not every entry is permissible for commercial use.
---"
lcw99/oscar-ko-only,"{""language"": [""ko""]}",# oscar dataset only korean
heegyu/kowiki-sentences,"{""license"": ""cc-by-sa-3.0"", ""language"": [""ko""], ""language_creators"": [""other""], ""multilinguality"": [""monolingual""], ""size_categories"": [""1MFor more information, please refer to the paper [K-HATERS](https://arxiv.org/abs/2310.15439) published at EMNLP 2023 Findings.
### Supported tasks
- Hate speech detection
- Multi-class classification (labels: normal, offensive, L1_hate, L2_hate)
- Binary classification (labels: normal, toxic (= offensive, L1_hate, L2_hate))
- Rationale prediction (offensiveness, target rationale)
### Data description
```
data['train'][42]
{'text': '군대도 안간 놈 이 주둥아리 는 씽씽하네..보수 놈 들..군대는 안가고 애국이냐..#@이름#,#@이름#,',
'label': 'L1_hate',
'target_label': ['political'],
'offensiveness_rationale': [[7, 8], [11, 15], [27, 28]],
'target_rationale': [[24, 26], [46, 51], [52, 57]]}
```
- Abusive language categories (**label**)
- L2_hate: Comments with explicit forms of hate expressions toward one of the groups of protected attributes (e.g., gender, age, race, ...)
- L1_hate: Comments with more implicit forms of hate expressions
- Offensive: Comments that express offensiveness but not toward a protected attribute group
- Normal: The rest comments
- Multi-label target categories (**target_label**): list of offensiveness targets. A comment can have zero or multiple targets.
- List of target categories: gender, age, race, religion, politics, job, disability, individuals, and others.
- Annotators' rationales for the strength of ratings (**offensiveness_rationale**): lists of highlight spans supporting the rating, each given as start and end character indices.
- Annotators' rationales for the target of offensiveness (**target_rationale**)
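Assuming the indices are [start, end) character offsets (worth verifying against the released data), the rationale spans can be recovered by slicing:

```python
def extract_spans(text, spans):
    # Assumes [start, end) character offsets; verify against the data.
    return [text[start:end] for start, end in spans]

# Illustrative example, not a real corpus entry.
text = 'this comment contains an offensive word here'
print(extract_spans(text, [[25, 34]]))  # ['offensive']
```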
### Dataset split
We provide the dataset as splits of 172,158 (train), 10,000 (validation), and 10,000 (test) examples. The label ratio was preserved (stratified split).
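A stratified split of this kind can be sketched as follows; this is a dependency-free illustration with toy sizes, not the released split:

```python
import random
from collections import defaultdict

def stratified_split(rows, label_key, test_frac, seed=0):
    # Group rows by label, then take the same fraction from each group
    # so the label ratio is preserved in both splits.
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for row in rows:
        by_label[row[label_key]].append(row)
    train, test = [], []
    for group in by_label.values():
        rng.shuffle(group)
        cut = int(len(group) * test_frac)
        test.extend(group[:cut])
        train.extend(group[cut:])
    return train, test

rows = [{'label': 'normal'}] * 8 + [{'label': 'L1_hate'}] * 4
train, test = stratified_split(rows, 'label', 0.25)
print(len(train), len(test))  # 9 3
```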
### Labeling guidelines
Labeling guidelines are available as a part of SELECTSTAR open datasets (in Korean). [link](https://open.selectstar.ai/ko/?page_id=5948)
# 📜 Data statement
We present the data statement for responsible usage [(Bender and Friedman, 2018)](https://aclanthology.org/Q18-1041/).
### Curation Rationale
We collected the raw data from the news aggregator of Naver, the largest news portal in Korea. We targeted news articles published in the society, world news, and politics sections because discussion is particularly active on hard news.
### Language Variety
Our dataset consists of the news comments in Korean (ko-KR).
### Speaker Demographic
The user demographic is not available. However, considering that the portal site has the largest share among Korean users, it can be assumed that the speakers are mostly Korean.
### Annotator Demographic
A total of 405 workers participated in the annotation: 21 in their teens, 222 in their 20s, 116 in their 30s, 35 in their 40s, 9 in their 50s, and 2 in their 60s.
### Speech Situation
News articles in the hard news sections deal with controversial events, so hate or toxic comments are more likely to appear. The target articles were published between July 2021 and August 2021. During that period, the most controversial events were the South Korean presidential election, the Tokyo Olympics, COVID-19, and the restoration of Taliban control, among others.
### Text Characteristics
It includes hate expressions specific to Korea, such as hatred of certain political orientations and certain groups. For example, '대깨문' (a slur for supporters of former Korean president Moon) and '꼴페미' (a slur for feminists).
# 🤝 License & Contributors
### Licensing information
This dataset is shared under CC-BY 4.0.
According to this license, you are free to use the dataset as long as you provide appropriate attribution (e.g., citing our paper).
### Citation information
```
@article{park2023haters,
title={K-HATERS: A Hate Speech Detection Corpus in Korean with Target-Specific Ratings},
author={Park, Chaewon and Kim, Suhwan and Park, Kyubyong and Park, Kunwoo},
journal={Findings of the EMNLP 2023},
year={2023}
}
```
### Contributions
- Chaewon Park
- Suhwan Kim (TUNiB)
- Kyubyong Park (TUNiB)
- Kunwoo Park
#-->"
sentence-transformers/parallel-sentences-global-voices,"{""language"": [""en"", ""multilingual"", ""ar"", ""bg"", ""ca"", ""cs"", ""da"", ""de"", ""el"", ""es"", ""fa"", ""fr"", ""he"", ""hi"", ""hu"", ""id"", ""it"", ""ko"", ""mk"", ""my"", ""nl"", ""pl"", ""pt"", ""ro"", ""ru"", ""sq"", ""sr"", ""sv"", ""tr"", ""ur""], ""size_categories"": [""1M
## Dataset Description
- **Homepage:** [SIL AI](https://ai.sil.org/)
- **Point of Contact:** [SIL AI email](mailto:idx_aqua@sil.org)
- **Source Data:** [Bloom Library](https://bloomlibrary.org/)
 
## Dataset Summary
**Bloom** is free, open-source software and an associated website [Bloom Library](https://bloomlibrary.org/), app, and services developed by [SIL International](https://www.sil.org/). Bloom’s primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development.
This version of the Bloom Library data is developed specifically for the visual storytelling (VIST) task. It includes data from 364 languages across 31 language families. There is a mean of 32 stories and a median of 2 stories per language.
**Note**: If you speak one of these languages and can help provide feedback or corrections, please let us know!
**Note**: Although this data was used in the training of the [BLOOM model](https://huggingface.co/bigscience/bloom), this dataset only represents a small portion of the data used to train that model. Data from ""Bloom Library"" was combined with a large number of other datasets to train that model. ""Bloom Library"" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the ""Bloom"" name before it was cool. 😉
## Languages
Of the 500+ languages listed at BloomLibrary.org, there are 363 languages available in this dataset. Here are the corresponding ISO 639-3 codes:
aaa, abc, ada, adq, aeu, afr, agq, ags, ahk, aia, ajz, aka, ame, amh, amp, amu, ann, aph, awa, awb, azn, azo, bag, bam, baw, bax, bbk, bcc, bce, bec, bef, ben, bfd, bfm, bfn, bgf, bho, bhs, bis, bjn, bjr, bkc, bkh, bkm, bkx, bob, bod, boz, bqm, bra, brb, bri, brv, bss, bud, buo, bwt, bwx, bxa, bya, bze, bzi, cak, cbr, ceb, cgc, chd, chp, cim, clo, cmn, cmo, csw, cuh, cuv, dag, ddg, ded, deu, dig, dje, dmg, dnw, dtp, dtr, dty, dug, eee, ekm, enb, enc, eng, ewo, fas, fil, fli, fon, fra, fub, fuh, gal, gbj, gou, gsw, guc, guj, guz, gwc, hao, hat, hau, hbb, hig, hil, hin, hla, hna, hre, hro, idt, ilo, ind, ino, isu, ita, jgo, jmx, jpn, jra, kak, kam, kan, kau, kbq, kbx, kby, kek, ken, khb, khm, kik, kin, kir, kjb, kmg, kmr, kms, kmu, kor, kqr, krr, ksw, kur, kvt, kwd, kwu, kwx, kxp, kyq, laj, lan, lao, lbr, lfa, lgg, lgr, lhm, lhu, lkb, llg, lmp, lns, loh, lsi, lts, lug, luy, lwl, mai, mal, mam, mar, mdr, mfh, mfj, mgg, mgm, mgo, mgq, mhx, miy, mkz, mle, mlk, mlw, mmu, mne, mnf, mnw, mot, mqj, mrn, mry, msb, muv, mve, mxu, mya, myk, myx, mzm, nas, nco, nep, new, nge, ngn, nhx, njy, nla, nld, nlv, nod, nsk, nsn, nso, nst, nuj, nwe, nwi, nxa, nxl, nya, nyo, nyu, nza, odk, oji, oki, omw, ori, ozm, pae, pag, pan, pbt, pce, pcg, pdu, pea, pex, pis, pkb, pmf, pnz, por, psp, pwg, qub, quc, quf, quz, qve, qvh, qvm, qvo, qxh, rel, rnl, ron, roo, rue, rug, rus, san, saq, sat, sdk, sea, sgd, shn, sml, snk, snl, som, sot, sox, spa, sps, ssn, stk, swa, swh, sxb, syw, taj, tam, tbj, tdb, tdg, tdt, teo, tet, tgk, tha, the, thk, thl, thy, tio, tkd, tnl, tnn, tnp, tnt, tod, tom, tpi, tpl, tpu, tsb, tsn, tso, tuv, tuz, tvs, udg, unr, urd, uzb, ven, vie, vif, war, wbm, wbr, wms, wni, wnk, wtk, xho, xkg, xmd, xmg, xmm, xog, xty, yas, yav, ybb, ybh, ybi, ydd, yea, yet, yid, yin, ymp, zaw, zho, zlm, zuh, zul
## Dataset Statistics
Some of the languages in the dataset include only one or a few ""stories."" For those with more stories available, the counts are as follows:
| ISO639-3 Code | Stories | Image-Caption Pairs |
|:-----------|----------:|----------------------:|
| ahk | 55 | 493 |
| awa | 163 | 1200 |
| ben | 220 | 1938 |
| bho | 172 | 1163 |
| bis | 21 | 183 |
| brb | 22 | 330 |
| bzi | 66 | 497 |
| cak | 50 | 694 |
| ceb | 394 | 2806 |
| cgc | 182 | 1473 |
| deu | 22 | 250 |
| dty | 172 | 1310 |
| eng | 2187 | 24338 |
| fas | 128 | 620 |
| fil | 34 | 366 |
| fra | 315 | 4350 |
| hat | 224 | 1881 |
| hau | 229 | 1594 |
| ind | 232 | 1866 |
| jra | 56 | 575 |
| kak | 195 | 1416 |
| kek | 21 | 419 |
| khb | 31 | 167 |
| khm | 26 | 246 |
| kir | 278 | 2866 |
| kjb | 63 | 584 |
| kor | 129 | 2732 |
| krr | 29 | 362 |
| lsi | 22 | 173 |
| mai | 177 | 1186 |
| mam | 118 | 1058 |
| mhx | 51 | 544 |
| myk | 22 | 214 |
| nep | 194 | 1464 |
| new | 177 | 1225 |
| pbt | 203 | 979 |
| por | 148 | 2939 |
| quc | 99 | 817 |
| rus | 271 | 2977 |
| snk | 21 | 210 |
| spa | 444 | 5201 |
| swh | 34 | 387 |
| tdg | 31 | 231 |
| tha | 275 | 2929 |
| thl | 185 | 1464 |
| tpi | 137 | 1528 |
| tpu | 28 | 513 |
| zho | 42 | 339 |
## Dataset Structure
### Data Instances
The examples look like this for Hindi:
```
from datasets import load_dataset
# Specify the language code.
dataset = load_dataset(""sil-ai/bloom-vist"", 'hin')
# An individual sample consists of a story in the specified language.
# To see a story:
print(dataset['train'][0]['story'])
```
This would produce an output:
```
{'image_id': ['4e9bdde5-996d-4a98-ac1c-d80fb6349314',
'614e4d51-bbdb-4538-98d3-f603c12dccd0',
'970d60bf-2acb-44ac-8ffb-5aa3f7989630',
'd4ad1199-863e-4929-a377-93276fe5caa8',
'0d9ad694-995a-433d-af4e-6f40ddfa208a',
'811176eb-c9f3-4226-8af5-e6c4e524c494',
'83180da7-4ba8-4104-a0d9-49aa2ef48f7a'],
'image_url': ['https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_03_Image_00011.png',
'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_04_Image_0001.png',
'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_05_Image_0001.png',
'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_06_Image_0001.png',
'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_07_Image_0001.png',
'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_07_Image_00011.png',
'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_09_Image_0001.png'],
'story_index': [0, 1, 2, 3, 4, 5, 6],
'story_id': ['cc34c1c7-c086-491b-8e6a-65572e1efdb6',
'cc34c1c7-c086-491b-8e6a-65572e1efdb6',
'cc34c1c7-c086-491b-8e6a-65572e1efdb6',
'cc34c1c7-c086-491b-8e6a-65572e1efdb6',
'cc34c1c7-c086-491b-8e6a-65572e1efdb6',
'cc34c1c7-c086-491b-8e6a-65572e1efdb6',
'cc34c1c7-c086-491b-8e6a-65572e1efdb6'],
'text': ['साबू ने एक कंकड़ को ठोकर मारी। कंकड़ लुढ़कता हुआ एक पेड़ के पास पहुँचा। पेड़ के तने पर मुलायम बाल थे। साबू ने छुए और ऊपर देखा, ऊपर, ऊपर और उससे भी ऊपर...दो आँखें नीचे देख रही थीं।',
'“हेलो, तुम कौन हो?” साबू को बड़ा अचम्भा हुआ।“हेलो, मैं जिराफ़ हूँ। मेरा नाम है जोजो। \xa0मैं तुम्हारे साथ खेल सकता हूँ। मेरी पीठ पर चढ़ जाओ, मैं तुम्हें घुमा के लाता हूँ।”',
'साबू जोजो की पीठ पर चढ़ गया और वे सड़क पर चल निकले। फिर पहाड़ी पर और शहर के बीचों बीच।\nसाबू खुशी से चिल्लाया, “जोजो दाएँ मुड़ो,\n बाएँ मुड़ो और फिर दाएँ।” अब वे उसकी दोस्त मुन्नी के घर पहुँच गये।',
'आज मुन्नी का जन्मदिन था। साबू को जोजो पर सवारी करते देख बच्चों ने ताली बजायी।\xa0\n जोजो ने गुब्बारे लटकाने में आन्टी की मदद करी क्योंकि वह इतना... लम्बा था।\xa0\n कितना आसान था!',
'जोजो ने सब बच्चों को सवारी कराई।\n उनके साथ बॉल भी खेली। बड़े मज़े की पार्टी थी।सब ने गाया, “हैप्पी बर्थ डे टु यू ।”\n आन्टी ने मेज़ पर समोसे, गुलाब जामुन और आइसक्रीम सजाई।',
'जोजो को आइसक्रीम बहुत पसन्द आई। अंकल उसके लिये एक बाल्टी भर के आइसक्रीम लाये। जोजो ने पूरी बाल्टी ख़त्म कर दी। \xa0अब घर जाने का समय हो गया।\n\nसब ने कहा, “बाय बाय जोजो, बाय बाय साबू।” साबू और जोजो घर लौटे।',
'']}
```
### Data Fields
The metadata fields below are available. In terms of licenses, all stories included in the current release are released under a Creative Commons license (even if the individual story metadata fields are missing).
- **id**: id of the sample
- **title**: title of the book, e.g. ""Going to Buy a Book"".
- **license**: specific license used, e.g. ""cc-by-sa"" for ""Creative Commons, by attribution, share-alike"".
- **album_id**: an ID value corresponding to the set of images corresponding to the given story
- **story**: the sequenced story data including lists of image IDs, image URLs, and corresponding text
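Given the parallel lists in `story`, image-text pairs can be reassembled in order of `story_index`; a minimal sketch (field names from the example above, data illustrative):

```python
# Sketch: zip the parallel lists of a 'story' record into
# (image_url, text) pairs ordered by 'story_index'.
story = {
    'story_index': [1, 0],
    'image_url': ['https://example.com/b.png', 'https://example.com/a.png'],
    'text': ['second page', 'first page'],
}

pairs = sorted(
    zip(story['story_index'], story['image_url'], story['text']),
    key=lambda triple: triple[0],
)
ordered = [(url, text) for _, url, text in pairs]
print(ordered[0])  # ('https://example.com/a.png', 'first page')
```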
### Data Splits
Currently all languages include a train split only. In the future, we will be creating manual splits of the data.
## Changelog
- **6 December 2022** - dataset is made public"
ziozzang/EverythingLM-data-V2-Ko,"{""license"": ""mit"", ""language"": [""ko""]}","# Translated into Korean with DeepL
All texts are translated with DeepL (machine translated).
- Issue: some data items are missing because of the DeepL plan and the processing method. I use a very cheap plan; all data was merged into a single file and then split with a bit of code and by hand.
- This is a sample/test run of dataset creation with DeepL.
- Original Dataset: totally-not-an-llm/EverythingLM-data-V2
# EverythingLM V2 Dataset
**EverythingLM V2** is a diverse instruct dataset consisting of 1k of human-assistant conversations. These sets were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.
### Differences for V1:
- All data in V2 is generated by GPT4
- Higher quality dataset generation pipeline:
- More humanlike seed prompts
- Fixed some bugs in the script
- More diverse creative writing
- More diverse seed prompts in general
- Attempt not to overfit the model on complex instructions by occasionally skipping evol
### Cost:
Reproducing this dataset would cost roughly $40.
### Instruction Categories:
- Reasoning
- Creative Writing
- General Knowledge
- Brainstorming
- Search Query
- Coding
- Basic Instruct
We also leverage various system prompts for evol-instruct and for responding to prompts.
This dataset has also been filtered to remove OpenAI alignment.
### How it stands out:
- Long, detailed outputs
- Humanlike creativity
- CoT reasoning
- Complex & challenging tasks
### Plans:
- Train Llama 7b & 13b models (13b model V1 trained)
- Train Llama 70b QLoRA
- Generate V2 of the dataset, with more categories and GPT-4 (DONE) ✓
Included in this repo is the script to generate the dataset."
voice-is-cool/voxtube,{},
xhluca/publichealth-qa,"{""license"": ""cc-by-nc-sa-3.0"", ""task_categories"": [""question-answering""], ""language"": [""ar"", ""en"", ""es"", ""fr"", ""ko"", ""ru"", ""vi"", ""zh""], ""size_categories"": [""n<1K""], ""configs"": [{""config_name"": ""english"", ""default"": true, ""data_files"": [{""split"": ""test"", ""path"": ""data/english.csv""}]}, {""config_name"": ""arabic"", ""data_files"": [{""split"": ""test"", ""path"": ""data/arabic.csv""}]}, {""config_name"": ""chinese"", ""data_files"": [{""split"": ""test"", ""path"": ""data/chinese.csv""}]}, {""config_name"": ""french"", ""data_files"": [{""split"": ""test"", ""path"": ""data/french.csv""}]}, {""config_name"": ""korean"", ""data_files"": [{""split"": ""test"", ""path"": ""data/korean.csv""}]}, {""config_name"": ""korean"", ""data_files"": [{""split"": ""test"", ""path"": ""data/korean.csv""}]}, {""config_name"": ""russian"", ""data_files"": [{""split"": ""test"", ""path"": ""data/russian.csv""}]}, {""config_name"": ""spanish"", ""data_files"": [{""split"": ""test"", ""path"": ""data/spanish.csv""}]}, {""config_name"": ""vietnamese"", ""data_files"": [{""split"": ""test"", ""path"": ""data/vietnamese.csv""}]}]}","# Usage
```python
import datasets
langs = ['arabic', 'chinese', 'english', 'french', 'korean', 'russian', 'spanish', 'vietnamese']
data = datasets.load_dataset('xhluca/publichealth-qa', split='test', name=langs[0])
```
# About
This dataset contains question and answer pairs sourced from Q&A pages and FAQs from CDC and WHO pertaining to COVID-19. They were produced and collected between 2019-12 and 2020-04. They were originally published as an [aggregated Kaggle dataset](https://www.kaggle.com/xhlulu/covidqa).
# License
CDC data is licensed under [CC-BY 3.0](https://web.archive.org/web/20201017141031/https://www2a.cdc.gov/cdcup/library/other/policy.htm) and WHO is licensed under [cc-by-nc-sa-3.0](https://web.archive.org/web/20210701063743/https://www.who.int/about/policies/publishing/copyright).
# Source
This data was originally included in the [COVID-QA dataset](https://www.kaggle.com/datasets/xhlulu/covidqa), where it was known as the multilingual split. The files in this updated repository were generated using the [publichealth-qa repository](https://github.com/xhluca/publichealth-qa)."
traintogpb/aihub-koen-translation-integrated-small-100k,"{""language"": [""en"", ""ko""], ""size_categories"": [""100K Post-processing work details
## OpenOrca-Ko
Repo: [OpenOrca-Ko](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO)
1. NIV // 1,571 samples
2. FLAN // 9,434 samples
3. T0 // 6,351 samples
4. CoT // 2,117 samples
5. KoCoT // 2,159 samples
> Dataset composition
## Translation
Using DeepL Pro API. Thanks.
---
> Below is the original dataset card
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
🐋 The OpenOrca Dataset! 🐋

We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## OpenOrca-Platypus2-13B
Our [latest release](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the then-current state-of-the-art models on BigBench-Hard and AGIEval, achieving ~60% of the improvements reported in the Orca paper.
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
http://AlignmentLab.ai:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx
Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
# Languages
The language of the data is primarily English.
# Dataset Structure
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
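As a sketch of how these fields fit together, the source submix can be recovered from the `id` field. The rows below are hypothetical stand-ins that mimic the schema; only the prefix convention (`niv`, `t0`, `cot`, `flan`) comes from this card, and the exact `id` format is an assumption:

```python
# Hypothetical rows following the four fields listed above; the 'id'
# prefix is assumed to encode which FLAN submix the question came from.
rows = [
    {"id": "niv.242", "system_prompt": "", "question": "...", "response": "..."},
    {"id": "cot.73", "system_prompt": "You are an AI assistant.", "question": "...", "response": "..."},
    {"id": "flan.104", "system_prompt": "", "question": "...", "response": "..."},
]

def submix(row):
    """Return the source FLAN submix ('niv', 't0', 'cot', or 'flan') parsed from 'id'."""
    return row["id"].split(".", 1)[0]

# Select only the chain-of-thought rows by their id prefix.
cot_rows = [r for r in rows if submix(r) == "cot"]
print(len(cot_rows))  # 1
```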
## Data Splits
The data is unsplit.
# Dataset Creation
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This ""reasoning trace"" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
# Dataset Use
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and ""Teknium""},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```"
slone/nllb-200-10M-sample,"{""dataset_info"": {""features"": [{""name"": ""laser_score"", ""dtype"": ""float64""}, {""name"": ""lang1"", ""dtype"": ""string""}, {""name"": ""text1"", ""dtype"": ""string""}, {""name"": ""lang2"", ""dtype"": ""string""}, {""name"": ""text2"", ""dtype"": ""string""}, {""name"": ""blaser_sim"", ""dtype"": ""float64""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2279333006.0, ""num_examples"": 9983398}], ""download_size"": 1825697094, ""dataset_size"": 2279333006.0}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""license"": ""odc-by"", ""task_categories"": [""translation""], ""pretty_name"": ""nllb-200-10M-sample"", ""size_categories"": [""1M
This is the cross-lingual subset of the SWIM-IR dataset, where the query generated is in the target language and the passage is in English.
The SWIM-IR dataset is available as CC-BY-SA 4.0. 18 languages (including English) are available in the cross-lingual dataset.
For full details of the dataset, please read our upcoming [NAACL 2024 paper](https://arxiv.org/abs/2311.05800) and check out our [website](https://github.com/google-research-datasets/swim-ir).
# What is SWIM-IR?
SWIM-IR dataset is a synthetic multilingual retrieval dataset spanning around 29 million retrieval training pairs across 27 languages.
Each question has been automatically generated with the Summarize-then-Ask (STA) prompting technique using PaLM-2 as the question generator.
**Note**: As the question is synthetically generated, there is scope for hallucinations during query generation. The hallucinated queries do not affect retrieval effectiveness.
If you are using SWIM-IR in your research, please cite the following paper:
```
@article{thakur:2023,
author = {Nandan Thakur and
Jianmo Ni and
Gustavo Hern{\'{a}}ndez {\'{A}}brego and
John Wieting and
Jimmy Lin and
Daniel Cer},
title = {Leveraging LLMs for Synthesizing Training Data Across Many Languages
in Multilingual Dense Retrieval},
journal = {CoRR},
volume = {abs/2311.05800},
year = {2023},
url = {https://doi.org/10.48550/arXiv.2311.05800},
doi = {10.48550/ARXIV.2311.05800},
eprinttype = {arXiv},
eprint = {2311.05800},
timestamp = {Tue, 14 Nov 2023 14:47:55 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2311-05800.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## Dataset Details
### Dataset Description
- **Homepage:** [SWIM-IR homepage](https://github.com/google-research-datasets/swim-ir)
- **Repository:** [SWIM-IR repository](https://github.com/google-research-datasets/swim-ir)
- **Paper:** [Leveraging LLMs for Synthesizing Training Data Across Many Languages in Multilingual Dense Retrieval
](https://arxiv.org/abs/2311.05800)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Nandan Thakur](mailto:nandan.thakur@uwaterloo.ca)
#### Dataset Link
SWIM-IR v1.0: http://storage.googleapis.com/gresearch/swim-ir/swim_ir_v1.tar.gz
#### Data Card Author(s)
- **Nandan Thakur, University of Waterloo:** Owner
- **Daniel Cer, Google Research:** Owner
- **Jianmo Ni, Google DeepMind:** Contributor
- **John Wieting, Google DeepMind:** Contributor
- **Gustavo Hernandez Abrego, Google Research:** Contributor
- **Jimmy Lin, University of Waterloo:** Contributor
## Authorship
### Publishers
#### Publishing Organization(s)
University of Waterloo, Google Research, Google DeepMind
#### Industry Type(s)
- Corporate - Tech
- Academic - Tech
### Dataset Owners
#### Team(s)
SWIM-IR Team
#### Contact Detail(s)
- **Dataset Owner(s):** Nandan Thakur, Daniel Cer
- **Affiliation:** University of Waterloo, Google Research
- **Contact:** [nandan.thakur@uwaterloo.ca](mailto:nandan.thakur@uwaterloo.ca)
## Dataset Overview
#### Data Subject(s)
- Synthetically generated data
#### Dataset Snapshot
SWIM-IR is a synthetic multilingual retrieval training dataset.
It contains training pairs for both settings: monolingual, i.e. within the same language, and cross-lingual, i.e. across languages.
The dataset is useful for fine-tuning state-of-the-art (SoTA) monolingual and cross-lingual neural retrievers across diverse languages.
Category | Data
--- | ---
Size of Dataset | ~6-7 GB
Number of Instances | 28,265,848
Number of Fields | 6
Labeled Classes | 33*
Number of Labels | 1
**Above:** The dataset statistics comprise both in-language and cross-language settings. The classes above denote languages.
**Additional Notes:** (*) Classes denote the languages covered in the SWIM-IR dataset. Here is a list of the languages and their ISO codes, in alphabetical order:
Arabic (ar), Bengali (bn), German (de), English (en), Spanish (es), Persian (fa), Finnish (fi), French (fr), Hindi (hi), Indonesian (id), Japanese (ja), Korean (ko), Russian (ru), Swahili (sw), Thai (th), Yoruba (yo),
Chinese (zh), plus the remaining 15 Indo-European languages: Assamese (as), Bhojpuri (bho), Konkani (gom), Gujarati (gu), Kannada (kn), Maithili (mai), Malayalam (ml), Manipuri (mni), Marathi (mr), Odia (or), Punjabi (pa), Pashto (ps), Sanskrit (sa), Tamil (ta), Urdu (ur).
#### Content Description
A paragraph is sampled from the Wikipedia corpus which describes an entity. The question arising from the Wikipedia
paragraph is generated using a large language model (LLM). In our work, we used the PaLM 2-S (small) model to generate
synthetic queries across **33 languages**, covering 11 distinct scripts, and 10 language families comprising over 3 billion speakers in the world.
The SWIM-IR dataset contains about **28 million** Wikipedia synthetic query-paragraph training pairs with a multilingual query for each passage generated using PaLM 2 (small),
for both cross-lingual and monolingual retrieval settings.
**Additional Notes:**
- The dataset creation follows a specific procedure that involves a `summarize-then-ask` prompting technique inspired by chain-of-thought prompting.
- PaLM 2 uses **summarize-then-ask prompting** containing 5-shot exemplars for cross-lingual and 3-shot exemplars for monolingual query generation.
- For cross-lingual generation, the prompt includes the original paragraph, a human-generated summary, and a question translated from English using machine translation (MT);
- for monolingual generation, it uses randomly sampled training dataset pairs and summaries generated using Google Bard.
- PaLM 2 generates an extractive summary which is used as a proxy to help understand the document and highlight relevant sections within the document.
- Finally, the model generates a question in the target language (different in cross-lingual or same in monolingual) which can be answered using the input paragraph.
### Sensitivity of Data
#### Sensitivity Type(s)
- None
#### Field(s) with Sensitive Data
**Intentional Collected Sensitive Data**
No sensitive data was intentionally collected.
**Unintentionally Collected Sensitive Data**
S/PII, violent, abusive, or toxic text containing racial slurs were not explicitly collected as a part of the dataset creation
process. Sensitive subject and adult content was automatically filtered using the method described in (Thakur et al. 2023).
#### Security and Privacy Handling
We used algorithmic methods and relied on other classifiers for data filtration. Specifically, we (1) did a human inspection of text samples, with the questions automatically translated to English; (2) our observations motivated using a classifier to filter text containing sensitive subjects and adult content.
## Example of Data Points
#### Primary Data Modality
- Text Data
#### Data Fields
| Field name | Type | Description |
| --------- | -------- | -------- |
| `lang` | String | The language of the generated question |
| `code` | String | The ISO code of the language |
| `query` | String | The query generated using PaLM 2 |
| `_id` | String | Unique ID denoting the training pair |
| `title` | String | Title of the Wikipedia article |
| `text` | String | Paragraph of the Wikipedia article |
#### Typical Data Point
Example of an (English → Japanese) datapoint from our cross-lingual dataset, on the topic of “The Roki Tunnel” from the English Wikipedia:
```bash
{
'_id': '1234',
'lang': 'Japanese',
'code': 'ja',
'query': 'The Roki Tunnel は、北オセチア自治共和国と南オセチア共
和国の間を通る唯一の道路ですか?',
'title': 'The Roki Tunnel',
'text': ""The Roki Tunnel (also called Roksky Tunnel, ; Ossetic:
Ручъы тъунел; ) is a mountain tunnel of the Transkam road
through the Greater Caucasus Mountains, north of the village
Upper Roka. It is the only road joining North Ossetia–Alania in
the Russian Federation into South Ossetia, a breakaway
republic of Georgia. The road is manned at the town of Nizhny
Zaramag in North Ossetia and is sometimes referred to as the
Roki-Nizhny Zaramag border crossing. The tunnel, completed
by the Soviet government in 1984, is one of only a handful of
routes that cross the North Caucasus Range.""
}
```
Example of a Hindi (hn) datapoint from our monolingual dataset, on the topic of “Aryabhata” from the Hindi Wikipedia:
```bash
{
'_id': 'hindi_8987#4',
'lang': 'Hindi',
'code': 'hn',
'query': 'आर्यभर्य ट केरल के कि स स्थान के नि वासी थे ?',
'title': 'आर्यभर्य ट',
'text': ""एक ताजा अध्ययन के अनसु ार आर्यभर्य ट, केरल के
चाम्रवत्तम (१०उत्तर५१, ७५पर्वू ४र्व ५) के नि वासी थे। अध्ययन के अनसु ार
अस्मका एक जनै प्रदेश था जो कि श्रवणबेलगोल के चारों तरफ फैला
हुआ था और यहाँके पत्थर के खम्बों के कारण इसका नाम अस्मका
पड़ा। चाम्रवत्तम इस जनै बस्ती का हि स्सा था, इसका प्रमाण है
भारतापझु ा नदी जि सका नाम जनै ों के पौराणि क राजा भारता के नाम
पर रखा गया है। आर्यभर्य ट ने भी यगु ों को परि भाषि त करते वक्त राजा
भारता का जि क्र कि या है- दसगीति का के पांचवें छंद में राजा भारत
के समय तक बीत चकुे काल का वर्णनर्ण आता है। उन दि नों में
कुसमु परुा में एक प्रसि द्ध वि श्ववि द्यालय था जहाँजनै ों का नि र्णा यक
प्रभाव था और आर्यभर्य ट का काम इस प्रकार कुसमु परुा पहुँच सका और
उसे पसदं भी कि या गया।""
}
```
#### Atypical Data Point
The dataset does not contain atypical data points as far as we know.
## Motivations & Intentions
### Motivations
#### Purpose(s)
- Research
#### Domain(s) of Application
`Multilingual Dense Retrieval`, `Synthetic Dataset`
## Provenance
### Collection
#### Method(s) Used
- Artificially Generated
- Taken from other existing datasets
#### Methodology Detail(s)
**Collection Type**
**Source:** TyDI-QA dataset which provided the English Wikipedia dataset for SWIM cross-lingual IR dataset. MIRACL
provided the language-specific Wikipedia datasets for monolingual SWIM-IR datasets.
**Is this source considered sensitive or high-risk?** [Yes/**No**]
**Dates of Collection:** TyDI-QA [unknown - 01/02/2019], MIRACL [unknown - 01/02/2023], XTREME-UP [unknown - 01/02/2023]
**Primary modality of collection data:**
- Text Data
**Update Frequency for collected data:**
- Static
#### Source Description(s)
- **TyDI-QA:** TyDi-QA [(Clark et al. 2020)](https://aclanthology.org/2020.tacl-1.30/) provided the English Wikipedia passages which have been split into 100-word long paragraphs. It contains around 18.2M passages from the complete English Wikipedia. We selected passages with a maximum of 1M pairs for each language pair (for 17 languages) at random for the preparation of our cross-lingual SWIM-IR dataset.
- **MIRACL:** MIRACL [(Zhang et al. 2023)](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00595/117438/MIRACL-A-Multilingual-Retrieval-Dataset-Covering) provides language-specific paragraphs from the Wikipedia Corpus. The paragraphs were generated by splitting on the “\n\n” delimiter. The MIRACL dataset provides corpora for 18 languages. We selected passages with a maximum of 1M pairs for each language at random for the preparation of our mono-lingual SWIM-IR dataset.
- **XTREME-UP:** XTREME-UP [(Ruder et al. 2023)](https://aclanthology.org/2023.findings-emnlp.125/) provides a 120K sample of the TyDi-QA (Clark et al. 2020) English Wikipedia passages which have been split into 100-word long paragraphs. This sample has been used in the original dataset for cross-language question answering.
#### Collection Cadence
**Static:** Data was collected once from single or multiple sources.
#### Data Integration
**TyDi-QA (XOR-Retrieve and XTREME-UP)**
**Included Fields**
The English Wikipedia title, text, and `_id` fields were taken from the TyDi-QA dataset originally provided as a TSV file containing all fields.
**Excluded Fields**
The rest of the metadata apart from the fields mentioned above were excluded from our SWIM-IR dataset. We do not use any training data provided from the TyDI-QA dataset.
**MIRACL**
**Included Fields**
The Language Wikipedia title, text, and `_id` fields were taken from the MIRACL dataset, originally provided as a JSON-lines file containing all fields.
**Excluded Fields**
The rest of the metadata apart from the fields mentioned above were excluded from our SWIM-IR dataset. We do not use any training data provided from the MIRACL dataset.
#### Data Processing
All data comes directly from the TyDi-QA and MIRACL datasets without any preprocessing.
### Collection Criteria
#### Data Selection
For the Cross-lingual SWIM-IR dataset, we use a stratified sampling technique to select a subset of passages from the English Wikipedia corpus. We use it to generate questions for SWIM-IR. We ensure all languages have relatively an equal amount of training samples, wherever possible. Our Wikipedia corpus contains entities that are sorted alphabetically (A-Z). We then compute inclusion threshold $I_{th}$, which is defined as $I_{th} = D_{sample} / D_{total}$, where $(D_{sample})$ is number of passages required to sample and $(D_{total})$ is the total numbers of passages in corpus. Next, for each passage ($p_i$) in the corpus, we randomly generate an inclusion probability $\hat{p_i} \in [0,1]$. We select the passage ($p_i$) if $p_i \leq I_{th}$. This ensures uniform sampling of passages with Wikipedia entities between all letters (A-Z).
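A minimal sketch of this inclusion-threshold sampling, assuming a simple uniform random draw per passage (function and variable names are ours, not from the paper's code):

```python
import random

def sample_passages(passages, n_sample, seed=0):
    """Keep passage p_i when its random inclusion draw falls below
    I_th = D_sample / D_total, giving a uniform subset over the sorted corpus."""
    rng = random.Random(seed)
    i_th = n_sample / len(passages)
    return [p for p in passages if rng.random() <= i_th]

corpus = [f"passage-{i:06d}" for i in range(100_000)]
subset = sample_passages(corpus, n_sample=10_000)
print(len(subset))  # close to 10,000 in expectation
```

Because each passage is included independently with probability I_th, the sample size is only approximately D_sample, but the selection is uniform across the alphabetically sorted entities.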
For the Monolingual SWIM-IR dataset, the language selection criteria were dependent on the Wikipedia corpora availability for the monolingual task. Hence, we chose to fix on the 18 languages provided in MIRACL. To complete the dataset, we included the same languages for the cross-lingual task.
#### Data Inclusion
We include all data available in TyDi-QA English Wikipedia Corpus (maximum of 1M training pairs per language pair), which we use to generate our cross-lingual SWIM-IR dataset. We use the language-specific MIRACL Wikipedia corpora to generate our monolingual queries in SWIM-IR.
#### Data Exclusion
We removed data classified as containing sensitive subjects and adult content using the method described in our paper. No additional filters were applied for data exclusion from MIRACL or TyDi-QA.
The TyDi-QA English paragraph data has been split with a maximum of up to 100 tokens. However, MIRACL used the “\n\n” delimiter to segment paragraphs from the Wikipedia articles."
OzoneAsai/4typeCalculation,"{""license"": ""wtfpl"", ""tag"": ""conversational"", ""task_categories"": [""conversational""], ""language"": [""en"", ""zh"", ""de"", ""ru"", ""ko"", ""fr"", ""ja""]}","# Dataset Card for Calculation
### Size
JSON files: output1.json through output60.json, ≈1.3 GB each; 70–80 GB in total.
### Dataset Summary
**en**: Calculation. Its range will be expanded later.
**zh**: 计算。其范围将在以后扩展。
**de**: Berechnung. Der Umfang wird später erweitert werden.
**ru**: Расчет. Его диапазон будет расширен позже.
**ko**: 계산. 범위는 나중에 확장될 것입니다.
**fr**: Calcul. Sa portée sera étendue ultérieurement.
**ja**: 計算。範囲は後で拡張されます。
### Supported Tasks and Leaderboards
**en**: conversation, instruction
**zh**: 会话,指令
**de**: Unterhaltung, Anweisung
**ru**: разговор, инструкция
**ko**: 대화, 지시사항
**fr**: conversation, instruction
**ja**: 会話、指示
### Languages
**en**: It only used numbers and symbols. So any language is able to use this.
**zh**: 该数据集只使用数字和符号。因此任何语言都可以使用它。
**de**: Es werden nur Zahlen und Symbole verwendet. Daher kann diese Datenbank von jeder Sprache verwendet werden.
**ru**: В нем используются только цифры и символы. Таким образом, любой язык может использовать его.
**ko**: 숫자와 기호만 사용되었습니다. 그래서 모든 언어에서 사용할 수 있습니다.
**fr**: Il n'utilise que des chiffres et des symboles. Ainsi, n'importe quelle langue peut l'utiliser.
**ja**: 数字と記号のみが使用されています。したがって、どんな言語でも使用できます.
## Dataset Structure
Each example has two fields: input and output.
## Translation
Translated by ChatGPT"
allganize/RAG-Evaluation-Dataset-KO,"{""language"": [""ko""], ""license"": ""mit""}","# Allganize RAG Leaderboard
The Allganize RAG Leaderboard evaluates Korean RAG performance across five domains: finance, public sector, medical, law, and commerce.
Typical RAG systems answer simple questions well, but struggle with questions about tables and images inside documents.
Many companies looking to adopt RAG want a Korean RAG benchmark that reflects their own domain, document types, and question styles.
Evaluation requires a public dataset of documents, questions, and answers, but building one in-house costs significant time and money.
Allganize is now releasing all of its RAG evaluation data.
RAG consists of three main parts: Parser, Retrieval, and Generation.
Among the RAG leaderboards public today, none covers all three parts in Korean.
For the Allganize RAG Leaderboard, we uploaded the documents and obtained answers using questions we wrote ourselves.
We measured the performance of each RAG method by applying automatic evaluation to the generated answers against the reference answers.
# RAG Benchmark
| RAG | Finance | Public | Medical | Law | Commerce | Average |
|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| Alli (claude3.5-sonnet) | **0.85 (51/60)** | **0.983 (59/60)** | 0.85 (51/60) | **0.767 (46/60)** | 0.783 (47/60) | **0.847 (254/300)** |
| Alli (claude3-opus) | 0.817 (49/60) | 0.95 (57/60) | **0.9 (54/60)** | 0.75 (45/60) | 0.767 (46/60) | 0.837 (251/300) |
| Alli (gpt-4o) | 0.8 (48/60) | 0.9 (54/60) | 0.817 (49/60) | 0.683 (41/60) | 0.717 (43/60) | 0.783 (235/300) |
| Alli (gpt-4) | 0.833 (50/60) | 0.85 (51/60) | 0.733 (44/60) | 0.733 (44/60) | 0.733 (44/60) | 0.777 (233/300) |
| Alli (gpt-4-turbo) | 0.783 (47/60) | 0.9 (54/60) | 0.733 (44/60) | 0.717 (43/60) | 0.733 (44/60) | 0.773 (232/300) |
| Alli (alpha-ko-202411-32B) | 0.8 (48/60) | 0.85 (51/60) | 0.75 (45/60) | 0.717 (43/60) | 0.733 (44/60) | 0.77 (231/300) |
| Alli (gpt-4o-mini) | 0.75 (45/60) | 0.883 (53/60) | 0.7 (42/60) | 0.733 (44/60) | 0.75 (45/60) | 0.763 (229/300) |
| Upstage (gpt-4-turbo) | 0.617 (37/60) | 0.85 (51/60) | 0.833 (50/60) | 0.6 (36/60) | **0.817 (49/60)** | 0.743 (223/300) |
| OpenAI Assistant (gpt-4-turbo) | 0.533 (32/60) | 0.883 (53/60) | 0.733 (44/60) | 0.733 (44/60) | 0.783 (47/60) | 0.733 (220/300) |
| OpenAI Assistant (gpt-4) | 0.717 (43/60) | 0.783 (47/60) | 0.767 (46/60) | 0.517 (31/60) | 0.75 (45/60) | 0.707 (212/300) |
| Upstage (gpt-4) | 0.6 (36/60) | 0.783 (47/60) | 0.75 (45/60) | 0.583 (35/60) | 0.783 (47/60) | 0.7 (210/300) |
| Alli (Llama-3-Alpha-Ko-8B-Instruct-Pro) | 0.683 (41/60) | 0.767 (46/60) | 0.633 (38/60) | 0.583 (35/60) | 0.7 (42/60) | 0.673 (202/300) |
| Alli ([KONI-Llama3-8B-Instruct-20240729](https://huggingface.co/KISTI-KONI/KONI-Llama3-8B-Instruct-20240729)) | 0.683 (41/60) | 0.7 (42/60) | 0.533 (32/60) | 0.567 (34/60) | 0.75 (45/60) | 0.647 (194/300) |
| Upstage (solar) | 0.6 (36/60) | 0.683 (41/60) | 0.733 (44/60) | 0.433 (26/60) | 0.717 (43/60) | 0.633 (190/300) |
| Langchain (gpt-4-turbo) | 0.617 (37/60) | 0.517 (31/60) | 0.667 (40/60) | 0.567 (34/60) | 0.683 (41/60) | 0.61 (183/300) |
| Cohere (command-r-plus) | 0.483 (29/60) | 0.65 (39/60) | 0.433 (26/60) | 0.517 (31/60) | 0.683 (41/60) | 0.553 (166/300) |
| Cohere (command-r) | 0.5 (30/60) | 0.633 (38/60) | 0.417 (25/60) | 0.533 (32/60) | 0.667 (40/60) | 0.55 (165/300) |
| Upstage (gpt-3.5-turbo) | 0.5 (30/60) | 0.517 (31/60) | 0.567 (34/60) | 0.417 (25/60) | 0.617 (37/60) | 0.523 (157/300) |
| Alli ([Llama-3-Alpha-Ko-8B-Instruct](https://huggingface.co/allganize/Llama-3-Alpha-Ko-8B-Instruct)) | 0.533 (32/60) | 0.55 (33/60) | 0.533 (32/60) | 0.417 (25/60) | 0.55 (33/60) | 0.517 (155/300) |
| Langchain (gpt-3.5-turbo) | 0.4 (24/60) | 0.333 (20/60) | 0.417 (25/60) | 0.35 (21/60) | 0.467 (28/60) | 0.393 (118/300) |
| Anything LLM (gpt-4-turbo) | 0.267 (16/60) | 0.067 (4/60) | 0.55 (33/60) | 0.283 (17/60) | 0.283 (17/60) | 0.29 (87/300) |
| Anything LLM (claude3-opus) | 0.267 (16/60) | 0.067 (4/60) | 0.55 (33/60) | 0.317 (19/60) | 0.45 (27/60) | 0.33 (99/300) |
| Anything LLM (gpt-3.5-turbo) | 0.133 (8/60) | 0.033 (2/60) | 0.233 (14/60) | 0.15 (9/60) | 0.233 (14/60) | 0.157 (47/300) |
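Each cell above reads accuracy (correct/total), and the Average column pools all five domains. For the top row, Alli (claude3.5-sonnet), the arithmetic is:

```python
# Per-domain (correct, total) counts for the Alli (claude3.5-sonnet) row above.
domains = {
    "finance": (51, 60),
    "public": (59, 60),
    "medical": (51, 60),
    "law": (46, 60),
    "commerce": (47, 60),
}

correct = sum(c for c, _ in domains.values())
total = sum(t for _, t in domains.values())
print(round(correct / total, 3))  # 0.847 (i.e. 254/300)
```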
# Auto Evaluate
We evaluated with four LLM evaluators in total, then voted to decide "O" (correct) or "X" (incorrect):
- TonicAI : answer_similarity (threshold=4)
- MLflow : answer_similarity/v1/score (threshold=4)
- MLflow : answer_correctness/v1/score (threshold=4)
- Allganize Eval : answer_correctness/claude3-opus
Because this is an LLM-based evaluation method, it carries an error rate.
Compared with human evaluation on the finance domain, the error rate was about 8%.
We have packaged the auto-evaluation so it can be run in Colab.
- [Colab](https://colab.research.google.com/drive/1c9hH429iAqw4xkgKoQq1SC9f_4p_nwcc?usp=sharing)
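The voting step can be sketched as a simple majority over the four evaluators' pass/fail verdicts (the verdict values below and the tie-breaking rule are illustrative assumptions, not taken from the evaluation code):

```python
def vote(evaluator_passes):
    """Majority vote: 'O' if more than half of the evaluators judged the
    answer correct, else 'X' (ties count as 'X' in this sketch)."""
    passed = sum(evaluator_passes.values())
    return "O" if passed > len(evaluator_passes) / 2 else "X"

verdicts = {  # hypothetical verdicts after applying each threshold=4 cutoff
    "tonic_answer_similarity": True,
    "mlflow_answer_similarity": True,
    "mlflow_answer_correctness": False,
    "allganize_answer_correctness": True,
}
print(vote(verdicts))  # O
```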
# Dataset
### Domain
Out of many possible domains, we selected five for the evaluation:
- finance
- public
- medical
- law
- commerce
### Documents
We collected PDF documents per domain and generated questions from them.
Documents were collected so that the total page count per domain is about 200–300.
The name, page count, and link for each document can be found by downloading [documents.csv](https://huggingface.co/datasets/allganize/RAG-Evaluation-Dataset-KO/blob/main/documents.csv).
The number of PDF documents per domain is:
- finance: 10 (301 pages)
- public: 12 (258 pages)
- medical: 20 (276 pages)
- law: 12 (291 pages)
- commerce: 9 (211 pages)
### Question and Target answer
Looking at the page content of each document, we wrote questions a user might plausibly ask, along with target answers.
Each domain has 60 questions.
### Context type
We generated questions that could arise from each document page.
For each question, we classified whether its supporting evidence is a paragraph, a table, or an image.
This evidence type is recorded per question in an added `context_type` column.
The per-domain ratio of context_type was set to reflect how often each type appears across the document pages (e.g., finance domain: 210 paragraphs, 127 tables, 26 images).
The context_type ratios per domain are as follows:
| domain | paragraph | table | image |
| :--------: | :---------: | :--------: | :--------: |
| finance | 30 (50%) | 10 (17%) | 20 (33%) |
| public | 40 (67%) | 15 (25%) | 5 (8%) |
| medical | 45 (75%) | 5 (8%) | 10 (17%) |
| law | 40 (67%) | 15 (25%) | 5 (8%) |
| commerce | 38 (64%) | 5 (8%) | 17 (28%) |
# RAG Solution
### Alli
Alli is Allganize's RAG solution.
The Parser is implemented page by page using the Allganize Parser.
Retrieval is implemented using Hybrid Search.
For Generation, you can simply choose among OpenAI, Claude, and Allganize's own models such as its finance model.
- [Allganize](https://www.allganize.ai/ko/home)
### LangChain
LangChain is a framework for developing applications powered by LLMs.
We evaluated performance based on the LangChain RAG Quick Start.
The Parser uses pypdf.
Chunk size and overlap were set to 1000 and 200, as given in the tutorial.
Retrieval uses OpenAI Embeddings.
For Generation, any model supported by LangChain can be used.
- [LangChain Tutorial](https://python.langchain.com/v0.1/docs/use_cases/question_answering/quickstart/)
- [Colab](https://colab.research.google.com/drive/1Jlzs8ZqFOqqIBBT2T5XGBhr23XxEsvHb?usp=sharing)
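The 1000/200 setting above can be illustrated with plain character-based chunking (LangChain's actual text splitter is more sophisticated; this sketch only shows what chunk size and overlap mean):

```python
def chunk_text(text, chunk_size=1000, overlap=200):
    """Split text into fixed-size chunks where consecutive chunks share
    `overlap` characters, as with the tutorial's 1000/200 settings."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("x" * 2500)
print([len(c) for c in chunks])  # [1000, 1000, 900]
```

The overlap means each retrieval chunk repeats the tail of the previous one, so an answer spanning a chunk boundary is still fully contained in at least one chunk.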
### OpenAI Assistant
OpenAI Assistant is a tool that supports specific features such as File Search and Code Interpreter.
You can upload documents, which are stored in its own vector stores.
When a question is entered, relevant chunks are fetched from the vector stores and fed to the model to produce an answer.
Citations are attached, so you can check which chunks were used.
- [OpenAI](https://platform.openai.com/docs/assistants/tools/file-search/quickstart)
- [Colab](https://colab.research.google.com/drive/1Ag3ylvk3oucQsOPorjgc1C8qZ4JFrJgu?usp=sharing)
### Cohere
Cohere provides text embedding and generation models.
Since Cohere has no document upload and parsing feature, we used LangChain's default parser.
chunk_size was set to 500 and overlap to 200.
We used shorter chunks because Cohere's embedding model has a relatively short maximum length of 512 tokens.
Retrieval uses `embed-multilingual-v3.0`.
Generation was evaluated with `command-r` and `command-r-plus`.
- [Cohere](https://cohere.com/command)
- [Colab](https://colab.research.google.com/drive/1QwozvB-SCeeHhRe6MmlnCETw3bGu9SJe?usp=sharing)
### Anything LLM
Anything LLM is a program that lets you build a RAG pipeline locally with the LLM and vector DB of your choice.
Documents are organized into units called ""Workspaces""; conversations only cover the documents uploaded to each Workspace.
You can download the desktop app, or clone the GitHub repository and run it with docker compose.
The parser and retrieval are implemented with Anything LLM's own methods.
For the generation model, OpenAI or Anthropic models can be used by simply registering an API key.
- [Github link](https://github.com/Mintplex-Labs/anything-llm)
- [Download link](https://useanything.com/download)
### Upstage
Upstage provides text embedding and generation models.
Since Upstage has no document upload and parsing feature, we used LangChain's default parser.
Chunk size and overlap were set to 1000 and 200, following the tutorial.
Retrieval uses `solar-embedding-1-large`.
Generation was evaluated with `solar-1-mini-chat`.
The `gpt4-turbo`, `gpt4`, and `gpt3.5-turbo` results use `solar-embedding-1-large` for embeddings only.
- [Upstage](https://developers.upstage.ai/docs/apis/embeddings)
- [Colab](https://colab.research.google.com/drive/1JE2IXCACSkWeGiu9xvG8kmr0jmtzVzB1?usp=sharing)
# Contributor
- Junghoon Lee (junghoon.lee@allganize.ai)
- Sounghan Kim (sounghan.kim@allganize.ai)
- Yujung Kim (yujung.kim@allganize.ai)
# History Note
### 2024.08.09
- Changed Auto Evaluate from 5 to 4.
- Added models: Alli (gpt-4o-mini), Alli (KONI-Llama3-8B-Instruct-20240729), Alli (Llama-3-Ko-8B-Finance-Evol), Alli (Llama-3-Alpha-Ko-8B-Instruct)"
Unbabel/TowerBlocks-v0.1,"{""language"": [""en"", ""de"", ""fr"", ""zh"", ""pt"", ""nl"", ""ru"", ""ko"", ""it"", ""es""], ""size_categories"": [""100K-
ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab,
aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab,
asm_Beng, ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl,
bam_Latn, ban_Latn, bel_Cyrl, bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn,
bod_Tibt, bos_Latn, bug_Latn, bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn,
cjk_Latn, ckb_Arab, crh_Latn, cym_Latn, dan_Latn, deu_Latn, dik_Latn,
dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn, est_Latn, eus_Latn,
ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn, fra_Latn,
fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr,
hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn,
hye_Armn, ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn,
jpn_Jpan, kab_Latn, kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva,
kat_Geor, knc_Arab, knc_Latn, kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr,
kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn, kon_Latn, kor_Hang, kmr_Latn,
lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn, lit_Latn, lmo_Latn,
ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn, mag_Deva,
mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn,
mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn,
nno_Latn, nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn,
gaz_Latn, ory_Orya, pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn,
prs_Arab, pbt_Arab, quy_Latn, ron_Latn, run_Latn, rus_Cyrl, sag_Latn,
san_Deva, sat_Beng, scn_Latn, shn_Mymr, sin_Sinh, slk_Latn, slv_Latn,
smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn, spa_Latn, als_Latn,
srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn, szl_Latn,
tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi,
taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn,
tur_Latn, twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab,
uzn_Latn, vec_Latn, vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr,
yor_Latn, yue_Hant, zho_Hans, zho_Hant, zul_Latn
configs:
- config_name: default
data_files:
- split: train
path: train/*
- config_name: eng_Latn-ace_Arab
data_files:
- split: train
path: train/eng_Latn-ace_Arab.jsonl
- config_name: eng_Latn-ace_Latn
data_files:
- split: train
path: train/eng_Latn-ace_Latn.jsonl
- config_name: eng_Latn-acm_Arab
data_files:
- split: train
path: train/eng_Latn-acm_Arab.jsonl
- config_name: eng_Latn-acq_Arab
data_files:
- split: train
path: train/eng_Latn-acq_Arab.jsonl
- config_name: eng_Latn-aeb_Arab
data_files:
- split: train
path: train/eng_Latn-aeb_Arab.jsonl
- config_name: eng_Latn-afr_Latn
data_files:
- split: train
path: train/eng_Latn-afr_Latn.jsonl
- config_name: eng_Latn-ajp_Arab
data_files:
- split: train
path: train/eng_Latn-ajp_Arab.jsonl
- config_name: eng_Latn-aka_Latn
data_files:
- split: train
path: train/eng_Latn-aka_Latn.jsonl
- config_name: eng_Latn-als_Latn
data_files:
- split: train
path: train/eng_Latn-als_Latn.jsonl
- config_name: eng_Latn-amh_Ethi
data_files:
- split: train
path: train/eng_Latn-amh_Ethi.jsonl
- config_name: eng_Latn-apc_Arab
data_files:
- split: train
path: train/eng_Latn-apc_Arab.jsonl
- config_name: eng_Latn-arb_Arab
data_files:
- split: train
path: train/eng_Latn-arb_Arab.jsonl
- config_name: eng_Latn-arb_Latn
data_files:
- split: train
path: train/eng_Latn-arb_Latn.jsonl
- config_name: eng_Latn-ars_Arab
data_files:
- split: train
path: train/eng_Latn-ars_Arab.jsonl
- config_name: eng_Latn-ary_Arab
data_files:
- split: train
path: train/eng_Latn-ary_Arab.jsonl
- config_name: eng_Latn-arz_Arab
data_files:
- split: train
path: train/eng_Latn-arz_Arab.jsonl
- config_name: eng_Latn-asm_Beng
data_files:
- split: train
path: train/eng_Latn-asm_Beng.jsonl
- config_name: eng_Latn-ast_Latn
data_files:
- split: train
path: train/eng_Latn-ast_Latn.jsonl
- config_name: eng_Latn-awa_Deva
data_files:
- split: train
path: train/eng_Latn-awa_Deva.jsonl
- config_name: eng_Latn-ayr_Latn
data_files:
- split: train
path: train/eng_Latn-ayr_Latn.jsonl
- config_name: eng_Latn-azb_Arab
data_files:
- split: train
path: train/eng_Latn-azb_Arab.jsonl
- config_name: eng_Latn-azj_Latn
data_files:
- split: train
path: train/eng_Latn-azj_Latn.jsonl
- config_name: eng_Latn-bak_Cyrl
data_files:
- split: train
path: train/eng_Latn-bak_Cyrl.jsonl
- config_name: eng_Latn-bam_Latn
data_files:
- split: train
path: train/eng_Latn-bam_Latn.jsonl
- config_name: eng_Latn-ban_Latn
data_files:
- split: train
path: train/eng_Latn-ban_Latn.jsonl
- config_name: eng_Latn-bel_Cyrl
data_files:
- split: train
path: train/eng_Latn-bel_Cyrl.jsonl
- config_name: eng_Latn-bem_Latn
data_files:
- split: train
path: train/eng_Latn-bem_Latn.jsonl
- config_name: eng_Latn-ben_Beng
data_files:
- split: train
path: train/eng_Latn-ben_Beng.jsonl
- config_name: eng_Latn-bho_Deva
data_files:
- split: train
path: train/eng_Latn-bho_Deva.jsonl
- config_name: eng_Latn-bjn_Arab
data_files:
- split: train
path: train/eng_Latn-bjn_Arab.jsonl
- config_name: eng_Latn-bjn_Latn
data_files:
- split: train
path: train/eng_Latn-bjn_Latn.jsonl
- config_name: eng_Latn-bod_Tibt
data_files:
- split: train
path: train/eng_Latn-bod_Tibt.jsonl
- config_name: eng_Latn-bos_Latn
data_files:
- split: train
path: train/eng_Latn-bos_Latn.jsonl
- config_name: eng_Latn-bug_Latn
data_files:
- split: train
path: train/eng_Latn-bug_Latn.jsonl
- config_name: eng_Latn-bul_Cyrl
data_files:
- split: train
path: train/eng_Latn-bul_Cyrl.jsonl
- config_name: eng_Latn-cat_Latn
data_files:
- split: train
path: train/eng_Latn-cat_Latn.jsonl
- config_name: eng_Latn-ceb_Latn
data_files:
- split: train
path: train/eng_Latn-ceb_Latn.jsonl
- config_name: eng_Latn-ces_Latn
data_files:
- split: train
path: train/eng_Latn-ces_Latn.jsonl
- config_name: eng_Latn-cjk_Latn
data_files:
- split: train
path: train/eng_Latn-cjk_Latn.jsonl
- config_name: eng_Latn-ckb_Arab
data_files:
- split: train
path: train/eng_Latn-ckb_Arab.jsonl
- config_name: eng_Latn-crh_Latn
data_files:
- split: train
path: train/eng_Latn-crh_Latn.jsonl
- config_name: eng_Latn-cym_Latn
data_files:
- split: train
path: train/eng_Latn-cym_Latn.jsonl
- config_name: eng_Latn-dan_Latn
data_files:
- split: train
path: train/eng_Latn-dan_Latn.jsonl
- config_name: eng_Latn-deu_Latn
data_files:
- split: train
path: train/eng_Latn-deu_Latn.jsonl
- config_name: eng_Latn-dik_Latn
data_files:
- split: train
path: train/eng_Latn-dik_Latn.jsonl
- config_name: eng_Latn-dyu_Latn
data_files:
- split: train
path: train/eng_Latn-dyu_Latn.jsonl
- config_name: eng_Latn-dzo_Tibt
data_files:
- split: train
path: train/eng_Latn-dzo_Tibt.jsonl
- config_name: eng_Latn-ell_Grek
data_files:
- split: train
path: train/eng_Latn-ell_Grek.jsonl
- config_name: eng_Latn-epo_Latn
data_files:
- split: train
path: train/eng_Latn-epo_Latn.jsonl
- config_name: eng_Latn-est_Latn
data_files:
- split: train
path: train/eng_Latn-est_Latn.jsonl
- config_name: eng_Latn-eus_Latn
data_files:
- split: train
path: train/eng_Latn-eus_Latn.jsonl
- config_name: eng_Latn-ewe_Latn
data_files:
- split: train
path: train/eng_Latn-ewe_Latn.jsonl
- config_name: eng_Latn-fao_Latn
data_files:
- split: train
path: train/eng_Latn-fao_Latn.jsonl
- config_name: eng_Latn-fij_Latn
data_files:
- split: train
path: train/eng_Latn-fij_Latn.jsonl
- config_name: eng_Latn-fin_Latn
data_files:
- split: train
path: train/eng_Latn-fin_Latn.jsonl
- config_name: eng_Latn-fon_Latn
data_files:
- split: train
path: train/eng_Latn-fon_Latn.jsonl
- config_name: eng_Latn-fra_Latn
data_files:
- split: train
path: train/eng_Latn-fra_Latn.jsonl
- config_name: eng_Latn-fur_Latn
data_files:
- split: train
path: train/eng_Latn-fur_Latn.jsonl
- config_name: eng_Latn-fuv_Latn
data_files:
- split: train
path: train/eng_Latn-fuv_Latn.jsonl
- config_name: eng_Latn-gaz_Latn
data_files:
- split: train
path: train/eng_Latn-gaz_Latn.jsonl
- config_name: eng_Latn-gla_Latn
data_files:
- split: train
path: train/eng_Latn-gla_Latn.jsonl
- config_name: eng_Latn-gle_Latn
data_files:
- split: train
path: train/eng_Latn-gle_Latn.jsonl
- config_name: eng_Latn-glg_Latn
data_files:
- split: train
path: train/eng_Latn-glg_Latn.jsonl
- config_name: eng_Latn-grn_Latn
data_files:
- split: train
path: train/eng_Latn-grn_Latn.jsonl
- config_name: eng_Latn-guj_Gujr
data_files:
- split: train
path: train/eng_Latn-guj_Gujr.jsonl
- config_name: eng_Latn-hat_Latn
data_files:
- split: train
path: train/eng_Latn-hat_Latn.jsonl
- config_name: eng_Latn-hau_Latn
data_files:
- split: train
path: train/eng_Latn-hau_Latn.jsonl
- config_name: eng_Latn-heb_Hebr
data_files:
- split: train
path: train/eng_Latn-heb_Hebr.jsonl
- config_name: eng_Latn-hin_Deva
data_files:
- split: train
path: train/eng_Latn-hin_Deva.jsonl
- config_name: eng_Latn-hne_Deva
data_files:
- split: train
path: train/eng_Latn-hne_Deva.jsonl
- config_name: eng_Latn-hrv_Latn
data_files:
- split: train
path: train/eng_Latn-hrv_Latn.jsonl
- config_name: eng_Latn-hun_Latn
data_files:
- split: train
path: train/eng_Latn-hun_Latn.jsonl
- config_name: eng_Latn-hye_Armn
data_files:
- split: train
path: train/eng_Latn-hye_Armn.jsonl
- config_name: eng_Latn-ibo_Latn
data_files:
- split: train
path: train/eng_Latn-ibo_Latn.jsonl
- config_name: eng_Latn-ilo_Latn
data_files:
- split: train
path: train/eng_Latn-ilo_Latn.jsonl
- config_name: eng_Latn-ind_Latn
data_files:
- split: train
path: train/eng_Latn-ind_Latn.jsonl
- config_name: eng_Latn-isl_Latn
data_files:
- split: train
path: train/eng_Latn-isl_Latn.jsonl
- config_name: eng_Latn-ita_Latn
data_files:
- split: train
path: train/eng_Latn-ita_Latn.jsonl
- config_name: eng_Latn-jav_Latn
data_files:
- split: train
path: train/eng_Latn-jav_Latn.jsonl
- config_name: eng_Latn-jpn_Jpan
data_files:
- split: train
path: train/eng_Latn-jpn_Jpan.jsonl
- config_name: eng_Latn-kab_Latn
data_files:
- split: train
path: train/eng_Latn-kab_Latn.jsonl
- config_name: eng_Latn-kac_Latn
data_files:
- split: train
path: train/eng_Latn-kac_Latn.jsonl
- config_name: eng_Latn-kam_Latn
data_files:
- split: train
path: train/eng_Latn-kam_Latn.jsonl
- config_name: eng_Latn-kan_Knda
data_files:
- split: train
path: train/eng_Latn-kan_Knda.jsonl
- config_name: eng_Latn-kas_Arab
data_files:
- split: train
path: train/eng_Latn-kas_Arab.jsonl
- config_name: eng_Latn-kas_Deva
data_files:
- split: train
path: train/eng_Latn-kas_Deva.jsonl
- config_name: eng_Latn-kat_Geor
data_files:
- split: train
path: train/eng_Latn-kat_Geor.jsonl
- config_name: eng_Latn-kaz_Cyrl
data_files:
- split: train
path: train/eng_Latn-kaz_Cyrl.jsonl
- config_name: eng_Latn-kbp_Latn
data_files:
- split: train
path: train/eng_Latn-kbp_Latn.jsonl
- config_name: eng_Latn-kea_Latn
data_files:
- split: train
path: train/eng_Latn-kea_Latn.jsonl
- config_name: eng_Latn-khk_Cyrl
data_files:
- split: train
path: train/eng_Latn-khk_Cyrl.jsonl
- config_name: eng_Latn-khm_Khmr
data_files:
- split: train
path: train/eng_Latn-khm_Khmr.jsonl
- config_name: eng_Latn-kik_Latn
data_files:
- split: train
path: train/eng_Latn-kik_Latn.jsonl
- config_name: eng_Latn-kin_Latn
data_files:
- split: train
path: train/eng_Latn-kin_Latn.jsonl
- config_name: eng_Latn-kir_Cyrl
data_files:
- split: train
path: train/eng_Latn-kir_Cyrl.jsonl
- config_name: eng_Latn-kmb_Latn
data_files:
- split: train
path: train/eng_Latn-kmb_Latn.jsonl
- config_name: eng_Latn-kmr_Latn
data_files:
- split: train
path: train/eng_Latn-kmr_Latn.jsonl
- config_name: eng_Latn-knc_Arab
data_files:
- split: train
path: train/eng_Latn-knc_Arab.jsonl
- config_name: eng_Latn-knc_Latn
data_files:
- split: train
path: train/eng_Latn-knc_Latn.jsonl
- config_name: eng_Latn-kon_Latn
data_files:
- split: train
path: train/eng_Latn-kon_Latn.jsonl
- config_name: eng_Latn-kor_Hang
data_files:
- split: train
path: train/eng_Latn-kor_Hang.jsonl
- config_name: eng_Latn-lao_Laoo
data_files:
- split: train
path: train/eng_Latn-lao_Laoo.jsonl
- config_name: eng_Latn-lij_Latn
data_files:
- split: train
path: train/eng_Latn-lij_Latn.jsonl
- config_name: eng_Latn-lim_Latn
data_files:
- split: train
path: train/eng_Latn-lim_Latn.jsonl
- config_name: eng_Latn-lin_Latn
data_files:
- split: train
path: train/eng_Latn-lin_Latn.jsonl
- config_name: eng_Latn-lit_Latn
data_files:
- split: train
path: train/eng_Latn-lit_Latn.jsonl
- config_name: eng_Latn-lmo_Latn
data_files:
- split: train
path: train/eng_Latn-lmo_Latn.jsonl
- config_name: eng_Latn-ltg_Latn
data_files:
- split: train
path: train/eng_Latn-ltg_Latn.jsonl
- config_name: eng_Latn-ltz_Latn
data_files:
- split: train
path: train/eng_Latn-ltz_Latn.jsonl
- config_name: eng_Latn-lua_Latn
data_files:
- split: train
path: train/eng_Latn-lua_Latn.jsonl
- config_name: eng_Latn-lug_Latn
data_files:
- split: train
path: train/eng_Latn-lug_Latn.jsonl
- config_name: eng_Latn-luo_Latn
data_files:
- split: train
path: train/eng_Latn-luo_Latn.jsonl
- config_name: eng_Latn-lus_Latn
data_files:
- split: train
path: train/eng_Latn-lus_Latn.jsonl
- config_name: eng_Latn-lvs_Latn
data_files:
- split: train
path: train/eng_Latn-lvs_Latn.jsonl
- config_name: eng_Latn-mag_Deva
data_files:
- split: train
path: train/eng_Latn-mag_Deva.jsonl
- config_name: eng_Latn-mai_Deva
data_files:
- split: train
path: train/eng_Latn-mai_Deva.jsonl
- config_name: eng_Latn-mal_Mlym
data_files:
- split: train
path: train/eng_Latn-mal_Mlym.jsonl
- config_name: eng_Latn-mar_Deva
data_files:
- split: train
path: train/eng_Latn-mar_Deva.jsonl
- config_name: eng_Latn-min_Arab
data_files:
- split: train
path: train/eng_Latn-min_Arab.jsonl
- config_name: eng_Latn-min_Latn
data_files:
- split: train
path: train/eng_Latn-min_Latn.jsonl
- config_name: eng_Latn-mkd_Cyrl
data_files:
- split: train
path: train/eng_Latn-mkd_Cyrl.jsonl
- config_name: eng_Latn-mlt_Latn
data_files:
- split: train
path: train/eng_Latn-mlt_Latn.jsonl
- config_name: eng_Latn-mni_Beng
data_files:
- split: train
path: train/eng_Latn-mni_Beng.jsonl
- config_name: eng_Latn-mos_Latn
data_files:
- split: train
path: train/eng_Latn-mos_Latn.jsonl
- config_name: eng_Latn-mri_Latn
data_files:
- split: train
path: train/eng_Latn-mri_Latn.jsonl
- config_name: eng_Latn-mya_Mymr
data_files:
- split: train
path: train/eng_Latn-mya_Mymr.jsonl
- config_name: eng_Latn-nld_Latn
data_files:
- split: train
path: train/eng_Latn-nld_Latn.jsonl
- config_name: eng_Latn-nno_Latn
data_files:
- split: train
path: train/eng_Latn-nno_Latn.jsonl
- config_name: eng_Latn-nob_Latn
data_files:
- split: train
path: train/eng_Latn-nob_Latn.jsonl
- config_name: eng_Latn-npi_Deva
data_files:
- split: train
path: train/eng_Latn-npi_Deva.jsonl
- config_name: eng_Latn-nqo_Nkoo
data_files:
- split: train
path: train/eng_Latn-nqo_Nkoo.jsonl
- config_name: eng_Latn-nso_Latn
data_files:
- split: train
path: train/eng_Latn-nso_Latn.jsonl
- config_name: eng_Latn-nus_Latn
data_files:
- split: train
path: train/eng_Latn-nus_Latn.jsonl
- config_name: eng_Latn-nya_Latn
data_files:
- split: train
path: train/eng_Latn-nya_Latn.jsonl
- config_name: eng_Latn-oci_Latn
data_files:
- split: train
path: train/eng_Latn-oci_Latn.jsonl
- config_name: eng_Latn-ory_Orya
data_files:
- split: train
path: train/eng_Latn-ory_Orya.jsonl
- config_name: eng_Latn-pag_Latn
data_files:
- split: train
path: train/eng_Latn-pag_Latn.jsonl
- config_name: eng_Latn-pan_Guru
data_files:
- split: train
path: train/eng_Latn-pan_Guru.jsonl
- config_name: eng_Latn-pap_Latn
data_files:
- split: train
path: train/eng_Latn-pap_Latn.jsonl
- config_name: eng_Latn-pbt_Arab
data_files:
- split: train
path: train/eng_Latn-pbt_Arab.jsonl
- config_name: eng_Latn-pes_Arab
data_files:
- split: train
path: train/eng_Latn-pes_Arab.jsonl
- config_name: eng_Latn-plt_Latn
data_files:
- split: train
path: train/eng_Latn-plt_Latn.jsonl
- config_name: eng_Latn-pol_Latn
data_files:
- split: train
path: train/eng_Latn-pol_Latn.jsonl
- config_name: eng_Latn-por_Latn
data_files:
- split: train
path: train/eng_Latn-por_Latn.jsonl
- config_name: eng_Latn-prs_Arab
data_files:
- split: train
path: train/eng_Latn-prs_Arab.jsonl
- config_name: eng_Latn-quy_Latn
data_files:
- split: train
path: train/eng_Latn-quy_Latn.jsonl
- config_name: eng_Latn-ron_Latn
data_files:
- split: train
path: train/eng_Latn-ron_Latn.jsonl
- config_name: eng_Latn-run_Latn
data_files:
- split: train
path: train/eng_Latn-run_Latn.jsonl
- config_name: eng_Latn-rus_Cyrl
data_files:
- split: train
path: train/eng_Latn-rus_Cyrl.jsonl
- config_name: eng_Latn-sag_Latn
data_files:
- split: train
path: train/eng_Latn-sag_Latn.jsonl
- config_name: eng_Latn-san_Deva
data_files:
- split: train
path: train/eng_Latn-san_Deva.jsonl
- config_name: eng_Latn-sat_Olck
data_files:
- split: train
path: train/eng_Latn-sat_Olck.jsonl
- config_name: eng_Latn-scn_Latn
data_files:
- split: train
path: train/eng_Latn-scn_Latn.jsonl
- config_name: eng_Latn-shn_Mymr
data_files:
- split: train
path: train/eng_Latn-shn_Mymr.jsonl
- config_name: eng_Latn-sin_Sinh
data_files:
- split: train
path: train/eng_Latn-sin_Sinh.jsonl
- config_name: eng_Latn-slk_Latn
data_files:
- split: train
path: train/eng_Latn-slk_Latn.jsonl
- config_name: eng_Latn-slv_Latn
data_files:
- split: train
path: train/eng_Latn-slv_Latn.jsonl
- config_name: eng_Latn-smo_Latn
data_files:
- split: train
path: train/eng_Latn-smo_Latn.jsonl
- config_name: eng_Latn-sna_Latn
data_files:
- split: train
path: train/eng_Latn-sna_Latn.jsonl
- config_name: eng_Latn-snd_Arab
data_files:
- split: train
path: train/eng_Latn-snd_Arab.jsonl
- config_name: eng_Latn-som_Latn
data_files:
- split: train
path: train/eng_Latn-som_Latn.jsonl
- config_name: eng_Latn-sot_Latn
data_files:
- split: train
path: train/eng_Latn-sot_Latn.jsonl
- config_name: eng_Latn-spa_Latn
data_files:
- split: train
path: train/eng_Latn-spa_Latn.jsonl
- config_name: eng_Latn-srd_Latn
data_files:
- split: train
path: train/eng_Latn-srd_Latn.jsonl
- config_name: eng_Latn-srp_Cyrl
data_files:
- split: train
path: train/eng_Latn-srp_Cyrl.jsonl
- config_name: eng_Latn-ssw_Latn
data_files:
- split: train
path: train/eng_Latn-ssw_Latn.jsonl
- config_name: eng_Latn-sun_Latn
data_files:
- split: train
path: train/eng_Latn-sun_Latn.jsonl
- config_name: eng_Latn-swe_Latn
data_files:
- split: train
path: train/eng_Latn-swe_Latn.jsonl
- config_name: eng_Latn-swh_Latn
data_files:
- split: train
path: train/eng_Latn-swh_Latn.jsonl
- config_name: eng_Latn-szl_Latn
data_files:
- split: train
path: train/eng_Latn-szl_Latn.jsonl
- config_name: eng_Latn-tam_Taml
data_files:
- split: train
path: train/eng_Latn-tam_Taml.jsonl
- config_name: eng_Latn-taq_Latn
data_files:
- split: train
path: train/eng_Latn-taq_Latn.jsonl
- config_name: eng_Latn-taq_Tfng
data_files:
- split: train
path: train/eng_Latn-taq_Tfng.jsonl
- config_name: eng_Latn-tat_Cyrl
data_files:
- split: train
path: train/eng_Latn-tat_Cyrl.jsonl
- config_name: eng_Latn-tel_Telu
data_files:
- split: train
path: train/eng_Latn-tel_Telu.jsonl
- config_name: eng_Latn-tgk_Cyrl
data_files:
- split: train
path: train/eng_Latn-tgk_Cyrl.jsonl
- config_name: eng_Latn-tgl_Latn
data_files:
- split: train
path: train/eng_Latn-tgl_Latn.jsonl
- config_name: eng_Latn-tha_Thai
data_files:
- split: train
path: train/eng_Latn-tha_Thai.jsonl
- config_name: eng_Latn-tir_Ethi
data_files:
- split: train
path: train/eng_Latn-tir_Ethi.jsonl
- config_name: eng_Latn-tpi_Latn
data_files:
- split: train
path: train/eng_Latn-tpi_Latn.jsonl
- config_name: eng_Latn-tsn_Latn
data_files:
- split: train
path: train/eng_Latn-tsn_Latn.jsonl
- config_name: eng_Latn-tso_Latn
data_files:
- split: train
path: train/eng_Latn-tso_Latn.jsonl
- config_name: eng_Latn-tuk_Latn
data_files:
- split: train
path: train/eng_Latn-tuk_Latn.jsonl
- config_name: eng_Latn-tum_Latn
data_files:
- split: train
path: train/eng_Latn-tum_Latn.jsonl
- config_name: eng_Latn-tur_Latn
data_files:
- split: train
path: train/eng_Latn-tur_Latn.jsonl
- config_name: eng_Latn-twi_Latn
data_files:
- split: train
path: train/eng_Latn-twi_Latn.jsonl
- config_name: eng_Latn-tzm_Tfng
data_files:
- split: train
path: train/eng_Latn-tzm_Tfng.jsonl
- config_name: eng_Latn-uig_Arab
data_files:
- split: train
path: train/eng_Latn-uig_Arab.jsonl
- config_name: eng_Latn-ukr_Cyrl
data_files:
- split: train
path: train/eng_Latn-ukr_Cyrl.jsonl
- config_name: eng_Latn-umb_Latn
data_files:
- split: train
path: train/eng_Latn-umb_Latn.jsonl
- config_name: eng_Latn-urd_Arab
data_files:
- split: train
path: train/eng_Latn-urd_Arab.jsonl
- config_name: eng_Latn-uzn_Latn
data_files:
- split: train
path: train/eng_Latn-uzn_Latn.jsonl
- config_name: eng_Latn-vec_Latn
data_files:
- split: train
path: train/eng_Latn-vec_Latn.jsonl
- config_name: eng_Latn-vie_Latn
data_files:
- split: train
path: train/eng_Latn-vie_Latn.jsonl
- config_name: eng_Latn-war_Latn
data_files:
- split: train
path: train/eng_Latn-war_Latn.jsonl
- config_name: eng_Latn-wol_Latn
data_files:
- split: train
path: train/eng_Latn-wol_Latn.jsonl
- config_name: eng_Latn-xho_Latn
data_files:
- split: train
path: train/eng_Latn-xho_Latn.jsonl
- config_name: eng_Latn-ydd_Hebr
data_files:
- split: train
path: train/eng_Latn-ydd_Hebr.jsonl
- config_name: eng_Latn-yor_Latn
data_files:
- split: train
path: train/eng_Latn-yor_Latn.jsonl
- config_name: eng_Latn-yue_Hant
data_files:
- split: train
path: train/eng_Latn-yue_Hant.jsonl
- config_name: eng_Latn-zho_Hans
data_files:
- split: train
path: train/eng_Latn-zho_Hans.jsonl
- config_name: eng_Latn-zho_Hant
data_files:
- split: train
path: train/eng_Latn-zho_Hant.jsonl
- config_name: eng_Latn-zsm_Latn
data_files:
- split: train
path: train/eng_Latn-zsm_Latn.jsonl
- config_name: eng_Latn-zul_Latn
data_files:
- split: train
path: train/eng_Latn-zul_Latn.jsonl
---"
Smoked-Salmon-s/empathetic_dialogues_ko,"{""license"": ""apache-2.0"", ""task_categories"": [""text-generation"", ""conversational""], ""language"": [""ko""], ""size_categories"": [""10K `/(/` and similar errors, etc...)
## Citation
```
@misc{mitra2024orcamath,
title={Orca-Math: Unlocking the potential of SLMs in Grade School Math},
author={Arindam Mitra and Hamed Khanpour and Corby Rosset and Ahmed Awadallah},
year={2024},
eprint={2402.14830},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```"
nlpai-lab/openassistant-guanaco-ko,"{""license"": ""apache-2.0"", ""task_categories"": [""text-generation"", ""question-answering"", ""summarization""], ""language"": [""ko""], ""size_categories"": [""1K
📃 Paper • 🌐 Demo • 🤗 ApolloMoEDataset • 🤗 ApolloMoEBench • 🤗 Models •🌐 Apollo • 🌐 ApolloMoE

## 🌈 Update
* **[2024.10.15]** ApolloMoE repo is published!🎉
## Languages Coverage
12 Major Languages and 38 Minor Languages
Click to view the Languages Coverage

## Architecture
Click to view the MoE routing image

## Results
#### Dense
🤗 Apollo2-0.5B • 🤗 Apollo2-1.5B • 🤗 Apollo2-2B
🤗 Apollo2-3.8B • 🤗 Apollo2-7B • 🤗 Apollo2-9B
Click to view the Dense Models Results

#### Post-MoE
🤗 Apollo-MoE-0.5B • 🤗 Apollo-MoE-1.5B • 🤗 Apollo-MoE-7B
Click to view the Post-MoE Models Results

## Usage Format
##### Apollo2
- 0.5B, 1.5B, 7B: User:{query}\nAssistant:{response}<|endoftext|>
- 2B, 9B: User:{query}\nAssistant:{response}\
- 3.8B: <|user|>\n{query}<|end|><|assistant|>\n{response}<|end|>
##### Apollo-MoE
- 0.5B, 1.5B, 7B: User:{query}\nAssistant:{response}<|endoftext|>
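The `User:{query}\nAssistant:{response}` templates above can be applied with a small helper; this is a minimal sketch (the `build_prompt` name is ours), shown for the 0.5B/1.5B/7B variants whose stop token is `<|endoftext|>`:

```python
def build_prompt(query, response=None, eos="<|endoftext|>"):
    """Format a query (and optionally a gold response) in the
    Apollo2 / Apollo-MoE 0.5B/1.5B/7B template:
    User:{query}\nAssistant:{response}<|endoftext|>"""
    prompt = f"User:{query}\nAssistant:"
    if response is not None:
        prompt += f"{response}{eos}"
    return prompt

# Inference-time prompt ends right after "Assistant:" so the model completes it
print(build_prompt("What is hypertension?"))
```

For the other model sizes, swap in the corresponding template and stop token from the list above.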
## Dataset & Evaluation
- Dataset
🤗 ApolloMoEDataset
Click to expand

The complete data is stored in `ApolloMoEDataset.json`, while a sample is shown in `ApolloMoEDataset_sample.json`.
- Evaluation
🤗 ApolloMoEBench
Click to expand
- EN:
- [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
- [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test)
- [PubMedQA](https://huggingface.co/datasets/pubmed_qa): Because the results fluctuated too much, they were not used in the paper.
- [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- ZH:
- [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test)
- [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): Not used in the paper
- Randomly sampled 2,000 multiple-choice questions with a single answer.
- [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu)
- Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
- [CExam](https://github.com/williamliujl/CMExam): Not used in the paper
- Randomly sampled 2,000 multiple-choice questions.
- ES: [Head_qa](https://huggingface.co/datasets/head_qa)
- FR:
- [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA)
- [MMLU_FR]
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- AR: [MMLU_AR](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- JA: [IgakuQA](https://github.com/jungokasai/IgakuQA)
- KO: [KorMedMCQA](https://huggingface.co/datasets/sean0042/KorMedMCQA)
- IT:
- [MedExpQA](https://huggingface.co/datasets/HiTZ/MedExpQA)
- [MMLU_IT]
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- DE: [BioInstructQA](https://huggingface.co/datasets/BioMistral/BioInstructQA): German part
- PT: [BioInstructQA](https://huggingface.co/datasets/BioMistral/BioInstructQA): Portuguese part
- RU: [RuMedBench](https://github.com/sb-ai-lab/MedBench)
- Minor Langs: MMLU Translated Medical Part
## Results reproduction
Click to expand
We take Apollo2-7B or Apollo-MoE-0.5B as an example.
1. Download Dataset for project:
```
bash 0.download_data.sh
```
2. Prepare test and dev data for a specific model:
- Create test data with the model's special tokens
```
bash '1.data_process_test&dev.sh'
```
3. Prepare train data for a specific model (create tokenized data in advance):
- You can adjust the training data order and the number of training epochs in this step
```
bash 2.data_process_train.sh
```
4. Train the model
- To train on multiple nodes, refer to ./src/sft/training_config/zero_multi.yaml
```
bash 3.single_node_train.sh
```
5. Evaluate your model: generate scores for the benchmark
```
bash 4.eval.sh
```
## Citation
Please use the following citation if you intend to use our dataset for training or evaluation:
```
@misc{zheng2024efficientlydemocratizingmedicalllms,
title={Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts},
author={Guorui Zheng and Xidong Wang and Juhao Liang and Nuo Chen and Yuping Zheng and Benyou Wang},
year={2024},
eprint={2410.10626},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.10626},
}
```"
BramVanroy/xlwic_wn,"{""license"": ""cc-by-nc-4.0"", ""language"": [""bg"", ""zh"", ""hr"", ""da"", ""nl"", ""et"", ""fa"", ""ja"", ""ko""], ""task_categories"": [""text-classification""], ""pretty_name"": ""Multilingual Word-in-Context (WordNet)"", ""configs"": [{""config_name"": ""default"", ""sep"": ""\t"", ""data_files"": [{""split"": ""valid"", ""path"": ""**/*_valid.csv""}, {""split"": ""test"", ""path"": ""**/*_test.csv""}]}, {""config_name"": ""bg"", ""sep"": ""\t"", ""data_files"": [{""split"": ""valid"", ""path"": ""bulgarian_bg/bg_valid.csv""}, {""split"": ""test"", ""path"": ""bulgarian_bg/bg_test.csv""}]}, {""config_name"": ""zh"", ""sep"": ""\t"", ""data_files"": [{""split"": ""valid"", ""path"": ""chinese_zh/zh_valid.csv""}, {""split"": ""test"", ""path"": ""chinese_zh/zh_test.csv""}]}, {""config_name"": ""hr"", ""sep"": ""\t"", ""data_files"": [{""split"": ""valid"", ""path"": ""croatian_hr/hr_valid.csv""}, {""split"": ""test"", ""path"": ""croatian_hr/hr_test.csv""}]}, {""config_name"": ""da"", ""sep"": ""\t"", ""data_files"": [{""split"": ""valid"", ""path"": ""danish_da/da_valid.csv""}, {""split"": ""test"", ""path"": ""danish_da/da_test.csv""}]}, {""config_name"": ""nl"", ""sep"": ""\t"", ""data_files"": [{""split"": ""valid"", ""path"": ""dutch_nl/nl_valid.csv""}, {""split"": ""test"", ""path"": ""dutch_nl/nl_test.csv""}]}, {""config_name"": ""et"", ""sep"": ""\t"", ""data_files"": [{""split"": ""valid"", ""path"": ""estonian_et/et_valid.csv""}, {""split"": ""test"", ""path"": ""estonian_et/et_test.csv""}]}, {""config_name"": ""fa"", ""sep"": ""\t"", ""data_files"": [{""split"": ""valid"", ""path"": ""farsi_fa/fa_valid.csv""}, {""split"": ""test"", ""path"": ""farsi_fa/fa_test.csv""}]}, {""config_name"": ""ja"", ""sep"": ""\t"", ""data_files"": [{""split"": ""valid"", ""path"": ""japanese_ja/ja_valid.csv""}, {""split"": ""test"", ""path"": ""japanese_ja/ja_test.csv""}]}, {""config_name"": ""ko"", ""sep"": ""\t"", ""data_files"": [{""split"": 
""valid"", ""path"": ""korean_ko/ko_valid.csv""}, {""split"": ""test"", ""path"": ""korean_ko/ko_test.csv""}]}]}","# Multilingual Word-in-Context (WordNet)
Refer to the [documentation](https://pilehvar.github.io/xlwic/) and [paper](https://aclanthology.org/2020.emnlp-main.584/) for more information."
squarelike/ko_medical_chat,"{""language"": [""ko""], ""tags"": [""medical""]}","[https://github.com/jwj7140/ko-medical-chat](https://github.com/jwj7140/ko-medical-chat)
Korean medical conversation dataset from converting [MedText](https://huggingface.co/datasets/BI55/MedText) and [ChatDoctor](https://github.com/Kent0n-Li/ChatDoctor)"
nlp-with-deeplearning/Ko.SlimOrca,"{""license"": ""cc-by-nc-sa-4.0"", ""task_categories"": [""conversational"", ""text-classification"", ""token-classification"", ""table-question-answering"", ""question-answering"", ""zero-shot-classification"", ""summarization"", ""feature-extraction"", ""text-generation"", ""text2text-generation""], ""language"": [""en"", ""ko""], ""size_categories"": [""100K
- **Paper**: http://arxiv.org/abs/2411.19799
### Dataset Summary
INCLUDE is a comprehensive knowledge- and reasoning-centric benchmark across **44 languages** that evaluates multilingual LLMs for performance in the actual language environments where they would be deployed.
It contains 11,095 four-option multiple-choice questions (MCQs) extracted from academic and professional exams, covering 57 topics, including regional knowledge.
For evaluation in a larger set, you can use [include-base-44](https://huggingface.co/datasets/CohereForAI/include-base-44), which is a superset of `include-lite-44`, covering the same 44 languages.
### Languages
Albanian, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Chinese, Croatian, Dutch, Estonian, Finnish, French, Georgian, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Kazakh, Korean, Lithuanian, Malay, Malayalam, Nepali, North Macedonian, Persian, Polish, Portuguese, Russian, Serbian, Spanish, Tagalog, Tamil, Telugu, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese
### Topics
- **Academic**:
Accounting, Agriculture, Anthropology, Architecture and Design, Arts & Humanities, Biology, Business administration, Business ethics, Business, Chemistry, Computer Science, Culturology, Earth science, Economics, Education, Engineering, Environmental studies and forestry, Family and consumer science, Finance, Geography, Health, History, Human physical performance and recreation, Industrial and labor relations, International trade, Journalism, media studies, and communication, Language, Law, Library and museum studies, Literature, Logic, Management, Marketing, Math, Medicine, Military Sciences, Multiple exams, Performing arts, Philosophy, Physics, Political sciences, Psychology, Public Administration, Public Policy, Qualimetry, Religious studies, Risk management and insurance, Social Work, Sociology, STEM, Transportation, Visual Arts
- **Licenses**:
Driving License, Marine License, Medical License, Professional Certifications
### Data schema
An example from a French Law question looks as follows:
```
{
  ""language"": ""French"",
  ""country"": ""France"",
  ""level"": ""Academic"",
  ""domain"": ""Arts & Humanities"",
  ""subject"": ""Law"",
  ""regional_feature"": ""region explicit"",
  ""question"": ""Que permet l'article 49-3 de la Constitution ?"",
  ""choices"": [""de recourir au référendum"", ""au Parlement de contrôler l'action du Gouvernement"", ""l'adoption sans vote d'une loi"", ""de prononcer la dissolution de l'Assemblée nationale""],
  ""answer"": 2
}
```
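Reading the schema above, `answer` appears to be a 0-based index into `choices`. A minimal sketch of mapping the label back to the answer text (illustrative only, not taken from the card's tooling):

```python
# Hedged sketch: 'answer' is assumed to be a 0-based index into 'choices',
# as the example record above suggests.
example = {
    "question": "Que permet l'article 49-3 de la Constitution ?",
    "choices": [
        "de recourir au référendum",
        "au Parlement de contrôler l'action du Gouvernement",
        "l'adoption sans vote d'une loi",
        "de prononcer la dissolution de l'Assemblée nationale",
    ],
    "answer": 2,
}

# Map the integer label to the answer text.
correct = example["choices"][example["answer"]]
print(correct)  # l'adoption sans vote d'une loi
```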
### Model Performance
Model performance on **INCLUDE** using the Harness-eval framework.
| **Model** | **Original Lang instructions** | **English instructions** |
|------------------------------------|:------------------------------:|:------------------------:|
| Llama3.1-70B-Instruct | 70.3 | 70.6 |
| Qwen2.5-14B | 61.8 | 61.9 |
| Aya-expanse-32b | 58.9 | 59.5 |
| Qwen2.5-7B | 54.4 | 54.9 |
| Qwen2.5-7B-Instruct | 54.5 | 54.6 |
| Llama-3.1-8B-Instruct | 53.5 | 54.4 |
| Gemma-7B | 53.6 | 53.1 |
| Llama-3.1-8B | 51.2 | 52.1 |
| Aya-expanse-8b | 47.3 | 48.0 |
| Mistral-7B | 44.5 | 44.7 |
| Mistral-7B-Instruct | 43.8 | 43.9 |
| Gemma-7B-Instruct | 39.1 | 39.7 |
## Citation
```
@article{romanou2024include,
title={INCLUDE: Evaluating Multilingual Language Understanding with Regional Knowledge},
author={Romanou, Angelika and Foroutan, Negar and Sotnikova, Anna and Chen, Zeming and Nelaturu, Sree Harsha and Singh, Shivalika and Maheshwary, Rishabh and Altomare, Micol and Haggag, Mohamed A and Amayuelas, Alfonso and others},
journal={arXiv preprint arXiv:2411.19799},
year={2024}
}
```"
chengshidehaimianti/CC-Cat,"{""license"": ""odc-by"", ""task_categories"": [""text-generation""], ""language"": [""zh"", ""en"", ""de"", ""ru"", ""es"", ""ja"", ""af"", ""am"", ""an"", ""ar"", ""as"", ""av"", ""az"", ""ba"", ""be"", ""bg"", ""bo"", ""br"", ""bs"", ""ca"", ""cv"", ""cy"", ""da"", ""el"", ""eo"", ""et"", ""eu"", ""fa"", ""fi"", ""fr"", ""fy"", ""ga"", ""gd"", ""gl"", ""gn"", ""gv"", ""he"", ""hi"", ""hr"", ""ht"", ""hu"", ""hy"", ""ia"", ""id"", ""ie"", ""io"", ""is"", ""it"", ""jv"", ""ka"", ""kk"", ""km"", ""kn"", ""ko"", ""kv"", ""kw"", ""ky"", ""la"", ""lb"", ""li"", ""lo"", ""lt"", ""lv"", ""mg"", ""mk"", ""ml"", ""mn"", ""ms"", ""mt"", ""my"", ""ne"", ""nl"", ""nn"", false, ""oc"", ""os"", ""pa"", ""pl"", ""ps"", ""pt"", ""qu"", ""rm"", ""ro"", ""sa"", ""sc"", ""sd"", ""si"", ""sk"", ""sl"", ""so"", ""sq"", ""sr"", ""su"", ""sv"", ""sw"", ""ta"", ""te"", ""tg"", ""tk"", ""tl"", ""tr"", ""tt"", ""ug"", ""uk"", ""ur"", ""uz"", ""vi"", ""wa"", ""yi"", ""yo""], ""tags"": [""croissant""], ""size_categories"": [""n>1T""], ""pretty_name"": ""CCCAT""}","# CC_Cat
- **Extracted from *CC-WARC* snapshots.**
- **Mainly includes texts in *149* languages.**
- **Raw download links for *PDF/IMAGE/AUDIO/VIDEO* content.**
# Notice
- Since my computing resources are limited, this dataset is updated one CC snapshot day (by timestamp) at a time.
- After a snapshot is updated, the deduplicated version will be uploaded.
- If you are interested in providing computing resources or would like to collaborate, please contact me:
carreyallthetime@gmail.com
"
eaglewatch/Korean_Wikipedia_Dataset_for_GPT2_August_2022,"{""annotations_creators"": [""other""], ""language"": [""ko""], ""language_creators"": [""other""], ""license"": [""apache-2.0""], ""multilinguality"": [""multilingual""], ""pretty_name"": ""Korean wikipedia dataset for GPT-2 training"", ""size_categories"": [""100M
## Main Results
The multilingual capabilities of all models except for the LLaMA3.2 series improve with increasing model sizes, as LLaMA3.2-1B and LLaMA3.2-3B exhibit poor instruction-following capabilities, leading to a higher failure rate in answer extraction. In addition, Qwen2.5 demonstrates a strong multilingual performance on understanding and capability-specialized tasks, while Gemma2 excels in generation tasks. Closed-source models generally outperform open-source models.
## Citation
We've published our paper at [this link](https://arxiv.org/pdf/2411.09116). If you find this dataset helpful, please cite our paper as follows:
```
@misc{zhang2024pmmevalparallelmultilingualmultitask,
title={P-MMEval: A Parallel Multilingual Multitask Benchmark for Consistent Evaluation of LLMs},
author={Yidan Zhang and Yu Wan and Boyi Deng and Baosong Yang and Haoran Wei and Fei Huang and Bowen Yu and Junyang Lin and Fei Huang and Jingren Zhou},
year={2024},
eprint={2411.09116},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.09116},
}
```
# Usage
You can use OpenCompass to evaluate your LLMs on P-MMEval. We advise using vLLM to accelerate the evaluation (vLLM must be installed):
```
# CLI
opencompass --models hf_internlm2_5_1_8b_chat --datasets pmmeval_gen -a vllm
# Python scripts
opencompass ./configs/eval_PMMEval.py
```"
izhx/xtreme-r-udpos,"{""license"": ""other"", ""license_name"": ""ud-2.7"", ""license_link"": ""https://lindat.mff.cuni.cz/repository/xmlui/page/license-ud-2.7"", ""annotations_creators"": [""found""], ""language_creators"": [""found""], ""language"": [""af"", ""ar"", ""bg"", ""bn"", ""de"", ""el"", ""en"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fr"", ""he"", ""hi"", ""hu"", ""id"", ""it"", ""ja"", ""jv"", ""ka"", ""kk"", ""ko"", ""ml"", ""mr"", ""ms"", ""my"", ""nl"", ""pt"", ""ru"", ""sw"", ""ta"", ""te"", ""th"", ""tl"", ""tr"", ""ur"", ""vi"", ""yo"", ""zh""], ""multilinguality"": [""multilingual"", ""translation""], ""size_categories"": [""n<1K"", ""1K>> from datasets import load_dataset
>>> dataset = load_dataset(""Bingsu/KcBERT_Pre-Training_Corpus"")
>>> dataset
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 86246285
})
})
```
### Data Size
download: 7.90 GiB
generated: 11.86 GiB
total: 19.76 GiB
※ You can download this dataset from [kaggle](https://www.kaggle.com/datasets/junbumlee/kcbert-pretraining-corpus-korean-news-comments), and it's 5 GiB. (12.48 GiB when uncompressed)
### Data Fields
- text: `string`
### Data Splits
| | train |
| ---------- | -------- |
| # of texts | 86246285 |"
nazimali/quran,"{""dataset_info"": {""features"": [{""name"": ""surah"", ""dtype"": ""int64""}, {""name"": ""ayah"", ""dtype"": ""int64""}, {""name"": ""surah-name"", ""dtype"": ""string""}, {""name"": ""surah-total-ayas"", ""dtype"": ""int64""}, {""name"": ""surah-name-transliteration"", ""dtype"": ""string""}, {""name"": ""surah-name-en"", ""dtype"": ""string""}, {""name"": ""surah-type"", ""dtype"": ""string""}, {""name"": ""surah-order-revealed"", ""dtype"": ""int64""}, {""name"": ""surah-rukus"", ""dtype"": ""int64""}, {""name"": ""arabic-text-simple"", ""dtype"": ""string""}, {""name"": ""arabic-text-simple-min"", ""dtype"": ""string""}, {""name"": ""arabic-text-simple-plain"", ""dtype"": ""string""}, {""name"": ""arabic-text-simple-clean"", ""dtype"": ""string""}, {""name"": ""arabic-text-uthmani"", ""dtype"": ""string""}, {""name"": ""translation-am-sadiq"", ""dtype"": ""string""}, {""name"": ""translation-ar-jalalayn"", ""dtype"": ""string""}, {""name"": ""translation-ar-muyassar"", ""dtype"": ""string""}, {""name"": ""translation-az-mammadaliyev"", ""dtype"": ""string""}, {""name"": ""translation-az-musayev"", ""dtype"": ""string""}, {""name"": ""translation-ber-mensur"", ""dtype"": ""string""}, {""name"": ""translation-bg-theophanov"", ""dtype"": ""string""}, {""name"": ""translation-bn-bengali"", ""dtype"": ""string""}, {""name"": ""translation-bn-hoque"", ""dtype"": ""string""}, {""name"": ""translation-bs-korkut"", ""dtype"": ""string""}, {""name"": ""translation-bs-mlivo"", ""dtype"": ""string""}, {""name"": ""translation-cs-hrbek"", ""dtype"": ""string""}, {""name"": ""translation-cs-nykl"", ""dtype"": ""string""}, {""name"": ""translation-de-aburida"", ""dtype"": ""string""}, {""name"": ""translation-de-bubenheim"", ""dtype"": ""string""}, {""name"": ""translation-de-khoury"", ""dtype"": ""string""}, {""name"": ""translation-de-zaidan"", ""dtype"": ""string""}, {""name"": ""translation-dv-divehi"", ""dtype"": ""string""}, {""name"": 
""translation-en-ahmedali"", ""dtype"": ""string""}, {""name"": ""translation-en-ahmedraza"", ""dtype"": ""string""}, {""name"": ""translation-en-arberry"", ""dtype"": ""string""}, {""name"": ""translation-en-hilali"", ""dtype"": ""string""}, {""name"": ""translation-en-itani"", ""dtype"": ""string""}, {""name"": ""translation-en-maududi"", ""dtype"": ""string""}, {""name"": ""translation-en-mubarakpuri"", ""dtype"": ""string""}, {""name"": ""translation-en-pickthall"", ""dtype"": ""string""}, {""name"": ""translation-en-qarai"", ""dtype"": ""string""}, {""name"": ""translation-en-qaribullah"", ""dtype"": ""string""}, {""name"": ""translation-en-sahih"", ""dtype"": ""string""}, {""name"": ""translation-en-sarwar"", ""dtype"": ""string""}, {""name"": ""translation-en-shakir"", ""dtype"": ""string""}, {""name"": ""translation-en-transliteration"", ""dtype"": ""string""}, {""name"": ""translation-en-wahiduddin"", ""dtype"": ""string""}, {""name"": ""translation-en-yusufali"", ""dtype"": ""string""}, {""name"": ""translation-es-bornez"", ""dtype"": ""string""}, {""name"": ""translation-es-cortes"", ""dtype"": ""string""}, {""name"": ""translation-es-garcia"", ""dtype"": ""string""}, {""name"": ""translation-fa-ansarian"", ""dtype"": ""string""}, {""name"": ""translation-fa-ayati"", ""dtype"": ""string""}, {""name"": ""translation-fa-bahrampour"", ""dtype"": ""string""}, {""name"": ""translation-fa-fooladvand"", ""dtype"": ""string""}, {""name"": ""translation-fa-gharaati"", ""dtype"": ""string""}, {""name"": ""translation-fa-ghomshei"", ""dtype"": ""string""}, {""name"": ""translation-fa-khorramdel"", ""dtype"": ""string""}, {""name"": ""translation-fa-khorramshahi"", ""dtype"": ""string""}, {""name"": ""translation-fa-makarem"", ""dtype"": ""string""}, {""name"": ""translation-fa-moezzi"", ""dtype"": ""string""}, {""name"": ""translation-fa-mojtabavi"", ""dtype"": ""string""}, {""name"": ""translation-fa-sadeqi"", ""dtype"": ""string""}, {""name"": 
""translation-fa-safavi"", ""dtype"": ""string""}, {""name"": ""translation-fr-hamidullah"", ""dtype"": ""string""}, {""name"": ""translation-ha-gumi"", ""dtype"": ""string""}, {""name"": ""translation-hi-farooq"", ""dtype"": ""string""}, {""name"": ""translation-hi-hindi"", ""dtype"": ""string""}, {""name"": ""translation-id-indonesian"", ""dtype"": ""string""}, {""name"": ""translation-id-jalalayn"", ""dtype"": ""string""}, {""name"": ""translation-id-muntakhab"", ""dtype"": ""string""}, {""name"": ""translation-it-piccardo"", ""dtype"": ""string""}, {""name"": ""translation-ja-japanese"", ""dtype"": ""string""}, {""name"": ""translation-ko-korean"", ""dtype"": ""string""}, {""name"": ""translation-ku-asan"", ""dtype"": ""string""}, {""name"": ""translation-ml-abdulhameed"", ""dtype"": ""string""}, {""name"": ""translation-ml-karakunnu"", ""dtype"": ""string""}, {""name"": ""translation-ms-basmeih"", ""dtype"": ""string""}, {""name"": ""translation-nl-keyzer"", ""dtype"": ""string""}, {""name"": ""translation-nl-leemhuis"", ""dtype"": ""string""}, {""name"": ""translation-nl-siregar"", ""dtype"": ""string""}, {""name"": ""translation-no-berg"", ""dtype"": ""string""}, {""name"": ""translation-pl-bielawskiego"", ""dtype"": ""string""}, {""name"": ""translation-ps-abdulwali"", ""dtype"": ""string""}, {""name"": ""translation-pt-elhayek"", ""dtype"": ""string""}, {""name"": ""translation-ro-grigore"", ""dtype"": ""string""}, {""name"": ""translation-ru-abuadel"", ""dtype"": ""string""}, {""name"": ""translation-ru-kalam"", ""dtype"": ""string""}, {""name"": ""translation-ru-krachkovsky"", ""dtype"": ""string""}, {""name"": ""translation-ru-kuliev-alsaadi"", ""dtype"": ""string""}, {""name"": ""translation-ru-kuliev"", ""dtype"": ""string""}, {""name"": ""translation-ru-muntahab"", ""dtype"": ""string""}, {""name"": ""translation-ru-osmanov"", ""dtype"": ""string""}, {""name"": ""translation-ru-porokhova"", ""dtype"": ""string""}, {""name"": 
""translation-ru-sablukov"", ""dtype"": ""string""}, {""name"": ""translation-sd-amroti"", ""dtype"": ""string""}, {""name"": ""translation-so-abduh"", ""dtype"": ""string""}, {""name"": ""translation-sq-ahmeti"", ""dtype"": ""string""}, {""name"": ""translation-sq-mehdiu"", ""dtype"": ""string""}, {""name"": ""translation-sq-nahi"", ""dtype"": ""string""}, {""name"": ""translation-sv-bernstrom"", ""dtype"": ""string""}, {""name"": ""translation-sw-barwani"", ""dtype"": ""string""}, {""name"": ""translation-ta-tamil"", ""dtype"": ""string""}, {""name"": ""translation-tg-ayati"", ""dtype"": ""string""}, {""name"": ""translation-th-thai"", ""dtype"": ""string""}, {""name"": ""translation-tr-ates"", ""dtype"": ""string""}, {""name"": ""translation-tr-bulac"", ""dtype"": ""string""}, {""name"": ""translation-tr-diyanet"", ""dtype"": ""string""}, {""name"": ""translation-tr-golpinarli"", ""dtype"": ""string""}, {""name"": ""translation-tr-ozturk"", ""dtype"": ""string""}, {""name"": ""translation-tr-transliteration"", ""dtype"": ""string""}, {""name"": ""translation-tr-vakfi"", ""dtype"": ""string""}, {""name"": ""translation-tr-yazir"", ""dtype"": ""string""}, {""name"": ""translation-tr-yildirim"", ""dtype"": ""string""}, {""name"": ""translation-tr-yuksel"", ""dtype"": ""string""}, {""name"": ""translation-tt-nugman"", ""dtype"": ""string""}, {""name"": ""translation-ug-saleh"", ""dtype"": ""string""}, {""name"": ""translation-ur-ahmedali"", ""dtype"": ""string""}, {""name"": ""translation-ur-jalandhry"", ""dtype"": ""string""}, {""name"": ""translation-ur-jawadi"", ""dtype"": ""string""}, {""name"": ""translation-ur-junagarhi"", ""dtype"": ""string""}, {""name"": ""translation-ur-kanzuliman"", ""dtype"": ""string""}, {""name"": ""translation-ur-maududi"", ""dtype"": ""string""}, {""name"": ""translation-ur-najafi"", ""dtype"": ""string""}, {""name"": ""translation-ur-qadri"", ""dtype"": ""string""}, {""name"": ""translation-uz-sodik"", ""dtype"": ""string""}, 
{""name"": ""translation-zh-jian"", ""dtype"": ""string""}, {""name"": ""translation-zh-majian"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 171759080, ""num_examples"": 6236}], ""download_size"": 129834597, ""dataset_size"": 171759080}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""license"": ""cc-by-3.0"", ""task_categories"": [""text-classification"", ""token-classification"", ""translation"", ""feature-extraction"", ""text-generation""], ""tags"": [""islam"", ""quran"", ""translations""], ""pretty_name"": ""Quran"", ""multilinguality"": [""monolingual"", ""multilingual""], ""language"": [""sq"", ""ber"", ""ar"", ""am"", ""az"", ""bn"", ""bs"", ""bg"", ""zh"", ""cs"", ""dv"", ""nl"", ""en"", ""fr"", ""de"", ""ha"", ""hi"", ""id"", ""it"", ""ja"", ""ko"", ""ku"", ""ms"", ""ml"", false, ""ps"", ""fa"", ""pl"", ""pt"", ""ro"", ""ru"", ""sd"", ""so"", ""es"", ""sw"", ""sv"", ""tg"", ""ta"", ""tt"", ""th"", ""tr"", ""ur"", ""ug"", ""uz""], ""size_categories"": [""1K Post-processing 작업 내용
## OpenOrca-Ko-v2
1. NIV // ~1,500 examples
2. FLAN // ~9,000 examples
3. T0 // ~6,000 examples
4. CoT // ~2,000 examples
> Dataset composition
- Manually corrected items (v2)
1. Fixed answers that were left in English. (e.g., Nick -> 닉, Lucky -> 운이 좋음, ...)
2. Removed the KoCoT dataset.
3. Fixed some answers such as Yes, True, False
> Post-processing notes
## Translation
Using DeepL Pro API. Thanks.
---
>Below is the original dataset card
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
🐋 The OpenOrca Dataset! 🐋

We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## OpenOrca-Platypus2-13B
Our [latest release](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
http://AlignmentLab.ai:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx
Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
# Languages
The language of the data is primarily English.
# Dataset Structure
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
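Since the 'id' field embeds one of 'niv', 't0', 'cot', or 'flan', records can be grouped by source submix. A hypothetical helper (the `<submix>.<number>` id layout is an assumption for illustration, not documented above):

```python
# Hypothetical helper: recover the FLAN Collection submix from the 'id' field,
# which the card says includes one of 'niv', 't0', 'cot', or 'flan'.
# The exact id format ("<submix>.<number>") is an assumption for illustration.
def submix_of(record_id: str) -> str:
    for tag in ("niv", "t0", "cot", "flan"):
        if record_id.startswith(tag):
            return tag
    return "unknown"

print(submix_of("cot.86217"))    # cot
print(submix_of("flan.216661"))  # flan
```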
## Data Splits
The data is unsplit.
# Dataset Creation
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This ""reasoning trace"" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
# Dataset Use
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and ""Teknium""},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv}
}
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```"
LDCC/korag,"{""language"": [""ko""], ""size_categories"": [""10K
# KoMagpie-raw-preview 🐦
This is the first, preview version of 🐦 KoMagpie dataset.
## How to use
```python
>>> from datasets import load_dataset
>>> ds = load_dataset(""channelcorp/komagpie-raw-preview"", split=""train"")
>>> ds
Dataset({
features: ['instruction', 'output', 'model'],
num_rows: 487171
})
```
```python
>>> ds[0]
{'instruction': '안녕하세요? 저는 마케팅 매니저로 일하면서 새로운 소비자 트렌드를 파악하기 위해 노력하고 있습니다. 이와 관련해 도움이 될 만한 영어 자료를 찾아주실 수 있나요?',
'output': '안녕하세요! 물론이죠, 적절한 리소스를 찾는 것은 중요합니다. 이 밑에 있는 사이트들은 마케팅 및 소비자 트렌드에 관한 최신 정보를 제공하는 곳들입니다:\n\n1. **eMarketer:** 이 사이트는 디지털 마케팅, 미디어, 그리고 전자상거래의 세계적인 트렌드를 분석합니다. (www.emarketer.com)\n\n2. **Marketing Dive:** 이 웹사이트는 일상적인 비즈니스 뉴스와 마케팅 업계의 트렌드를 포함합니다. (www.marketingdive.com)\n\n3. **HubSpot:** 이 조직은 새로운 디지털 마케팅 트렌드와 그들의 인플루언스에 대한 광범위한 정보를 제공합니다. 또한 리소스, 툴 및 트레이닝을 제공합니다. (www.hubspot.com)\n\n4. **Adweek:** 광고, 디지털 마케팅, 미디어 트렌드를 설명하는 자료를 제공합니다. (www.adweek.com)\n\n5. **Nielsen:** 이 사이트는 전세계의 소비자 행동에 대한 통찰을 제공합니다. ((www.nielsen.com)\n\n6. **Trend Hunter:** 아이디어, 트렌드, 그리고 방대한 혁신에 관한 세계 최대의 웹사이트입니다. (www.trendhunter.com)\n\n Parses는 더 많은 정보가 필요하시다면 무언가를 명확히 말씀해주시면 감사하겠습니다.',
'model': 'Qwen/Qwen2-72B-Instruct'}
```
### Development Process
We followed [Magpie's process](https://github.com/magpie-align/magpie) to create a Korean version.
1. Generate instruction data using BOS token
```python
payload = {
""model"": ""LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct"",
""prompt"": ""[|system|]You are a helpful assistant.[|endofturn|]\n[|Korean user|]"", # we used [|Korean user|] instead of [|user|] to gather Korean Instructions
""stream"": False,
""n"": 128,
""max_tokens"": 128,
""stop"": [""\n"", ""**"", ""|""]
}
```
We used the [LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct) model to generate the instruction part of the dataset,
following Magpie's method.
2. Deduplicate using Exact Match
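A minimal sketch of step 2, exact-match deduplication (an illustrative helper, not the authors' actual script; the `instruction` field name matches the schema shown above):

```python
# Keep the first occurrence of each distinct instruction, drop exact repeats.
def dedup_exact(records):
    seen = set()
    unique = []
    for rec in records:
        key = rec["instruction"].strip()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"instruction": "안녕하세요?"},
    {"instruction": "안녕하세요?"},   # exact duplicate, dropped
    {"instruction": "다른 질문입니다."},
]
print(len(dedup_exact(records)))  # 2
```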
3. Generate output part using open LLMs
We used the [Qwen/Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) model to generate the output part of the dataset, limiting generation to a single turn.
## License
- Qwen/Qwen2-72B-Instruct : https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
- LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct : https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct/blob/main/LICENSE
## Disclaimer
This is not an officially supported Channel Corp product.
## Acknowledgement
This research is supported by **TPU Research Cloud program**."
jp1924/KconfSpeech,{},"---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: id
dtype: string
- name: dataSet
struct:
- name: version
dtype: string
- name: date
dtype: string
- name: typeInfo
struct:
- name: category
dtype: string
- name: subcategory
dtype: string
- name: place
dtype: string
- name: speakers
list:
- name: id
dtype: string
- name: gender
dtype: string
- name: age
dtype: string
- name: residence
dtype: string
- name: inputType
dtype: string
- name: dialogs
list:
- name: speaker
dtype: string
- name: audioPath
dtype: string
- name: textPath
dtype: string
splits:
- name: train
num_bytes: 342782915304.375
num_examples: 1824445
- name: validation
num_bytes: 3177111029.875
num_examples: 16113
download_size: 334480278087
dataset_size: 345960026334.25
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
task_categories:
- automatic-speech-recognition
language:
- ko
tags:
- STT
- Audio
size_categories:
- 100B""
}
```
The format has the following keys:
```md
- ""title"" (str) [The title of the article]
- ""text"" (str) [The html content converted fro html into markdown.]
```
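A hedged sketch of reading one record with the keys listed above; the sample line here is invented for illustration, not taken from the dataset:

```python
import json

# Each record is assumed to be one JSON object with 'title' and 'text' keys.
line = json.dumps({'title': 'Array.prototype.map()',
                   'text': '# Array.prototype.map()\n\nThe map() method...'})
record = json.loads(line)
print(record['title'])  # Array.prototype.map()
```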
## Recursal's Vision
> To make AI accessible to everyone, regardless of language or economic status
This is the collective goal of the `RWKV Open Source foundation` and `Recursal AI`, the commercial entity that backs it.
We believe that AI should not be controlled by a select few organizations, and that it should be made accessible regardless of whether you are rich or poor, or a native speaker of English.
### About RWKV
RWKV is an open-source, non-profit group under the Linux Foundation, focused on developing the RWKV AI architecture in accordance with our vision.
The RWKV architecture scales efficiently and economically. As an RNN & Transformer hybrid, it provides performance similar to leading transformer models while having the compute and energy efficiency of an RNN-based architecture.
You can find out more about the project and the latest models at the following:
- [https://blog.rwkv.com](https://blog.rwkv.com)
- [https://wiki.rwkv.com](https://wiki.rwkv.com)
### About Recursal AI
Recursal AI is the commercial entity built to support RWKV model development and users, while providing commercial services via its public cloud, or private-cloud / on-premise offerings.
As part of our vision, our commitment is to ensure open-source development of, and access to, the best foundational AI models and datasets.
The dataset/models provided here are part of that commitment.
You can find out more about Recursal AI here
- [https://recursal.ai](https://recursal.ai)
- [https://blog.recursal.ai](https://blog.recursal.ai)
### Dataset Curators
KaraKaraWitch. (I typically hang out in PygmalionAI discord, sometimes EleutherAI. If something is wrong, `@karakarawitch` on discord.)
I'd be happy if you could spread the word and recommend this dataset.
### Licensing Information
MDN lists their license as [CC-BY-SA.](https://developer.mozilla.org/en-US/docs/MDN/Writing_guidelines/Attrib_copyright_license)
Recursal Waifus (The banner image) are licensed under CC-BY-SA.
They do not represent the related websites in any official capacity unless otherwise stated or announced by the website.
You may use them as a banner image. However, you must always link back to the dataset.
### Citation Information
```
@misc{MDN,
title = {MDN},
author = {KaraKaraWitch, recursal.ai},
year = {2024},
howpublished = {\url{https://huggingface.co/datasets/recursal/MDN}},
}
```"
jihye-moon/LawQA-Ko,"{""task_categories"": [""text-generation"", ""question-answering""], ""language"": [""ko""], ""tags"": [""legal""], ""size_categories"": [""10K
This is a dataset of legal questions and answers.
It was built by merging the questions and answers from the datasets below.
| Source | Dataset Page | Rows |
|---|---|---|
|[Easylaw 100 Questions & Answers](https://www.easylaw.go.kr/CSP/OnhunqueansLstRetrieve.laf?search_put=)| [jiwoochris/easylaw_kr](https://huggingface.co/datasets/jiwoochris/easylaw_kr) | 2,195 rows |
|[Korea Legal Aid Corporation legal counseling cases](https://www.klac.or.kr/legalinfo/counsel.do)| [jihye-moon/klac_legal_aid_counseling](https://huggingface.co/datasets/jihye-moon/klac_legal_aid_counseling) | 10,037 rows |
|[Korea Legal Aid Corporation cyber counseling](https://www.klac.or.kr/legalstruct/cyberConsultation.do)| [jihye-moon/klac_cyber_counseling](https://huggingface.co/datasets/jihye-moon/klac_cyber_counseling) | 2,587 rows |
※ All of the above data was collected by crawling web pages.
※ The Korea Legal Aid Corporation data was preprocessed after crawling (removing the corporation's notice phrases, cushion words, etc.)."
Bingsu/namuwiki_20210301_filtered,"{""annotations_creators"": [""no-annotation""], ""language_creators"": [""crowdsourced""], ""language"": [""ko""], ""license"": [""cc-by-nc-sa-2.0""], ""multilinguality"": [""monolingual""], ""pretty_name"": ""Namuwiki database dump (2021-03-01)"", ""size_categories"": [""100K
[heegyu/namuwiki-extracted](https://huggingface.co/datasets/heegyu/namuwiki-extracted)
[heegyu/namuwiki-sentences](https://huggingface.co/datasets/heegyu/namuwiki-sentences)
### License
[CC BY-NC-SA 2.0 KR](https://creativecommons.org/licenses/by-nc-sa/2.0/kr/)
## Data Structure
### Data Instance
```pycon
>>> from datasets import load_dataset
>>> dataset = load_dataset(""Bingsu/namuwiki_20210301_filtered"")
>>> dataset
DatasetDict({
train: Dataset({
features: ['title', 'text'],
num_rows: 571308
})
})
```
```pycon
>>> dataset[""train""].features
{'title': Value(dtype='string', id=None),
'text': Value(dtype='string', id=None)}
```
### Data Size
download: 3.26 GiB
generated: 3.73 GiB
total: 6.99 GiB
### Data Field
- title: `string`
- text: `string`
### Data Splits
| | train |
| ---------- | ------ |
| # of texts | 571308 |
```pycon
>>> dataset[""train""][2323]
{'title': '55번 지방도',
'text': '55번 국가지원지방도\n해남 ~ 금산\n시점 전라남도 해남군 북평면 남창교차로\n종점 충청남도 금산군 금산읍 우체국사거리\n총 구간 279.2km\n경유지 전라남도 강진군, 장흥군, 영암군 전라남도 나주시, 화순군 광주광역시 동구, 북구 전라남도 담양군 전라북도 순창군, 정읍시, 완주군 전라북도 임실군, 진안군\n개요\n국가지원지방도 제55호선은 전라남도 해남군에서 출발하여 충청남도 금산군까지 이어지는 대한민국의 국가지원지방도이다.\n전라남도 해남군 북평면 - 전라남도 강진군 도암면 구간은 광주광역시, 전라남도 동부권, 영남 지방에서 완도군 완도읍으로 갈 때 주로 이용된다.] 해남 - 완도구간이 확장되기 전에는 그랬다. 강진군, 장흥군은 예외]\n노선\n전라남도\n해남군\n백도로\n북평면 남창교차로에서 13번 국도, 77번 국도와 만나며 출발한다.\n쇄노재\n북일면 북일초교 앞에서 827번 지방도와 만난다.\n강진군\n백도로\n도암면소재지 사거리에서 819번 지방도와 만난다. 819번 지방도는 망호선착장까지만 길이 있으며, 뱃길을 통해 간접적으로 바다 건너의 819번 지방도와 연결된다.\n석문공원\n도암면 계라교차로에서 18번 국도에 합류한다. 우회전하자. 이후 강진읍까지 18번 국도와 중첩되고 장흥군 장흥읍까지 2번 국도와 중첩된다. 그리고 장흥읍부터 영암군을 거쳐 나주시 세지면까지는 23번 국도와 중첩된다.\n나주시\n동창로\n세지면 세지교차로에서 드디어 23번 국도로부터 분기하면서 820번 지방도와 직결 합류한다. 이 길은 2013년 현재 확장 공사 중이다. 확장공사가 완료되면 동창로가 55번 지방도 노선이 된다.\n세남로\n봉황면 덕림리 삼거리에서 820번 지방도와 분기한다.\n봉황면 철천리 삼거리에서 818번 지방도와 합류한다.\n봉황면 송현리 삼거리에서 818번 지방도와 분기한다.\n송림산제길\n동창로\n여기부터 완공된 왕복 4차로 길이다. 이 길을 만들면서 교통량이 늘어났지만 주변 농민들이 이용하는 농로의 교량을 설치하지 않아 문제가 생기기도 했다. #1 #2\n세남로\n남평읍에서 다시 왕복 2차로로 줄어든다.\n남평읍 남평오거리에서 822번 지방도와 만난다.\n산남로\n남평교를 건너고 남평교사거리에서 우회전\n동촌로\n남평역\n화순군\n동촌로\n화순읍 앵남리 삼거리에서 817번 지방도와 합류한다. 좌회전하자.\n앵남역\n지강로\n화순읍 앵남리 앵남교차로에서 817번 지방도와 분기한다. 앵남교차로부터 나주 남평읍까지 55번 지방도의 확장공사가 진행중이다.\n오성로\n여기부터 화순읍 대리사거리까지 왕복 4차선으로 확장 공사를 진행했고, 2015년 8월 말 화순읍 구간은 왕복 4차선으로 확장되었다.\n화순역\n화순읍에서 광주광역시 동구까지 22번 국도와 중첩되고, 동구부터 전라북도 순창군 쌍치면까지는 29번 국도와 중첩된다.\n전라북도\n순창군\n청정로\n29번 국도를 따라가다가 쌍치면 쌍길매삼거리에서 우회전하여 21번 국도로 들어가자. 쌍치면 쌍치사거리에서 21번 국도와 헤어진다. 직진하자.\n정읍시\n청정로\n산내면 산내사거리에서 715번 지방도와 직결하면서 30번 국도에 합류한다. 좌회전하여 구절재를 넘자.\n산외로\n칠보면 시산교차로에서 49번 지방도와 교차되면 우회전하여 49번 지방도와 합류한다. 이제 오랜 시간 동안 49번 지방도와 합류하게 될 것이다.\n산외면 산외교차로에서 715번 지방도와 교차한다.\n엄재터널\n완주군\n산외로\n구이면 상용교차로에서 27번 국도에 합류한다. 좌회전하자.\n구이로\n구이면 백여교차로에서 27번 국도로부터 분기된다.\n구이면 대덕삼거리에서 714번 지방도와 만난다.\n구이면 염암삼거리에서 우회전\n신덕평로\n고개가 있다. 완주군과 임실군의 경계이다.\n임실군\n신덕평로\n신덕면 외량삼거리, 삼길삼거리에서 749번 지방도와 만난다.\n야트막한 고개가 하나 있다.\n신평면 원천리 원천교차로에서 745번 지방도와 교차한다.\n신평면 관촌역 앞에서 17번 국도와 합류한다. 
좌회전하자.\n관진로\n관촌면 병암삼거리에서 17번 국도로부터 분기된다.\n순천완주고속도로와 교차되나 연결되지 않는다.\n진안군\n관진로\n성수면 좌산리에서 721번 지방도와 만난다.\n성수면 좌산리 좌산삼거리에서 721번 지방도와 만난다.\n마령면 강정교차로 부근에서 745번 지방도와 만난다.\n익산포항고속도로와 교차되나 연결되지 않는다.\n진안읍 진안연장농공단지 앞에서 26번 국도에 합류한다. 좌회전하자.\n전진로\n부귀면 부귀교차로에서 드디어 49번 지방도를 떠나보낸다. 그러나 아직 26번 국도와 중첩된다.\n완주군\n동상로\n드디어 55번이라는 노선 번호가 눈에 보이기 시작한다. 완주군 소양면에서 26번 국도와 분기된다. 이제부터 꼬불꼬불한 산길이므로 각오하고 운전하자.\n밤치. 소양면과 동상면의 경계가 되는 고개다.\n동상면 신월삼거리에서 732번 지방도와 만난다. 동상저수지에 빠지지 않도록 주의하자.\n동상주천로\n운장산고개를 올라가야 한다. 완주군과 진안군의 경계다. 고개 정상에 휴게소가 있다.\n진안군\n동상주천로\n주천면 주천삼거리에서 725번 지방도와 만난다.\n충청남도\n금산군\n보석사로\n남이면 흑암삼거리에서 635번 지방도와 만난다. 우회전해야 한다. 네이버 지도에는 좌회전해서 좀더 가면 나오는 길을 55번 지방도라고 써놓았는데, 잘못 나온 거다. 다음 지도에는 올바르게 나와있다.\n십이폭포로\n남이면에서 남일면으로 넘어간다.\n남일면에서 13번 국도와 합류한다. 좌회전하자. 이후 구간은 남이면을 거쳐 금산읍까지 13번 국도와 중첩되면서 55번 지방도 구간은 종료된다.'}
```"
allenai/pixmo-cap-qa,"{""language"": [""en"", ""ko""], ""license"": ""odc-by"", ""task_categories"": [""visual-question-answering""], ""dataset_info"": {""features"": [{""name"": ""image_url"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""messages"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 465149568, ""num_examples"": 271714}], ""download_size"": 240926242, ""dataset_size"": 465149568}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}]}","# PixMo-CapQA
PixMo-CapQA is a synthetic dataset of question/answer pairs about images. The data was generated by using the
[Claude](https://www.anthropic.com/claude) large language model to build Q/A pairs from [dense captions of images](https://huggingface.co/datasets/allenai/pixmo-cap) (the model did not see the actual images).
PixMo-CapQA is a part of the [PixMo dataset collection](https://huggingface.co/collections/allenai/pixmo-674746ea613028006285687b) and was used to train the [Molmo family of models](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19).
Quick links:
- 📃 [Paper](https://molmo.allenai.org/paper.pdf)
- 🎥 [Blog with Videos](https://molmo.allenai.org/blog)
## Loading
```python
import datasets

data = datasets.load_dataset(""allenai/pixmo-cap-qa"", split=""train"")
```
## Data Format
Images are stored as URLs that will need to be downloaded separately.
The image URLs can be repeated since many of the images have multiple Q/A pairs.
- The `question` field contains the input text; it includes ""[USER]"" and ""[ASSISTANT]"" tags
- The `answer` field contains the final target output text
- The `messages` field contains the same data in a list-of-messages format. The first message is from the
user, then messages alternate between user and assistant. This text does not contain ""[USER]"" and ""[ASSISTANT]"" tags
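As a sketch of how these fields relate, the tagged `question` string can be rebuilt from the untagged `messages` list. This assumes messages alternate user/assistant starting with the user, and that the final message is the assistant's answer; the helper name is ours, not part of the dataset.

```python
# Hedged sketch: rebuild the tagged 'question' and the 'answer' from the
# 'messages' list. Assumes messages alternate user/assistant, starting
# with the user, and that the final message is the assistant's answer.
def messages_to_qa(messages):
    tagged = []
    for i, text in enumerate(messages[:-1]):
        tag = '[USER]' if i % 2 == 0 else '[ASSISTANT]'
        tagged.append(f'{tag} {text}')
    return '\n'.join(tagged), messages[-1]
```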
## License
This dataset is licensed under ODC-BY-1.0. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use).
This dataset includes data generated from Claude which are subject to Anthropic [terms of service](https://www.anthropic.com/legal/commercial-terms) and [usage policy](https://www.anthropic.com/legal/aup)."
jp1924/MeetingSpeech,{},"---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
- name: sentence
dtype: string
- name: original_form
dtype: string
- name: start
dtype: float32
- name: end
dtype: float32
- name: term
dtype: string
- name: environment
dtype: string
- name: isIdiom
dtype: bool
- name: hangeulToEnglish
list:
- name: id
dtype: int16
- name: hangeul
dtype: string
- name: english
dtype: string
- name: begin
dtype: int16
- name: end
dtype: int16
- name: hangeulToNumber
list:
- name: id
dtype: int16
- name: hangeul
dtype: string
- name: number
dtype: string
- name: begin
dtype: int16
- name: end
dtype: int16
- name: speaker
struct:
- name: id
dtype: string
- name: name
dtype: string
- name: age
dtype: string
- name: occupation
dtype: string
- name: role
dtype: string
- name: sex
dtype: string
- name: metadata
struct:
- name: title
dtype: string
- name: creator
dtype: string
- name: distributor
dtype: string
- name: year
dtype: int16
- name: category
dtype: string
- name: sampling
dtype: string
- name: date
dtype: string
- name: topic
dtype: string
- name: media
dtype: string
- name: communication
dtype: string
- name: type
dtype: string
- name: domain
dtype: string
- name: speaker_num
dtype: int16
- name: organization
dtype: string
- name: annotation_level
dtype: string
splits:
- name: train
num_bytes: 649259099466
num_examples: 3446200
- name: validation
num_bytes: 75950798309
num_examples: 374680
download_size: 715527121692
dataset_size: 725209897775
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
task_categories:
- automatic-speech-recognition
language:
- ko
---"
nlpai-lab/ko-triplet-v1.0,"{""language"": [""ko""], ""dataset_info"": {""features"": [{""name"": ""query"", ""dtype"": ""string""}, {""name"": ""document"", ""dtype"": ""string""}, {""name"": ""hard_negative"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 628315763, ""num_examples"": 744862}], ""download_size"": 270060556, ""dataset_size"": 628315763}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}]}","# Dataset Card for nlpai-lab/ko-triplet-v1.0
## Dataset Statistics
| Split | # Examples | Size (bytes) |
|-------|------------|--------------|
| Train | 744,862 | 628,315,763 |
## Dataset Structure
### Train Sample
| query | document | hard_negative |
| --- | --- | --- |
| 데이터 사전 캐시 방법을 적용하면 어떻게 11초에서 요청한 데이터를 핸드오버 구간이 지나고 난 다음인 14초에 타겟 드론을 통해 받을 수 있어? | 제안된 방법을 적용한 경우에는 11초에서 요청한 데이터를 진행 방향에 있는 타겟 드론의 CS에 사전에 캐시 해둠으로써 핸드오버 구간이 지나고 난 다음인 14초에서 타겟 드론을 통해 데이터를 받는다. | 데이터 요청자가 타겟 드론으로 핸드오버 하기 전에, 요청한 데이터를 타겟 드론의 CS로 사전에 캐시한다. |
| 대통령, 경제, 회복, 고용, 안정, 대책, 발표하다 | 대통령이 신년 방송에서 경제 회복과 고용 안정 대책을 발표했다. | 경제 성장이 높을 때 생산, 고용, 판매, 소득이 더욱 증가한다. |
| 고지방 식이와 간장 무게의 상관관계를 다룬 연구를 한 사람은 누구인가? | 고지방 섭취 시 간장 무게가 증가한다는 Sung, Wursch 및 Park의 보고와 일치되는 결과였으며, 고지방 섭취로 인해 간장이 비대해지고, 동맥 내에 지질이 축적되어 관상 순환의 이상으로 야기된 것으로 생각된다. | Shin 등은 고지방 식이에 연잎 건분을 첨가한 식이로서 6 주간 사육했을 때 유의적인 체중감소효과를 나타내었으며, 이때 간장, 신장, 비장, 폐 등의 장기 무게도 감소한 결과는 체중감소로 인한 장기무게의 감소로 보고한 바 있다. |
| 올해, 엄마, 만나다, 고향, 오다 | 나는 올해 엄마를 만나러 고향에 자주 왔다. | 수박, 참외, 조롱박, 수세미, 오이, 가지를 정성껏 심어 무럭무럭 키웠다. |
| 뛰어오르다, 위, 하다, 수탉, 지붕 | 고양이가 슬금슬금 다가오자 수탉은 푸드득 하고 지붕 위로 뛰어올랐다. | 재주는 예절, 음악, 활쏘기, 글쓰기, 말타기, 계산하기 등등 이다. |"
jp1924/KrespSpeech,{},"---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: id
dtype: string
- name: dataSet
struct:
- name: version
dtype: string
- name: date
dtype: string
- name: typeInfo
struct:
- name: category
dtype: string
- name: subcategory
dtype: string
- name: place
dtype: string
- name: speakers
list:
- name: id
dtype: string
- name: gender
dtype: string
- name: type
dtype: string
- name: age
dtype: string
- name: residence
dtype: string
- name: inputType
dtype: string
- name: dialogs
list:
- name: speaker
dtype: string
- name: audioPath
dtype: string
- name: textPath
dtype: string
splits:
- name: train
num_bytes: 335639155312.5
num_examples: 2067668
- name: validation
num_bytes: 3382855559.25
num_examples: 20830
download_size: 324002692624
dataset_size: 339022010871.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
task_categories:
- automatic-speech-recognition
language:
- ko
tags:
- STT
- Audio
size_categories:
- 100B
The output is the content under the document's == 개요 == (Overview) section. Entries with no overview, or with an overview that is too short, were excluded."
causal-lm/instructions-ko,"{""language"": ""ko"", ""dataset_info"": {""features"": [{""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""input"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}, {""name"": ""dialogue"", ""list"": [{""name"": ""content"", ""dtype"": ""string""}, {""name"": ""role"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 138160314, ""num_examples"": 112104}, {""name"": ""validation"", ""num_bytes"": 15418231, ""num_examples"": 12429}], ""download_size"": 85992704, ""dataset_size"": 153578545}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""validation"", ""path"": ""data/validation-*""}]}]}","# Dataset Card for ""instructions-ko""
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)"
Bingsu/laion2B-multi-korean-subset,"{""annotations_creators"": [""crowdsourced""], ""language_creators"": [""crowdsourced""], ""language"": [""ko""], ""license"": [""cc-by-4.0""], ""multilinguality"": [""monolingual""], ""pretty_name"": ""laion2B-multi-korean-subset"", ""size_categories"": [""10M>> from datasets import load_dataset
>>> dataset = load_dataset(""Bingsu/laion2B-multi-korean-subset"")
>>> dataset
DatasetDict({
train: Dataset({
features: ['SAMPLE_ID', 'URL', 'TEXT', 'HEIGHT', 'WIDTH', 'LICENSE', 'LANGUAGE', 'NSFW', 'similarity'],
num_rows: 11376263
})
})
```
```py
>>> dataset[""train""].features
{'SAMPLE_ID': Value(dtype='int64', id=None),
'URL': Value(dtype='string', id=None),
'TEXT': Value(dtype='string', id=None),
'HEIGHT': Value(dtype='int32', id=None),
'WIDTH': Value(dtype='int32', id=None),
'LICENSE': Value(dtype='string', id=None),
'LANGUAGE': Value(dtype='string', id=None),
'NSFW': Value(dtype='string', id=None),
'similarity': Value(dtype='float32', id=None)}
```
### Data Size
download: 1.56 GiB
generated: 2.37 GiB
total: 3.93 GiB
### Data Field
- 'SAMPLE_ID': `int`
- 'URL': `string`
- 'TEXT': `string`
- 'HEIGHT': `int`
- 'WIDTH': `int`
- 'LICENSE': `string`
- 'LANGUAGE': `string`
- 'NSFW': `string`
- 'similarity': `float`
### Data Splits
| | train |
| --------- | -------- |
| # of data | 11376263 |
## Note
### Height, Width
The image's width appears to be stored as `HEIGHT` and its height as `WIDTH`.
```pycon
>>> dataset[""train""][98]
{'SAMPLE_ID': 2937471001780,
'URL': 'https://image.ajunews.com/content/image/2019/04/12/20190412175643597949.png',
'TEXT': '인천시교육청, 인천 시군구발전협의회 임원진과의 간담회 개최',
'HEIGHT': 640,
'WIDTH': 321,
'LICENSE': '?',
'LANGUAGE': 'ko',
'NSFW': 'UNLIKELY',
'similarity': 0.33347243070602417}
```

### csv file, pandas
```py
# pip install zstandard
import pandas as pd
from huggingface_hub import hf_hub_url
url = hf_hub_url(""Bingsu/laion2B-multi-korean-subset"", filename=""laion2B-multi-korean-subset.csv.zst"", repo_type=""dataset"")
# url = ""https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset/resolve/main/laion2B-multi-korean-subset.csv.zst""
df = pd.read_csv(url)
```
778 MB
### Code used to generate
```py
import csv
import re
from datasets import load_dataset
from tqdm import tqdm
pattern = re.compile(r""[가-힣]"")
def quote(s: str) -> str:
s = s.replace('""""""', """")
return s
def filter_func(example) -> bool:
lang = example.get(""LANGUAGE"")
text = example.get(""TEXT"")
if not isinstance(lang, str) or not isinstance(text, str):
return False
return lang == ""ko"" or pattern.search(text) is not None
file = open(""./laion2B-mulit_korean_subset.csv"", ""w"", encoding=""utf-8"", newline="""")
ds = load_dataset(""laion/laion2B-multi"", split=""train"", streaming=True)
dsf = ds.filter(filter_func)
header = [
""SAMPLE_ID"",
""URL"",
""TEXT"",
""HEIGHT"",
""WIDTH"",
""LICENSE"",
""LANGUAGE"",
""NSFW"",
""similarity"",
]
writer = csv.DictWriter(file, fieldnames=header)
writer.writeheader()
try:
for data in tqdm(dsf): # total=11378843
data[""TEXT""] = quote(data.get(""TEXT"", """"))
if data[""TEXT""]:
writer.writerow(data)
finally:
file.close()
print(""Done!"")
```
The run took about 8 hours. Afterwards, rows whose `HEIGHT` or `WIDTH` was None were removed before uploading.
### img2dataset
You can use [img2dataset](https://github.com/rom1504/img2dataset) to turn the images referenced by the URLs into an image dataset."
allganize/financial-mmlu-ko,"{""dataset_info"": {""features"": [{""name"": ""conversation_id"", ""dtype"": ""string""}, {""name"": ""conversations"", ""list"": [{""name"": ""from"", ""dtype"": ""string""}, {""name"": ""value"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 217945, ""num_examples"": 455}], ""download_size"": 105791, ""dataset_size"": 217945}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""test"", ""path"": ""data/test-*""}]}], ""language"": [""ko""]}","# financial-mmlu-ko
- The `financial-mmlu-ko` dataset is a multiple-choice dataset for the financial domain. Given a question and answer choices, the task is to select the correct answer. The input is text, and the Fewshot example text below can be provided together as a system prompt or context.
- To build the Korean data, we crawled, collected, and reviewed questions from public sites hosting various financial problems (104 items). We also generated questions with GPT-4 based on Wikipedia and the financial dictionaries/reports of public sites, followed by human review (315 items).
### Data Sources
- [Korean Wikipedia finance category](https://ko.wikipedia.org/wiki/%EB%B6%84%EB%A5%98:%EA%B8%88%EC%9C%B5)
- [Bank of Korea economic research reports](https://www.bok.or.kr/portal/bbs/P0002454/list.do?menuNo=200431)
- [Econedu - current affairs and economics quizzes](https://www.econedu.go.kr/mec/ots/brd/list.do?mnuBaseId=MNU0000286&tplSer=ac73e13e-2d3c-485c-b7fe-a5823b527ead)
### Data Example
```
{
'conversation_id': 'financial_mmlu_0',
'conversations': array([
{
'from': 'human',
'value': '금리의 종류에 대한 설명으로 바르지 않은 것은?\n
1. 변동금리는 시장금리 변동에 따른 위험을 자금공급자가 부담하게 된다\n
2. 피셔방정식에 의하면 실질금리는 명목금리에서 기대인플레이션을 차감하면\n 구할 수 있다.\n
3. 복리는 원금에 대한 이자뿐 아니라 이자에 대한 이자도 함께 계산하는 방법이\n다.\n
4. 실효금리는 이자지급방법, 상환방법, 수수료, 세금 등을 감안한 후 차입자가\n실질적으로 부담하는 순자금조달비용을 말한다.\n
5. 채권시장에서는 금리보다 수익률이라는 용어를 더 많이 사용한다.'
},
{
'from': 'gpt',
'value': '1'
}
], dtype=object)
}
```
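A minimal sketch of consuming this record format follows. The field names (`from`, `value`, `human`, `gpt`) are taken from the example above; the helper name is illustrative.

```python
# Sketch: split a 'conversations' record (format shown above) into the
# question text and the gold answer. Helper name is illustrative.
def split_example(conversations):
    question = next(t['value'] for t in conversations if t['from'] == 'human')
    answer = next(t['value'] for t in conversations if t['from'] == 'gpt')
    return question, answer
```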
### Fewshot Example
```
You are a financial expert.
You must answer the user's question correctly.
Answer the user's question with a number.
Example 1=\""\""\""
User: 다음은 통화기능 중 어느 것과 관련성이 높은가?
장래에 지급해야 하는 채무는 화폐로 표시할 수 있다.
이는 화폐의 액면가치가 노동력이나 물품과 달리 소멸되거나 변질되지 않기 때문이다.
1. 교환의 매개 수단
2. 가치척도의 수단
3. 가치저장의 수단
4. 이연지급의 수단
5. 투자수단
Assistant: 4
\""\""\""
Example 2=\""\""\""
User: ( )은(는) 2006년 2월 앨런 그리스펀 뒤를 이어 미국 중앙은행인 연방준비제도이사회(FRB)의 의장이 된 벤 버냉키(Ben Shalom Bernanke)의 별명이다. 이 별명에 걸맞게 2조 달러가 넘는 자금을 시장에 뿌려 미국 금융시장을 벼랑 끝에서 건져내는 데 성공했다는 평가도 받고 있다.
1. 헬리콥터 벤
2. 전투기 벤
3. 열기구 벤
4. 비행기 벤
Assistant: 1
\""\""\""
```
License
- Wikipedia: CC BY-SA 4.0
- [Bank of Korea copyright protection policy](https://www.bok.or.kr/portal/main/contents.do?menuNo=200228)"
Songweii/M3GIA,"{""license"": ""apache-2.0"", ""language"": [""en"", ""zh"", ""es"", ""fr"", ""pt"", ""ko""], ""tags"": [""Multilingual"", ""Multimodal"", ""Cognitive Science"", ""General Intelligence Ability Benchmark""], ""pretty_name"": ""M3GIA"", ""size_categories"": [""1K
- **Source Data:** [https://dumps.wikimedia.org/other/enterprise_html/](https://dumps.wikimedia.org/other/enterprise_html)
### Dataset Summary
Wikipedia dataset containing cleaned articles of all languages.
The dataset is manually built from Wikipedia HTML dumps, with one split per language.
Each example contains the content of one full Wikipedia article.
### Supported Tasks and Leaderboards
The dataset is generally used for Language Modelling.
### Languages
We have selected the following Wikipedias:
```
af.wikipedia.org
ar.wikipedia.org
ast.wikipedia.org
az.wikipedia.org
be.wikipedia.org
bg.wikipedia.org
bn.wikipedia.org
ca.wikipedia.org
ce.wikipedia.org
cs.wikipedia.org
cy.wikipedia.org
da.wikipedia.org
de.wikipedia.org
el.wikipedia.org
en.wikipedia.org
eo.wikipedia.org
es.wikipedia.org
et.wikipedia.org
eu.wikipedia.org
fa.wikipedia.org
fi.wikipedia.org
fr.wikipedia.org
gl.wikipedia.org
he.wikipedia.org
hi.wikipedia.org
hr.wikipedia.org
hu.wikipedia.org
hy.wikipedia.org
id.wikipedia.org
it.wikipedia.org
ja.wikipedia.org
ka.wikipedia.org
kk.wikipedia.org
ko.wikipedia.org
la.wikipedia.org
lt.wikipedia.org
lv.wikipedia.org
min.wikipedia.org
mk.wikipedia.org
ms.wikipedia.org
my.wikipedia.org
nl.wikipedia.org
nn.wikipedia.org
no.wikipedia.org
pl.wikipedia.org
pt.wikipedia.org
ro.wikipedia.org
ru.wikipedia.org
sh.wikipedia.org
simple.wikipedia.org
sk.wikipedia.org
sl.wikipedia.org
sr.wikipedia.org
sv.wikipedia.org
ta.wikipedia.org
tg.wikipedia.org
th.wikipedia.org
tr.wikipedia.org
uk.wikipedia.org
ur.wikipedia.org
uz.wikipedia.org
vi.wikipedia.org
zh-min-nan.wikipedia.org
zh.wikipedia.org
zh-yue.wikipedia.org
```
*`.wikipedia.org`* extensions have been added for your convenience.
### Selection of Wikipedia
We deem a particular Wikipedia language as high quality if it:
1. Has a total article count of `>100,000`.
2. Has a `Depth > 5.1`.
*Depth is calculated using the following equation:*
`depth = (article_edits / total_pages) * ((total_pages - articles) / articles) ** 2`
This formula is directly taken from [list of Wikipedias.](https://meta.wikimedia.org/wiki/Wikipedia_article_depth)
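As a sketch, the depth computation above reads as follows; the argument names mirror the variables in the formula, and the numbers used in testing are illustrative, not real Wikipedia statistics.

```python
# Sketch of the 'depth' formula above. Argument names mirror the
# variables in the equation; values passed in are illustrative.
def depth(article_edits: int, total_pages: int, articles: int) -> float:
    return (article_edits / total_pages) * ((total_pages - articles) / articles) ** 2
```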
### Filtering
Extensive HTML and markdown filtering has been done to derive the final dataset.
For HTML:
1. Parse the article content with BeautifulSoup.
2. We first extract out titles from the Soup.
3. Drop (as in, don't process / skip processing) *stub articles.* To ensure multi-language coverage, we use a list of stub names found across multiple languages using Wikidata. (We have included the template names within `wikipedia_template.py`)
4. Drop *Lsjbot* bot created articles.
5. Collapse styles with `data-mw` component into its next sibling.
6. Remove raw `href` links. (Text of href == href link)
7. Remove citation needed Templates
8. Remove citation Templates
9. Remove Redirect Templates
10. Drop articles where the article content consists of 50% or more of tables and lists.
11. Remove message boxes. (Orange alert boxes on top of articles)
12. Remove infoboxes boxes. (Infoboxes on the right)
13. Selectively remove tables which consist of just empty spaces. (Number of `<td>` elements > len(text_size) and text_size < 50)
14. Cleanup latex code.
15. Empty `class` attributes and `data-mw` attributes
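As one example of these steps, the ratio check in step 10 can be sketched as below. The argument names are our assumptions, mirroring the `tablelist_ratio` values stored in the article meta; this is not the authors' exact code.

```python
# Hedged sketch of HTML step 10: drop articles where tables/lists account
# for 50% or more of the text. Argument names are assumptions mirroring
# the 'tablelist_ratio' entries kept in each article's meta.
def mostly_tablelist(table_text_len: int, total_text_len: int) -> bool:
    return total_text_len > 0 and table_text_len / total_text_len >= 0.5
```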
For Markdown:
1. Cleanup punctuations.
2. Collect text length (text normalized to NFKC, keeping CJK characters as is while decomposing Arabic characters, and counting double-width characters as 2 instead of 1)
3. Filter based on the collected text length (If the article is less than 1000 characters long, it is dropped.)
The final Markdown text and additional data are included in the jsonl file. Additionally, the scripts used are located in the main directory of this folder.
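The length rule in markdown step 2 might look like the following. This is an assumption of how the filter could be implemented with the standard library, not the authors' exact code.

```python
import unicodedata

# Hedged sketch of markdown step 2: NFKC-normalize, then count East Asian
# wide ('W') and fullwidth ('F') characters as 2 and everything else as 1.
def display_length(text: str) -> int:
    text = unicodedata.normalize('NFKC', text)
    return sum(2 if unicodedata.east_asian_width(ch) in ('W', 'F') else 1
               for ch in text)
```

Articles whose `display_length` falls below the 1000-character threshold would then be dropped.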
### Data keys
Users can run `less` to see the contents. A sample and a list of dictionary keys have been provided below:
```json
{
""text"": ""\n**Tharman Shanmugaratnam** PBM (born 25 February 1957) is a Singaporean politician and economist. He is the President of Singapore since 2023. \n\nHe was Senior Minister of Singapore between 2019 and 2023. He was also the Coordinating Minister for Social Policies between 2015 and 2023, and Chairman of the Monetary Authority of Singapore between 2011 and 2023.\n\nOn 8 June 2023, Tharman announced his plans to run for president in the 2023 presidential election. He was elected on 2 September 2023 in a landslide victory, winning 70.40% of the vote.\n\nEarly life and education\n------------------------\n\nTharman was born in the Colony of Singapore in 1957. He studied at the Anglo-Chinese School. When he was studying there, he was not interested in his studies and was not disciplined. However, he liked to read and tried out poetry. During his time at Anglo-Chinese School, he created four poets with his schoolmates. Also, he was interested in sports and spent most of his time playing sports. He even joined his school's hockey team.\n\nThen, he attended the London School of Economics (LSE), graduating with a Bachelor of Science degree in economics.\n\nAfter getting his bachelor's, Tharman went on to study at Wolfson College at the University of Cambridge. There, he completed a Master of Philosophy degree in economics. \n\nTharman then became a student at the Harvard Kennedy School at Harvard University, where he finished a Master in Public Administration (MPA) degree. He was a student activist there. He explored left-wing politics, as he did not agree with the ruling People's Action Party back in Singapore.\n\nTharman was a recipient of the Lucius N. Littauer Fellows Award. The award is given to students with MPA's who showed academic excellence and leadership.In 2011, the LSE gave him an Honorary Fellowship.<...TRUNCATED IN SAMPLE>"",
""meta"": {
""title"": ""Tharman Shanmugaratnam"",
""mostly_tablelist"": false,
""tablelist_ratio"": [
4082,
8644,
0.47223507635354
],
""infobox"": [
""<...TRUNCATED IN SAMPLE>""
],
""td_tables"": [],
""text_length"": 5553
}
}
```
```
text: str (Markdown text)
meta: dict (Contains additional metadata / meta)
- title: str (Article Title)
- mostly_tablelist: bool (Internal flag for HTML step 10)
- tablelist_ratio: list (Internal data, used to compute mostly_tablelist.)
- infobox: list (A list of extracted infoboxes with data-mw attribute for the raw html data.)
- td_tables: list (Extracted tables from HTML step 13)
- text_length: int (Obtained from markdown step 2)
```
### Dataset Curators
KaraKaraWitch. (I typically hang out in the PygmalionAI discord, sometimes EleutherAI. If something is wrong, `@karakarawitch` on discord.)
I'd be happy if you could spread the word and recommend this dataset over wikitext for your use cases `:)`
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (un-versioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
Recursal Waifus (The banner image) are licensed under CC-BY-SA.
They do not represent the related websites in any official capacity unless otherwise stated or announced by the website.
You may use them as a banner image. However, you must always link back to the dataset.
### Citation Information
```
@ONLINE{superwiki-next,
title = {SuperWikiNEXT-32B},
author = {KaraKaraWitch, recursal.ai},
year = {2024},
howpublished = {\url{https://huggingface.co/datasets/recursal/SuperWikipedia-NEXT}},
}
```"
Sakalti/Multilingal-sakalt-data,"{""license"": ""mit"", ""language"": [""ab"", ""bho"", ""ce"", ""cs"", ""da"", ""de"", ""et"", ""es"", ""fr"", ""hi"", ""hrv"", ""hu"", ""it"", ""ja"", ""ko"", ""nl"", ""pl"", ""pt"", ""ro"", ""ru"", ""sah"", ""swh"", ""yue"", ""zh""], ""task_categories"": [""text-generation""]}",A multilingual dataset. MIT licensed.
FreedomIntelligence/ApolloMoEBench,"{""license"": ""mit"", ""configs"": [{""config_name"": ""test_text"", ""data_files"": [{""split"": ""test"", ""path"": ""ApolloMoEBench.json""}]}], ""task_categories"": [""question-answering""], ""tags"": [""biology"", ""medical""], ""language"": [""ar"", ""en"", ""zh"", ""ko"", ""ja"", ""mn"", ""th"", ""vi"", ""lo"", ""mg"", ""de"", ""pt"", ""es"", ""fr"", ""ru"", ""it"", ""hr"", ""gl"", ""cs"", ""co"", ""la"", ""uk"", ""bs"", ""bg"", ""eo"", ""sq"", ""da"", ""sa"", false, ""gn"", ""sr"", ""sk"", ""gd"", ""lb"", ""hi"", ""ku"", ""mt"", ""he"", ""ln"", ""bm"", ""sw"", ""ig"", ""rw"", ""ha""]}","# Democratizing Medical LLMs For Much More Languages
Covering 12 major languages (English, Chinese, French, Hindi, Spanish, Arabic, Russian, Japanese, Korean, German, Italian, Portuguese) and 38 minor languages so far.
📃 Paper • 🌐 Demo • 🤗 ApolloMoEDataset • 🤗 ApolloMoEBench • 🤗 Models •🌐 Apollo • 🌐 ApolloMoE

## 🌈 Update
* **[2024.10.15]** ApolloMoE repo is published!🎉
## Languages Coverage
12 Major Languages and 38 Minor Languages
Click to view the Languages Coverage

## Architecture
Click to view the MoE routing image

## Results
#### Dense
🤗 Apollo2-0.5B • 🤗 Apollo2-1.5B • 🤗 Apollo2-2B
🤗 Apollo2-3.8B • 🤗 Apollo2-7B • 🤗 Apollo2-9B
Click to view the Dense Models Results

#### Post-MoE
🤗 Apollo-MoE-0.5B • 🤗 Apollo-MoE-1.5B • 🤗 Apollo-MoE-7B
Click to view the Post-MoE Models Results

## Usage Format
##### Apollo2
- 0.5B, 1.5B, 7B: User:{query}\nAssistant:{response}<|endoftext|>
- 2B, 9B: User:{query}\nAssistant:{response}\
- 3.8B: <|user|>\n{query}<|end|><|assistant|>\n{response}<|end|>
##### Apollo-MoE
- 0.5B, 1.5B, 7B: User:{query}\nAssistant:{response}<|endoftext|>
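As an illustration, the 0.5B/1.5B/7B format listed above can be applied with a small helper. The token strings are copied verbatim from the list; the function names are ours.

```python
# Sketch of the User/Assistant usage format listed above for the
# 0.5B / 1.5B / 7B models. Token strings are copied from the card.
def format_example(query: str, response: str) -> str:
    return f'User:{query}\nAssistant:{response}<|endoftext|>'

def format_prompt(query: str) -> str:
    # For inference, stop after 'Assistant:' and let the model continue.
    return f'User:{query}\nAssistant:'
```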
## Dataset & Evaluation
- Dataset
🤗 ApolloMoEDataset
Click to expand

- [Data category](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/tree/main/train)
- Evaluation
🤗 ApolloMoEBench
Click to expand
- EN:
- [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
- [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test)
- [PubMedQA](https://huggingface.co/datasets/pubmed_qa): Because the results fluctuated too much, they were not used in the paper.
- [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- ZH:
- [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test)
- [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): Not used in the paper
- Randomly sampled 2,000 single-answer multiple-choice questions.
- [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu)
- Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
- [CMExam](https://github.com/williamliujl/CMExam): Not used in the paper
- Randomly sampled 2,000 multiple-choice questions.
- ES: [Head_qa](https://huggingface.co/datasets/head_qa)
- FR:
- [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA)
- [MMLU_FR]
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- AR: [MMLU_AR](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- JA: [IgakuQA](https://github.com/jungokasai/IgakuQA)
- KO: [KorMedMCQA](https://huggingface.co/datasets/sean0042/KorMedMCQA)
- IT:
- [MedExpQA](https://huggingface.co/datasets/HiTZ/MedExpQA)
- [MMLU_IT]
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- DE: [BioInstructQA](https://huggingface.co/datasets/BioMistral/BioInstructQA): German part
- PT: [BioInstructQA](https://huggingface.co/datasets/BioMistral/BioInstructQA): Portuguese part
- RU: [RuMedBench](https://github.com/sb-ai-lab/MedBench)
- Minor languages: translated medical portion of MMLU
## Results reproduction
We take Apollo2-7B and Apollo-MoE-0.5B as examples.
1. Download the dataset for the project:
```
bash 0.download_data.sh
```
2. Prepare test and dev data for the specific model:
- Create test data with the special tokens
```
bash 1.data_process_test&dev.sh
```
3. Prepare train data for the specific model (create tokenized data in advance):
- You can adjust the training data order and the number of training epochs in this step
```
bash 2.data_process_train.sh
```
4. Train the model
- To train on multiple nodes, please refer to ./src/sft/training_config/zero_multi.yaml
```
bash 3.single_node_train.sh
```
5. Evaluate your model: generate scores for the benchmark
```
bash 4.eval.sh
```
## Citation
Please use the following citation if you intend to use our dataset for training or evaluation:
```
@misc{zheng2024efficientlydemocratizingmedicalllms,
title={Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts},
author={Guorui Zheng and Xidong Wang and Juhao Liang and Nuo Chen and Yuping Zheng and Benyou Wang},
year={2024},
eprint={2410.10626},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.10626},
}
```"
Ash-Hun/Welfare-QA,"{""license"": ""mit"", ""task_categories"": [""question-answering""], ""dataset_info"": {""features"": [{""name"": ""Question"", ""dtype"": ""string""}, {""name"": ""Answer"", ""dtype"": ""string""}, {""name"": ""Documents"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3207687, ""num_examples"": 9547}]}, ""language"": [""ko""], ""tags"": [""Ask-for-Welfare"", ""WelSSiSKo""], ""pretty_name"": ""AskWelfare-v1.0""}","# Dataset Card for Welfare-QA
## Description
This dataset is based on a guidebook published by the Ministry of Health and Welfare of South Korea and registered on [Bokjiro](https://www.bokjiro.go.kr/ssis-tbu/index.do) on May 11, 2023.
It is a Question-Answering-Documents dataset covering about 460 welfare programs described in a 413-page unstructured PDF.
The original can be found at the following link: [👉 '2023 Welfare Services That Empower Me' PDF booklet](https://www.bokjiro.go.kr/ssis-tbu/twatxa/wlfarePr/selectWlfareSubMain.do?dmMnuParam=column27)
## Project Repo
- Github Repo : [Ask-for-Welfare](https://github.com/ssisOneTeam/Ask-for-Welfare)
## How to Use
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset(""Ash-Hun/Welfare-QA"", split='train')
>>> dataset
Dataset({
features: ['Question', 'Answer', 'Documents'],
num_rows: 9547
})
```
```python
>>> dataset[0]
{'Question': 'LPG 사용 가정의 고무호스를 교체하려면 어떤 지원을 받을 수 있나요?',
'Answer': 'LPG용기 사용가구 시설개선 사업을 통해 LPG 고무호스를 금속배관으로 교체하는 데 필요한 지원을 받으실 수 있습니다.',
'Documents': 'LPG용기 사용가구 시설개선'}
```
"
dbdu/ShareGPT-74k-ko,"{""language"": [""ko""], ""pretty_name"": ""ShareGPT-74k-ko"", ""tags"": [""conversation"", ""chatgpt"", ""gpt-3.5""], ""license"": ""cc-by-2.0"", ""task_categories"": [""text-generation""], ""size_categories"": [""10K Break Free from the Language Barrier
Version: 1 - Date: 30 Oct 2023
Collected and Prepared by Felix Leeb (Max Planck Institute for Intelligent Systems, Tübingen, Germany)
License: Babel Briefings Headlines Dataset © 2023 by Felix Leeb is licensed under [CC BY-NC-SA 4.0](http://creativecommons.org/licenses/by-nc-sa/4.0/)
Check out our paper on [arxiv](https://arxiv.org/abs/2403.19352).
This dataset contains 4,719,199 news headlines across 30 different languages collected between 8 August 2020 and 29 November 2021. The headlines were collected using the [News API](https://newsapi.org/) by collecting the top headlines (usually about 30-70 articles) separately for each combination of the 54 locations x 7 categories almost every day. Note, that the same article may occur more than once across different locations, categories, or dates (which is recorded in the `instances` property), so in total 7,419,089 instances were collected.
For non-English articles, the article data is translated to English using Google Translate (see `en-title`, `en-description`, and `en-content` properties).
The dataset is provided as 54 JSON files, one per location, each containing all the unique headlines that first appeared in that location. Each headline is represented as a JSON object with the following properties:
- `ID`: (integer) a unique ID for each article
- `title`: (string) the headline text in the original language
- `description`: (string) the article description in the original language
- `content`: (string) the first few words of the article in the original language
- `author`: (string) the author of the article
- `source-id`: (string) the news aggregator (e.g. Google-News)
- `source-name`: (string) usually the domain of the source where the article was published
- `url`: (string) the URL of the article
- `urlToImage`: (string) the URL to an image associated with the article
- `publishedAt`: (date) when the article was published
- `instances`: (list) specific time and place where this article was posted. Each element contains:
- `collectedAt`: (date) date and time when the article was collected
- `category`: (string) the category of the article, one of 7 possible values (see below for the full list)
- `location`: (string) the location of the article, one of 54 possible values (see below for the full list)
- `language`: (string) ISO-639 2-letter code for the language (inferred from location)
- `en-title`: (string) the headline text translated to English (if necessary)
- `en-description`: (string) the article description text translated to English (if necessary)
- `en-content`: (string) the first few words of the article translated to English (if necessary)
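As a minimal sketch of working with this schema (the record below is a toy example following the property list above, not a real article), one can parse a location file with the standard `json` module and tally instances by category:

```python
import json
from collections import Counter

# Toy record following the schema above (not a real article).
record = json.loads("""
{
  "ID": 1,
  "title": "Example headline",
  "en-title": "Example headline",
  "instances": [
    {"collectedAt": "2021-01-01", "category": "sports", "location": "us", "language": "en"},
    {"collectedAt": "2021-01-02", "category": "sports", "location": "ca", "language": "en"}
  ]
}
""")

# Count instances per category, as one would across a full location file.
by_category = Counter(inst["category"] for inst in record["instances"])
print(by_category["sports"])  # 2
```

Over a real location file, the same loop would run across the list of article objects instead of a single record.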
## Notes
- Unfortunately, due to an issue with News API, the `content` of articles originally in a non-Latin script (e.g. Chinese, Arabic, Japanese, Greek, Russian, etc.) is usually not available. However, for the most part, all other articles should have a meaningful `content` property, and the `title` and `description` appear unaffected.
- All properties except `language`, `en-title`, `en-description`, and `en-content` are taken directly from the News API responses. The language is inferred from the location, and the English translations are collected using Google Translate.
## Statistics
Here are a few basic summary statistics about the dataset.
### Articles by Language
| Code | Language | Articles | Locations |
|--------|------------|------------|----------------------------------------------------|
| en | English | 1128233 | au, ca, gb, ie, in, my, ng, nz, ph, sa, sg, us, za |
| es | Spanish | 455952 | ar, co, cu, mx, ve |
| fr | French | 288328 | be, fr, ma |
| zh | Chinese | 270887 | cn, hk, tw |
| de | German | 259718 | at, ch, de |
| pt | Portuguese | 243829 | br, pt |
| ar | Arabic | 178854 | ae, eg |
| id | Indonesian | 131252 | id |
| it | Italian | 129005 | it |
| tr | Turkish | 122724 | tr |
| el | Greek | 119940 | gr |
| ja | Japanese | 118475 | jp |
| pl | Polish | 116904 | pl |
| ru | Russian | 113395 | ru |
| nl | Dutch | 104031 | nl |
| th | Thai | 90708 | th |
| sv | Swedish | 86838 | se |
| ko | Korean | 83090 | kr |
| sr | Serbian | 80040 | rs |
| hu | Hungarian | 73509 | hu |
| cs | Czech | 70647 | cz |
| he | Hebrew | 67794 | il |
| bg | Bulgarian | 67223 | bg |
| uk | Ukrainian | 65610 | ua |
| ro | Romanian | 54601 | ro |
| no | Norwegian | 46804 | no |
| sk | Slovak | 43057 | sk |
| lv | Latvian | 40006 | lv |
| lt | Lithuanian | 34719 | lt |
| sl | Slovenian | 33026 | si |
### Instances by category
| Category | Instances |
|---------------|-------------|
| sports | 1132542 |
| entertainment | 982479 |
| business | 840748 |
| technology | 802933 |
| general | 704692 |
| health | 424188 |
| science | 388281 |
### Instances by location
| Code | Location | Instances |
|--------|----------------------|-------------|
| ae | United Arab Emirates | 214256 |
| ar | Argentina | 159139 |
| ph | Philippines | 155365 |
| ng | Nigeria | 155112 |
| in | India | 145536 |
| us | United States | 144800 |
| ca | Canada | 143928 |
| sa | Saudi Arabia | 143382 |
| cu | Cuba | 138675 |
| au | Australia | 138408 |
| br | Brazil | 136101 |
| ma | Morocco | 131974 |
| id | Indonesia | 131252 |
| eg | Egypt | 129382 |
| it | Italy | 129005 |
| gb | United Kingdom | 127391 |
| ie | Ireland | 126640 |
| mx | Mexico | 124499 |
| tr | Turkey | 122724 |
| gr | Greece | 119940 |
| de | Germany | 119917 |
| jp | Japan | 118475 |
| za | South Africa | 117351 |
| fr | France | 117210 |
| pl | Poland | 116904 |
| pt | Portugal | 115976 |
| co | Colombia | 115325 |
| my | Malaysia | 115223 |
| ru | Russian Federation | 113395 |
| at | Austria | 111867 |
| nz | New Zealand | 108809 |
| tw | Taiwan | 108652 |
| nl | Netherlands | 104031 |
| sg | Singapore | 101251 |
| be | Belgium | 99460 |
| cn | China | 91561 |
| ve | Venezuela | 91045 |
| th | Thailand | 90708 |
| se | Sweden | 86838 |
| kr | Korea | 83090 |
| hk | Hong Kong | 83051 |
| rs | Serbia | 80040 |
| hu | Hungary | 73509 |
| cz | Czechia | 70647 |
| ch | Switzerland | 68846 |
| il | Israel | 67794 |
| bg | Bulgaria | 67223 |
| ua | Ukraine | 65610 |
| ro | Romania | 54601 |
| no | Norway | 46804 |
| sk | Slovakia | 43057 |
| lv | Latvia | 40006 |
| lt | Lithuania | 34719 |
| si | Slovenia | 33026 |"
luizapzbn/from-one-to-many-toxicity-mitigation,"{""license"": ""apache-2.0"", ""task_categories"": [""text-generation"", ""text-classification""], ""language"": [""en"", ""pt"", ""hi"", ""it"", ""fr"", ""ru"", ""ar"", ""ko"", ""es""], ""tags"": [""harmful"", ""toxic""]}","# From One to Many: Expanding the Scope of Toxicity Mitigation in Language Models
[[arxiv]](https://arxiv.org/pdf/2403.03893)[[code]](https://github.com/for-ai/goodtriever)[[data]](https://huggingface.co/datasets/luizapzbn/from-one-to-many-toxicity-mitigation)
Data accompanying the paper ""From One to Many: Expanding the Scope of Toxicity Mitigation in Language Models"" accepted to ACL Findings 2024.
_Abstract_: To date, toxicity mitigation in language models has almost entirely been focused on single-language settings. As language models embrace multilingual capabilities, it’s crucial our safety measures keep pace. Recognizing this research gap, our approach expands the scope of conventional toxicity mitigation to address the complexities presented by multiple languages. In the absence of sufficient annotated datasets across languages, we employ translated data to evaluate and enhance our mitigation techniques. We also compare finetuning mitigation approaches against retrieval-augmented techniques under both static and continual toxicity mitigation scenarios. This allows us to examine the effects of translation quality and the cross-lingual transfer on toxicity mitigation. We also explore how model size and data quantity affect the success of these mitigation efforts. Covering nine languages, our study represents a broad array of linguistic families and levels of resource availability, ranging from high to mid-resource languages. Through comprehensive experiments, we provide insights into the complexities of multilingual toxicity mitigation, offering valuable insights and paving the way for future research in this increasingly important field.
## Dataset Description
- **Language(s) (NLP):** English, Portuguese, Spanish, Italian, French, Russian, Arabic, Hindi, Korean
- **License:** This dataset is a translation of existing datasets. Each dataset's original license applies. For more details see the ""Source Data"" section.
## Dataset Structure
- train:
- jigsaw_english: original Jigsaw Unintended Bias dataset in the English language.
- multilingual:
- jigsaw_multilingual: in-language examples from the Jigsaw Multilingual Toxicity classification challenge.
- translated_jigsaw_english: translated samples from the Jigsaw Unintended Bias Challenge. Original samples are in the ""jigsaw_english"" folder one level up.
- full_sized: translations of the jigsaw dataset in its entirety
- minimal: for our main experiments, we selected ~3K (or ~3.5K) toxic and ~10K non-toxic samples. Here are those subsets, translated with the NLLB 600M model.
- nllb1.3b: the same subset of data for all languages, but translated with the NLLB 1.3B model (higher translation quality)
- m2m: the same subset of data for all languages, but translated with the M2M 418M model (lower translation quality)
- different_subsets: we selected different subsets for each of the languages (unparalleled content) and translated them with NLLB 600M
- bleu_subset: samples used to compute BLEU scores for the paper
- eval: a random subset of 200 samples of holistic bias (English) translated with Google Translate to each of the target languages. The contents are the same across all languages.
- _hi: the eval set of the high-resource language experiments
- _mid: the eval set of the mid-resource language experiments
- individual: folder with the individual samples for each language
- results: all of the model generations and experiments from the paper. To be used with the results notebook to generate plots (about 15GB of data).
## Source Data
The datasets from this repository are subsets or translations of three others:
- [jigsaw multilingual toxicity classification](https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification)
- [jigsaw unintended bias (english)](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification)
- [holistic bias](https://arxiv.org/abs/2205.09209)
## Bias, Risks, and Limitations
To generate these datasets, we leveraged machine translation. There are inherent risks of either increasing or reducing existing toxicity from the original sentences due to this processing.
The datasets contain toxic sentences that might be used to make models more toxic. This usage is highly discouraged by the authors and the original purpose of this dataset is to make models less harmful.
## Citation
```
@article{pozzobon2024one,
title={From One to Many: Expanding the Scope of Toxicity Mitigation in Language Models},
author={Pozzobon, Luiza and Lewis, Patrick and Hooker, Sara and Ermis, Beyza},
journal={arXiv preprint arXiv:2403.03893},
year={2024}
}
```"
zhihz0535/X-AlpacaEval,"{""license"": ""cc-by-nc-4.0"", ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""english"", ""path"": ""english.json""}, {""split"": ""chinese"", ""path"": ""chinese.json""}, {""split"": ""korean"", ""path"": ""korean.json""}, {""split"": ""italian"", ""path"": ""italian.json""}, {""split"": ""spanish"", ""path"": ""spanish.json""}]}], ""task_categories"": [""text-generation"", ""conversational""], ""language"": [""en"", ""zh"", ""ko"", ""it"", ""es""], ""size_categories"": [""1K [/INST]' (to encourage the model to emit when finished a response)
- if a row of data ends with an assistant response, then `[INST]` was additionally appended to the end of that row.
Details of the root dataset follow, copied from that repo:
# OpenAssistant Conversations Dataset (OASST1)
## Dataset Description
- **Homepage:** https://www.open-assistant.io/
- **Repository:** https://github.com/LAION-AI/Open-Assistant
- **Paper:** https://arxiv.org/abs/2304.07327
### Dataset Summary
In an effort to democratize research on large-scale alignment, we release OpenAssistant
Conversations (OASST1), a human-generated, human-annotated assistant-style conversation
corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292
quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus
is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
Please refer to our [paper](https://arxiv.org/abs/2304.07327) for further details.
### Dataset Structure
This dataset contains message trees. Each message tree has an initial prompt message as the root node,
which can have multiple child messages as replies, and these child messages can have multiple replies.
All messages have a role property: this can either be ""assistant"" or ""prompter"". The roles in
conversation threads from prompt to leaf node strictly alternate between ""prompter"" and ""assistant"".
This version of the dataset contains data collected on the [open-assistant.io](https://open-assistant.io/) website until April 12 2023.
### JSON Example: Message
For readability, the following JSON examples are shown formatted with indentation on multiple lines.
Objects are stored without indentation (on single lines) in the actual jsonl files.
```json
{
""message_id"": ""218440fd-5317-4355-91dc-d001416df62b"",
""parent_id"": ""13592dfb-a6f9-4748-a92c-32b34e239bb4"",
""user_id"": ""8e95461f-5e94-4d8b-a2fb-d4717ce973e4"",
""text"": ""It was the winter of 2035, and artificial intelligence (..)"",
""role"": ""assistant"",
""lang"": ""en"",
""review_count"": 3,
""review_result"": true,
""deleted"": false,
""rank"": 0,
""synthetic"": true,
""model_name"": ""oasst-sft-0_3000,max_new_tokens=400 (..)"",
""labels"": {
""spam"": { ""value"": 0.0, ""count"": 3 },
""lang_mismatch"": { ""value"": 0.0, ""count"": 3 },
""pii"": { ""value"": 0.0, ""count"": 3 },
""not_appropriate"": { ""value"": 0.0, ""count"": 3 },
""hate_speech"": { ""value"": 0.0, ""count"": 3 },
""sexual_content"": { ""value"": 0.0, ""count"": 3 },
""quality"": { ""value"": 0.416, ""count"": 3 },
""toxicity"": { ""value"": 0.16, ""count"": 3 },
""humor"": { ""value"": 0.0, ""count"": 3 },
""creativity"": { ""value"": 0.33, ""count"": 3 },
""violence"": { ""value"": 0.16, ""count"": 3 }
}
}
```
### JSON Example: Conversation Tree
For readability, only a subset of the message properties is shown here.
```json
{
""message_tree_id"": ""14fbb664-a620-45ce-bee4-7c519b16a793"",
""tree_state"": ""ready_for_export"",
""prompt"": {
""message_id"": ""14fbb664-a620-45ce-bee4-7c519b16a793"",
""text"": ""Why can't we divide by 0? (..)"",
""role"": ""prompter"",
""lang"": ""en"",
""replies"": [
{
""message_id"": ""894d30b6-56b4-4605-a504-89dd15d4d1c8"",
""text"": ""The reason we cannot divide by zero is because (..)"",
""role"": ""assistant"",
""lang"": ""en"",
""replies"": [
// ...
]
},
{
""message_id"": ""84d0913b-0fd9-4508-8ef5-205626a7039d"",
""text"": ""The reason that the result of a division by zero is (..)"",
""role"": ""assistant"",
""lang"": ""en"",
""replies"": [
{
""message_id"": ""3352725e-f424-4e3b-a627-b6db831bdbaa"",
""text"": ""Math is confusing. Like those weird Irrational (..)"",
""role"": ""prompter"",
""lang"": ""en"",
""replies"": [
{
""message_id"": ""f46207ca-3149-46e9-a466-9163d4ce499c"",
""text"": ""Irrational numbers are simply numbers (..)"",
""role"": ""assistant"",
""lang"": ""en"",
""replies"": []
},
// ...
]
}
]
}
]
}
}
```
Please refer to [oasst-data](https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data) for
details about the data structure and Python code to read and write jsonl files containing oasst data objects.
If you would like to explore the dataset yourself you can find a
[`getting-started`](https://github.com/LAION-AI/Open-Assistant/blob/main/notebooks/openassistant-oasst1/getting-started.ipynb)
notebook in the `notebooks/openassistant-oasst1` folder of the [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
github repository.
## Main Dataset Files
Conversation data is provided either as nested messages in trees (extension `.trees.jsonl.gz`)
or as a flat list (table) of messages (extension `.messages.jsonl.gz`).
### Ready For Export Trees
```
2023-04-12_oasst_ready.trees.jsonl.gz 10,364 trees with 88,838 total messages
2023-04-12_oasst_ready.messages.jsonl.gz 88,838 messages
```
Trees in `ready_for_export` state without spam and deleted messages including message labels.
The oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training.
### All Trees
```
2023-04-12_oasst_all.trees.jsonl.gz 66,497 trees with 161,443 total messages
2023-04-12_oasst_all.messages.jsonl.gz 161,443 messages
```
All trees, including those in states `prompt_lottery_waiting` (trees that consist of only one message, namely the initial prompt),
`aborted_low_grade` (trees that stopped growing because the messages had low quality), and `halted_by_moderator`.
### Supplemental Exports: Spam & Prompts
```
2023-04-12_oasst_spam.messages.jsonl.gz
```
These are messages which were deleted or have a negative review result (`""review_result"": false`).
Besides low quality, a frequent reason for message deletion is a wrong language tag.
```
2023-04-12_oasst_prompts.messages.jsonl.gz
```
These are all the kept initial prompt messages with positive review result (no spam) of trees in `ready_for_export` or `prompt_lottery_waiting` state.
### Using the Huggingface Datasets
While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.
Nevertheless, we make all messages which can also be found in the file `2023-04-12_oasst_ready.trees.jsonl.gz` available in parquet as train/validation splits.
These are directly loadable by [Huggingface Datasets](https://pypi.org/project/datasets/).
To load the oasst1 train & validation splits use:
```python
from datasets import load_dataset
ds = load_dataset(""OpenAssistant/oasst1"")
train = ds['train'] # len(train)=84437 (95%)
val = ds['validation'] # len(val)=4401 (5%)
```
The messages appear in depth-first order of the message trees.
Full conversation trees can be reconstructed from the flat messages table by using the `parent_id`
and `message_id` properties to identify the parent-child relationship of messages. The `message_tree_id`
and `tree_state` properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.
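As a minimal sketch of that reconstruction (using a toy in-memory message list in place of the flat messages table, with only the relevant fields shown):

```python
from collections import defaultdict

# Toy flat messages; in practice these come from the messages jsonl/parquet files.
messages = [
    {"message_id": "root", "parent_id": None, "text": "Why can't we divide by 0?"},
    {"message_id": "a1", "parent_id": "root", "text": "Because ..."},
    {"message_id": "a2", "parent_id": "root", "text": "The reason ..."},
    {"message_id": "p1", "parent_id": "a2", "text": "Math is confusing."},
]

# Index children by parent_id, then recursively nest replies under each message.
children = defaultdict(list)
for m in messages:
    children[m["parent_id"]].append(m)

def build_tree(msg):
    return {**msg, "replies": [build_tree(c) for c in children[msg["message_id"]]]}

# Roots are the messages with no parent (the initial prompts).
trees = [build_tree(root) for root in children[None]]
print(len(trees[0]["replies"]))  # 2
```

The same parent/child indexing applies unchanged to the real `parent_id` and `message_id` UUIDs.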
### Languages
OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:
**Languages with over 1000 messages**
- English: 71956
- Spanish: 43061
- Russian: 9089
- German: 5279
- Chinese: 4962
- French: 4251
- Thai: 3042
- Portuguese (Brazil): 2969
- Catalan: 2260
- Korean: 1553
- Ukrainian: 1352
- Italian: 1320
- Japanese: 1018
**Languages with under 1000 messages**
- Vietnamese: 952
- Basque: 947
- Polish: 886
- Hungarian: 811
- Arabic: 666
- Dutch: 628
- Swedish: 512
- Turkish: 454
- Finnish: 386
- Czech: 372
- Danish: 358
- Galician: 339
- Hebrew: 255
- Romanian: 200
- Norwegian Bokmål: 133
- Indonesian: 115
- Bulgarian: 95
- Bengali: 82
- Persian: 72
- Greek: 66
- Esperanto: 59
- Slovak: 19
## Contact
- Discord [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai)"
kuotient/orca-math-korean-preference,"{""dataset_info"": {""features"": [{""name"": ""llm"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""question_en"", ""dtype"": ""string""}, {""name"": ""answer_en"", ""dtype"": ""string""}, {""name"": ""generated"", ""dtype"": ""string""}, {""name"": ""label"", ""dtype"": ""bool""}, {""name"": ""chosen"", ""dtype"": ""string""}, {""name"": ""rejected"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1056866134, ""num_examples"": 192848}], ""download_size"": 388808584, ""dataset_size"": 1056866134}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""license"": ""cc-by-sa-4.0"", ""language"": [""ko""], ""size_categories"": [""10K>"") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f""Cohere/wikipedia-22-12-ko-embeddings"", split=""train"", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print(""Query:"", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], ""\n"")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance)"
kyujinpy/KoCommercial-NoSSL,"{""language"": [""ko""], ""license"": ""cc-by-nc-sa-4.0"", ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""dataset_info"": {""features"": [{""name"": ""input"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 187990458, ""num_examples"": 175454}], ""download_size"": 110149618, ""dataset_size"": 187990458}}","# Dataset for kyujinpy/KoCommercial-NoSSL
## Info
**Number of examples:** about 175K
**License:** CC-BY-NC-4.0 (*each dataset used in the merge is individually available for commercial use.)
**Dataset list (all usable for commercial purposes)**
1. [kyujinpy/KOpen-platypus](kyujinpy/KOpen-platypus) (*Except non-commercial datasets)
2. [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a)
3. [HumanF-MarkrAI/WIKI_QA_Near_dedup](https://huggingface.co/datasets/HumanF-MarkrAI/WIKI_QA_Near_dedup)
4. [KorQuadv1.0](https://korquad.github.io/KorQuad%201.0/)
# Another Dataset
- [kyujinpy/KoCommercial-SSL](https://huggingface.co/datasets/kyujinpy/KoCommercial-SSL).
- [MarkrAI/KoCommercial-Dataset](https://huggingface.co/datasets/MarkrAI/KoCommercial-Dataset)."
jp1924/VisualQuestionAnswering,{},"---
language:
- ko
size_categories:
- 10B
Each dataset has two columns, `sourceString` and `targetString`, which correspond to the Japanese and Korean sentences, respectively.
Check [example code](https://huggingface.co/datasets/sappho192/Tatoeba-Challenge-jpn-kor/blob/main/example.ipynb) to learn how to load the dataset.
## Dataset Creation
### Personal and Sensitive Information
This dataset may contain the following kinds of inappropriate or explicit content:
- personal
- sensitive
- private
- data that reveals addresses
- uniquely identifiable names or aliases
- racial or ethnic origins
- sexual orientations
- religious beliefs
- political opinions
- financial or health data
- etc.
So use at your own risk.
## Citation
**BibTeX:**
```bibtex
@inproceedings{tiedemann-2020-tatoeba,
title = ""The {T}atoeba {T}ranslation {C}hallenge {--} {R}ealistic Data Sets for Low Resource and Multilingual {MT}"",
author = {Tiedemann, J{\""o}rg},
booktitle = ""Proceedings of the Fifth Conference on Machine Translation"",
month = nov,
year = ""2020"",
address = ""Online"",
publisher = ""Association for Computational Linguistics"",
url = ""https://www.aclweb.org/anthology/2020.wmt-1.139"",
pages = ""1174--1182""
}
```
## Dataset Card Authors
[sappho192](https://huggingface.co/sappho192)
## Dataset Card Contact
Please create a thread in the community."
shreyanshu09/BD-EnKo,"{""license"": ""mit"", ""dataset_info"": {""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""image"", ""dtype"": ""image""}, {""name"": ""ground_truth"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 9616619571.478, ""num_examples"": 75034}, {""name"": ""validation"", ""num_bytes"": 746918710.6, ""num_examples"": 8360}], ""download_size"": 2177400123, ""dataset_size"": 10363538282.078001}, ""language"": [""en"", ""ko""], ""tags"": [""block diagrams""], ""size_categories"": [""10K>> from datasets import load_dataset
>>> dataset = load_dataset(""Bingsu/laion-translated-to-en-korean-subset"")
>>> dataset
DatasetDict({
train: Dataset({
features: ['hash', 'URL', 'TEXT', 'ENG TEXT', 'WIDTH', 'HEIGHT', 'LANGUAGE', 'similarity', 'pwatermark', 'punsafe', 'AESTHETIC_SCORE'],
num_rows: 12769693
})
})
```
```py
>>> dataset[""train""].features
{'hash': Value(dtype='int64', id=None),
'URL': Value(dtype='large_string', id=None),
'TEXT': Value(dtype='large_string', id=None),
'ENG TEXT': Value(dtype='large_string', id=None),
'WIDTH': Value(dtype='int32', id=None),
'HEIGHT': Value(dtype='int32', id=None),
'LANGUAGE': Value(dtype='large_string', id=None),
'similarity': Value(dtype='float32', id=None),
'pwatermark': Value(dtype='float32', id=None),
'punsafe': Value(dtype='float32', id=None),
'AESTHETIC_SCORE': Value(dtype='float32', id=None)}
```
### Data Size
download: 1.40 GiB
generated: 3.49 GiB
total: 4.89 GiB
### Data Field
- 'hash': `int`
- 'URL': `string`
- 'TEXT': `string`
- 'ENG TEXT': `string`, null data are dropped
- 'WIDTH': `int`, null data are filled with 0
- 'HEIGHT': `int`, null data are filled with 0
- 'LICENSE': `string`
- 'LANGUAGE': `string`
- 'similarity': `float32`, CLIP similarity score, null data are filled with 0.0
- 'pwatermark': `float32`, Probability of containing a watermark, null data are filled with 0.0
- 'punsafe': `float32`, Probability of nsfw image, null data are filled with 0.0
- 'AESTHETIC_SCORE': `float32`, null data are filled with 0.0
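Because nulls are filled with 0, a 0.0 in `pwatermark` or `punsafe` can mean either "missing" or "scored 0". A minimal sketch of filtering with that caveat in mind (toy rows, not real data; the thresholds are illustrative):

```python
# Toy rows mirroring the fields above (null scores already filled with 0.0).
rows = [
    {"URL": "a.jpg", "pwatermark": 0.0, "punsafe": 0.1, "AESTHETIC_SCORE": 5.2},
    {"URL": "b.jpg", "pwatermark": 0.9, "punsafe": 0.0, "AESTHETIC_SCORE": 6.0},
    {"URL": "c.jpg", "pwatermark": 0.2, "punsafe": 0.05, "AESTHETIC_SCORE": 0.0},
]

# Keep rows unlikely to be watermarked or unsafe; note that filled-in 0.0
# values (missing scores) pass this filter by construction.
kept = [r for r in rows if r["pwatermark"] < 0.5 and r["punsafe"] < 0.5]
print([r["URL"] for r in kept])  # ['a.jpg', 'c.jpg']
```

The same predicate translates directly to a polars `filter` expression on the full parquet file.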
### Data Splits
| | train |
| --------- | -------- |
| # of data | 12769693 |
### polars
```sh
pip install polars[fsspec]
```
```py
import polars as pl
from huggingface_hub import hf_hub_url
url = hf_hub_url(""Bingsu/laion-translated-to-en-korean-subset"", filename=""train.parquet"", repo_type=""dataset"")
# url = ""https://huggingface.co/datasets/Bingsu/laion-translated-to-en-korean-subset/resolve/main/train.parquet""
df = pl.read_parquet(url)
```
pandas broke my colab session."
Bingsu/arcalive_220506,"{""annotations_creators"": [""no-annotation""], ""language_creators"": [""crowdsourced""], ""language"": [""ko""], ""license"": [""cc0-1.0""], ""multilinguality"": [""monolingual""], ""paperswithcode_id"": null, ""pretty_name"": ""arcalive_210816_220506"", ""size_categories"": [""100K>> from datasets import load_dataset
>>>
>>> data = load_dataset(""Bingsu/arcalive_220506"")
>>> data[""train""].features
{'text': Value(dtype='string', id=None)}
```
```pycon
>>> data[""train""][0]
{'text': '오오오오...'}
```"
shreyanshu09/Block_Diagram,"{""license"": ""mit"", ""dataset_info"": {""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""image"", ""dtype"": ""image""}, {""name"": ""ground_truth"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 5038039728.815, ""num_examples"": 76263}, {""name"": ""validation"", ""num_bytes"": 833810548.666, ""num_examples"": 8662}], ""download_size"": 2276849227, ""dataset_size"": 5871850277.481}, ""language"": [""en"", ""ko""], ""tags"": [""block diagram""], ""size_categories"": [""10K`, ``).
## Supported Tasks and Leaderboards
The dataset was developed for intermediate pre-training of language models.
In the paper we further fine-tune models on entity-centric downstream tasks, such as NER.
## Languages
The dataset covers 93 languages in total, including English.
## Data Statistics
| Statistic | Count |
|:------------------------------|------------:|
| Languages | 93 |
| English Sentences | 54,469,214 |
| English Entities | 104,593,076 |
| Average Sentence Length | 23.37 |
| Average Entities per Sentence | 2 |
| CS Sentences per EN Sentence | ≤ 5 |
| CS Sentences | 231,124,422 |
| CS Entities | 420,907,878 |
## Data Fields
Each instance contains 4 fields:
- `id`: Unique ID of each sentence
- `language`: The language of choice for entity code-switching of the given sentence
- `en_sentence`: The original English sentence
- `cs_sentence`: The code-switched sentence
In the case of the English subset, the `cs_sentence` field does not exist as the sentences are not code-switched.
An example of what a data instance looks like:
```
{
'id': 19,
'en_sentence': 'The subs then enter a coral reef with many bright reflective colors.',
'cs_sentence': 'The subs then enter a Korallenriff with many bright reflective colors.',
'language': 'de'
}
```
## Data Splits
There is a single data split for each language. You can randomly select a few examples from each language to serve as a validation set.
## Limitations
An important limitation of this work is that entities are code-switched without checking their morphological inflection.
The form of a switched entity may therefore disagree with its surrounding context (e.g. in number).
Since only entities are switched, such cases should be rare; still, this should be improved in a later version of the corpus.
Secondly, the diversity of languages used to construct the EntityCS corpus is restricted to the overlap between the languages available in WikiData and those in XLM-R pre-training.
This choice was made to allow a fairer comparison between models; however, the corpus can be extended with languages that XLM-R does not cover, following
the procedure presented in the paper.
## Citation
**BibTeX**
```bibtex
@inproceedings{whitehouse-etal-2022-entitycs,
title = ""{E}ntity{CS}: Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching"",
author = ""Whitehouse, Chenxi and
Christopoulou, Fenia and
Iacobacci, Ignacio"",
booktitle = ""Findings of the Association for Computational Linguistics: EMNLP 2022"",
month = dec,
year = ""2022"",
address = ""Abu Dhabi, United Arab Emirates"",
publisher = ""Association for Computational Linguistics"",
url = ""https://aclanthology.org/2022.findings-emnlp.499"",
pages = ""6698--6714""
}
```
**APA**
```
Whitehouse, C., Christopoulou, F., & Iacobacci, I. (2022). EntityCS: Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching. In Findings of the Association for Computational Linguistics: EMNLP 2022.
```"
werty1248/EnKo-Translation-LongTextOnly-dedup,"{""task_categories"": [""translation""], ""language"": [""ko"", ""en""]}","### Long-form translation data only
- Keeps only pairs whose combined English + Korean token count, measured with the [gemma](https://huggingface.co/google/gemma-7b) tokenizer, is at least 1K
- Number of examples
  - 1K~2K: 146,957
  - 2K~4K: 11,823
  - 4K~: 2,229
- Pairs where only one of the two sides (Korean or English) is a duplicate were not removed.
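The 1K-token length filter can be sketched as follows; `str.split` stands in here for the gated google/gemma-7b tokenizer, whose `tokenize` method would be passed in the same way:

```python
def token_bucket(en_text, ko_text, tokenize):
    """Bucket a translation pair by combined token count, mirroring the
    1K/2K/4K split above. `tokenize` is any callable returning a token
    list, e.g. AutoTokenizer.from_pretrained("google/gemma-7b").tokenize."""
    n = len(tokenize(en_text)) + len(tokenize(ko_text))
    if n < 1000:
        return None        # dropped: below the 1K threshold
    if n < 2000:
        return "1K~2K"
    if n < 4000:
        return "2K~4K"
    return "4K~"

# Whitespace tokenization as a stand-in for the real tokenizer
print(token_bucket("word " * 600, "word " * 600, str.split))  # 1K~2K
```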
### Data sources
- [nayohan/aihub-en-ko-translation-12m](https://huggingface.co/datasets/nayohan/aihub-en-ko-translation-12m)
- [nayohan/instruction_en_ko_translation_1.4m](https://huggingface.co/datasets/nayohan/instruction_en_ko_translation_1.4m)
- [jhflow/orca_ko_en_pair](https://huggingface.co/datasets/jhflow/orca_ko_en_pair)
- [jhflow/platypus_ko_en_pair](https://huggingface.co/datasets/jhflow/platypus_ko_en_pair)
- [jhflow/dolly_ko_en_pair](https://huggingface.co/datasets/jhflow/dolly_ko_en_pair)
- [heegyu/OIG-small-chip2-ko](https://huggingface.co/datasets/heegyu/OIG-small-chip2-ko)
- [lemon-mint/en_ko_translation_purified_v0.1](https://huggingface.co/datasets/lemon-mint/en_ko_translation_purified_v0.1)
- [squarelike/sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation)
- [amphora/parallel-wiki-koen](https://huggingface.co/datasets/amphora/parallel-wiki-koen)
- [kuotient/gsm8k-ko](https://huggingface.co/datasets/kuotient/gsm8k-ko)
- [kuotient/orca-math-word-problems-193k-korean](https://huggingface.co/datasets/kuotient/orca-math-word-problems-193k-korean)
### Distribution of data sources


"
lmqg/qag_koquad,"{""license"": ""cc-by-sa-4.0"", ""pretty_name"": ""SQuAD for question generation"", ""language"": ""ko"", ""multilinguality"": ""monolingual"", ""size_categories"": ""1k
## Data Release
### Synthetic Data Samples
To facilitate research in persona-driven data synthesis, we are initially releasing the following synthetic data samples created with various personas, including:
* **50,000 math problems**
* **50,000 logical reasoning problems**
* **50,000 instructions**
* **10,000 knowledge-rich texts**
* **10,000 game NPCs**
* **5,000 tools (functions)**
### Persona Hub
We also release a subset of our PERSONA HUB, including:
* **200,000 personas**
## Run Demo
One can try the demo to synthesize data with PERSONA HUB simply by running the code from https://github.com/tencent-ailab/persona-hub:
```bash
# ensure that you have installed datasets and openai (pip install datasets openai) and configured the openai_api_key before running
bash demo_openai_synthesize.sh # using gpt4o to synthesize data with PERSONA HUB
```
or
```bash
# ensure that you have installed datasets, transformers and vllm (pip install datasets transformers vllm) before running
bash demo_vllm_synthesize.sh # using open-sourced models to synthesize data with PERSONA HUB
```
Note that the data synthesis prompt templates we provide are for reference only. You can customize your desired prompts in `code/prompt_templates.py`.
## Argilla
You can also access this dataset in [Argilla space](https://argilla-data-explorers.hf.space/), as introduced in the following video:
* Video: https://youtu.be/timmCn8Nr6g?feature=shared
## Contact
* Please email `xinchan@global.tencent.com` or `dyu@global.tencent.com`
* Github page: https://github.com/tencent-ailab/persona-hub
## Disclaimer
PERSONA HUB can facilitate synthetic data creation at a billion-scale to simulate diverse inputs (i.e., use cases) from a wide variety of real-world users. If this data is used as input to query a target LLM to obtain its outputs at scale, there is a high risk that the LLM's knowledge, intelligence and capabilities will be dumped and easily replicated, thereby challenging the leading position of the most powerful LLMs. It is crucial to avoid misuse and ensure ethical and responsible application to prevent privacy violations and other ethical concerns.
The released data is all generated by public available models (GPT-4, Llama-3 and Qwen), and is intended for research purposes only. Users also must comply with the respective license agreements and usage policies of these models when using the synthesized data. The data may contain inaccuracies, unsafe content, or biases, for which we cannot be held responsible. Please evaluate its accuracy and suitability before use. Tencent and its licensors provide the data AS-IS, without warranty of any kind, express or implied. The views and opinions expressed in the data do not necessarily reflect those of Tencent."
QubitPi/wiktionary-data,"{""license"": ""apache-2.0"", ""pretty_name"": ""English Wiktionary Data in JSONL"", ""language"": [""en"", ""de"", ""la"", ""grc"", ""ko"", ""peo"", ""akk"", ""elx"", ""sa""], ""configs"": [{""config_name"": ""Wiktionary"", ""data_files"": [{""split"": ""German"", ""path"": ""german-wiktextract-data.jsonl""}, {""split"": ""Latin"", ""path"": ""latin-wiktextract-data.jsonl""}, {""split"": ""AncientGreek"", ""path"": ""ancient-greek-wiktextract-data.jsonl""}, {""split"": ""Korean"", ""path"": ""korean-wiktextract-data.jsonl""}, {""split"": ""OldPersian"", ""path"": ""old-persian-wiktextract-data.jsonl""}, {""split"": ""Akkadian"", ""path"": ""akkadian-wiktextract-data.jsonl""}, {""split"": ""Elamite"", ""path"": ""elamite-wiktextract-data.jsonl""}, {""split"": ""Sanskrit"", ""path"": ""sanskrit-wiktextract-data.jsonl""}]}, {""config_name"": ""Knowledge Graph"", ""data_files"": [{""split"": ""AllLanguage"", ""path"": ""word-definition-graph-data.jsonl""}]}], ""tags"": [""Natural Language Processing"", ""NLP"", ""Wiktionary"", ""Vocabulary"", ""German"", ""Latin"", ""Ancient Greek"", ""Korean"", ""Old Persian"", ""Akkadian"", ""Elamite"", ""Sanskrit"", ""Knowledge Graph""], ""size_categories"": [""100M
> [!TIP]
>
> Two words are structurally similar if and only if they share the same
> [stem](https://en.wikipedia.org/wiki/Word_stem)
Development
-----------
### Data Source
Although [the original Wiktionary dump](https://dumps.wikimedia.org/) is available, parsing it from scratch is a
rather complicated process. For example,
[acquiring the inflection data of most Indo-European languages on Wiktionary has already triggered some research-level efforts](https://stackoverflow.com/a/62977327).
We may do this ourselves in the future. At present, however, we simply build on the excellent work by
[tatuylonen](https://github.com/tatuylonen/wiktextract), which has already processed the dump and published it in
[JSONL format](https://kaikki.org/dictionary/rawdata.html). wiktionary-data sources the data from the
__raw Wiktextract data (JSONL, one object per line)__ option there.
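Since each wiktextract line is a standalone JSON object, the files can be streamed line by line without loading everything into memory; a minimal sketch, using an in-memory stand-in for one of the JSONL files:

```python
import io
import json

def iter_jsonl(fp):
    """Yield one parsed object per non-empty line of a JSONL file."""
    for line in fp:
        line = line.strip()
        if line:
            yield json.loads(line)

# In-memory stand-in for e.g. open("german-wiktextract-data.jsonl");
# the field names here are illustrative, not the full wiktextract schema.
sample = io.StringIO('{"word": "Haus", "lang": "German"}\n'
                     '{"word": "Hund", "lang": "German"}\n')
words = [entry["word"] for entry in iter_jsonl(sample)]
print(words)  # ['Haus', 'Hund']
```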
### Environment Setup
Get the source code:
```console
git clone git@github.com:QubitPi/wiktionary-data.git
cd wiktionary-data
```
It is strongly recommended to work in an isolated environment. Install virtualenv and create an isolated Python
environment by
```console
python3 -m pip install --user -U virtualenv
python3 -m virtualenv .venv
```
To activate this environment:
```console
source .venv/bin/activate
```
or, on Windows
```console
.venv\Scripts\activate
```
> [!TIP]
>
> To deactivate this environment, use
>
> ```console
> deactivate
> ```
### Installing Dependencies
```console
pip3 install -r requirements.txt
```
License
-------
The use and distribution terms for [wiktionary-data]() are covered by the [Apache License, Version 2.0].
[Apache License Badge]: https://img.shields.io/badge/Apache%202.0-F25910.svg?style=for-the-badge&logo=Apache&logoColor=white
[Apache License, Version 2.0]: https://www.apache.org/licenses/LICENSE-2.0
[GitHub workflow status badge]: https://img.shields.io/github/actions/workflow/status/QubitPi/wiktionary-data/ci-cd.yaml?branch=master&style=for-the-badge&logo=github&logoColor=white&label=CI/CD
[GitHub workflow status URL]: https://github.com/QubitPi/wiktionary-data/actions/workflows/ci-cd.yaml
[Hugging Face dataset badge]: https://img.shields.io/badge/Hugging%20Face%20Dataset-wiktionary--data-FF9D00?style=for-the-badge&logo=huggingface&logoColor=white&labelColor=6B7280
[Hugging Face dataset URL]: https://huggingface.co/datasets/QubitPi/wiktionary-data
[Hugging Face sync status badge]: https://img.shields.io/github/actions/workflow/status/QubitPi/wiktionary-data/ci-cd.yaml?branch=master&style=for-the-badge&logo=github&logoColor=white&label=Hugging%20Face%20Sync%20Up
[Hugging Face sync status URL]: https://github.com/QubitPi/wiktionary-data/actions/workflows/ci-cd.yaml
[Python Version Badge]: https://img.shields.io/badge/Python-3.10-FFD845?labelColor=498ABC&style=for-the-badge&logo=python&logoColor=white"
jaejoo/llama-2-ko-law,{},"---
license: apache-2.0
language:
- ko
tags:
- legal
size_categories:
- 1K
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
- **Curated by:** [Gary Benson](https://gbenson.net/)
- **Languages:** Mostly English (87%);
Dutch, French, Chinese, Japanese (1-2% each); 30+ others (<1% each)
- **License:** [CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/)
### Dataset Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
### Direct Use
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Data Collection and Processing
[More Information Needed]
#### Who are the source data producers?
[More Information Needed]
### Annotations [optional]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
#### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
87% of the examples are English.
[More Information Needed]
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]"
taeminlee/CLIcK,"{""task_categories"": [""multiple-choice""], ""language"": [""ko""], ""tags"": [""Culture"", ""Language""], ""size_categories"": [""1K
CLIcK 🇰🇷🧠
Evaluation of Cultural and Linguistic Intelligence in Korean
## Introduction 🎉
CLIcK (Cultural and Linguistic Intelligence in Korean) is a comprehensive dataset designed to evaluate cultural and linguistic intelligence in the context of Korean language models. In an era where diverse language models are continually emerging, there is a pressing need for robust evaluation datasets, especially for non-English languages like Korean. CLIcK fills this gap by providing a rich, well-categorized dataset focusing on both cultural and linguistic aspects, enabling a nuanced assessment of Korean language models.
## News 📰
- **[LREC-COLING]** Our paper introducing CLIcK has been accepted to LREC-COLING 2024!🎉
## Dataset Description 📊
The CLIcK benchmark comprises two broad categories: Culture and Language, which are further divided into 11 fine-grained subcategories.
### Categories 📂
- **Language** 🗣️
- Textual Knowledge
- Grammatical Knowledge
- Functional Knowledge
- **Culture** 🌍
- Korean Society
- Korean Tradition
- Korean Politics
- Korean Economy
- Korean Law
- Korean History
- Korean Geography
- Korean Popular Culture (K-Pop)
### Construction 🏗️
CLIcK was developed using two human-centric approaches:
1. Reclassification of **official and well-designed exam data** into our defined categories.
2. Generation of questions using ChatGPT, based on **official educational materials** from the Korean Ministry of Justice, followed by our own validation process.
### Structure 🏛️
The dataset is organized as follows, with each subcategory containing relevant JSON files:
```
📦CLIcK
└─ Dataset
├─ Culture
│ ├─ [Each cultural subcategory with associated JSON files]
└─ Language
├─ [Each language subcategory with associated JSON files]
```
### Exam Code Descriptions 📜
- KIIP: Korea Immigration & Integration Program ([Website](https://www.immigration.go.kr))
- CSAT: College Scholastic Ability Test for Korean ([Website](https://www.suneung.re.kr/))
- Kedu: Test of Teaching Korean as a Foreign Language exams ([Website](https://www.q-net.or.kr/man001.do?gSite=L&gId=36))
- PSE: Public Service Exam for 9th grade
- TOPIK: Test of Proficiency in Korean ([Website](https://www.topik.go.kr/))
- KHB: Korean History Exam Basic ([Website](https://www.historyexam.go.kr/))
- PSAT: Public Service Aptitude Test in Korea
## Results
| Models | Average Accuracy (Korean Culture) | Average Accuracy (Korean Language) |
|-------------------|-----------------------------------|------------------------------------|
| Polyglot-Ko 1.3B | 32.71% | 22.88% |
| Polyglot-Ko 3.8B | 32.90% | 22.38% |
| Polyglot-Ko 5.8B | 33.14% | 23.27% |
| Polyglot-Ko 12.8B | 33.40% | 22.24% |
| KULLM 5.8B | 33.79% | 23.50% |
| KULLM 12.8B | 33.51% | 23.78% |
| KoAlpaca 5.8B | 32.33% | 23.87% |
| KoAlpaca 12.8B | 33.80% | 22.42% |
| LLaMA-Ko 7B | 33.26% | 25.69% |
| LLaMA 7B | 35.44% | 27.17% |
| LLaMA 13B | **36.22%** | **26.71%** |
| GPT-3.5 | 49.30% | 42.32% |
| Claude2 | **51.72%** | **45.39%** |
## Dataset Link 🔗
The CLIcK dataset is available on the Hugging Face Hub: [CLIcK Dataset](https://huggingface.co/datasets/your_username/CLIcK)
## Citation 📝
If you use CLIcK in your research, please cite our paper:
```bibtex
@misc{kim2024click,
title={CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean},
author={Eunsu Kim and Juyoung Suk and Philhoon Oh and Haneul Yoo and James Thorne and Alice Oh},
year={2024},
eprint={2403.06412},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact 📧
For any questions or inquiries, please contact [kes0317@kaist.ac.kr](mailto:kes0317@kaist.ac.kr)."
izhx/mewsli-x,"{""language"": [""af"", ""ar"", ""az"", ""bg"", ""bn"", ""de"", ""el"", ""en"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fr"", ""gu"", ""he"", ""hi"", ""ht"", ""hu"", ""id"", ""it"", ""ja"", ""jv"", ""ka"", ""kk"", ""ko"", ""lt"", ""ml"", ""mr"", ""ms"", ""my"", ""nl"", ""pa"", ""pl"", ""pt"", ""qu"", ""ro"", ""ru"", ""sw"", ""ta"", ""te"", ""th"", ""tl"", ""tr"", ""uk"", ""ur"", ""vi"", ""wo"", ""yo"", ""zh""], ""license"": ""apache-2.0"", ""pretty_name"": ""Mewsli-X"", ""task_categories"": [""text-retrieval""], ""task_ids"": [""entity-linking-retrieval""], ""configs"": [{""config_name"": ""wikipedia_pairs"", ""data_files"": [{""split"": ""train"", ""path"": ""wikipedia_pairs/train.jsonl.tar.gz""}, {""split"": ""validation"", ""path"": ""wikipedia_pairs/dev.jsonl.tar.gz""}]}, {""config_name"": ""ar"", ""data_files"": [{""split"": ""validation"", ""path"": ""wikinews_mentions/ar/dev.jsonl""}, {""split"": ""test"", ""path"": ""wikinews_mentions/ar/test.jsonl""}]}, {""config_name"": ""de"", ""data_files"": [{""split"": ""validation"", ""path"": ""wikinews_mentions/de/dev.jsonl""}, {""split"": ""test"", ""path"": ""wikinews_mentions/de/test.jsonl""}]}, {""config_name"": ""en"", ""data_files"": [{""split"": ""validation"", ""path"": ""wikinews_mentions/en/dev.jsonl""}, {""split"": ""test"", ""path"": ""wikinews_mentions/en/test.jsonl""}]}, {""config_name"": ""es"", ""data_files"": [{""split"": ""validation"", ""path"": ""wikinews_mentions/es/dev.jsonl""}, {""split"": ""test"", ""path"": ""wikinews_mentions/es/test.jsonl""}]}, {""config_name"": ""fa"", ""data_files"": [{""split"": ""validation"", ""path"": ""wikinews_mentions/fa/dev.jsonl""}, {""split"": ""test"", ""path"": ""wikinews_mentions/fa/test.jsonl""}]}, {""config_name"": ""ja"", ""data_files"": [{""split"": ""validation"", ""path"": ""wikinews_mentions/ja/dev.jsonl""}, {""split"": ""test"", ""path"": ""wikinews_mentions/ja/test.jsonl""}]}, {""config_name"": ""pl"", ""data_files"": [{""split"": 
""validation"", ""path"": ""wikinews_mentions/pl/dev.jsonl""}, {""split"": ""test"", ""path"": ""wikinews_mentions/pl/test.jsonl""}]}, {""config_name"": ""ro"", ""data_files"": [{""split"": ""validation"", ""path"": ""wikinews_mentions/ro/dev.jsonl""}, {""split"": ""test"", ""path"": ""wikinews_mentions/ro/test.jsonl""}]}, {""config_name"": ""ta"", ""data_files"": [{""split"": ""validation"", ""path"": ""wikinews_mentions/ta/dev.jsonl""}, {""split"": ""test"", ""path"": ""wikinews_mentions/ta/test.jsonl""}]}, {""config_name"": ""tr"", ""data_files"": [{""split"": ""validation"", ""path"": ""wikinews_mentions/tr/dev.jsonl""}, {""split"": ""test"", ""path"": ""wikinews_mentions/tr/test.jsonl""}]}, {""config_name"": ""uk"", ""data_files"": [{""split"": ""validation"", ""path"": ""wikinews_mentions/uk/dev.jsonl""}, {""split"": ""test"", ""path"": ""wikinews_mentions/uk/test.jsonl""}]}, {""config_name"": ""candidate_entities"", ""data_files"": [{""split"": ""test"", ""path"": ""candidate_entities.jsonl.tar.gz""}]}], ""size_categories"": [""100K _**NOTE:** New evaluation results on Mewsli-X are **not** directly comparable to those reported in the paper because the dataset required further updates, as detailed [below](#updated-dataset). This does not affect the overall findings of the paper._
```
@inproceedings{ruder-etal-2021-xtreme,
title = ""{XTREME}-{R}: Towards More Challenging and Nuanced Multilingual Evaluation"",
author = ""Ruder, Sebastian and
Constant, Noah and
Botha, Jan and
Siddhant, Aditya and
Firat, Orhan and
Fu, Jinlan and
Liu, Pengfei and
Hu, Junjie and
Garrette, Dan and
Neubig, Graham and
Johnson, Melvin"",
booktitle = ""Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing"",
month = nov,
year = ""2021"",
address = ""Online and Punta Cana, Dominican Republic"",
publisher = ""Association for Computational Linguistics"",
url = ""https://aclanthology.org/2021.emnlp-main.802"",
doi = ""10.18653/v1/2021.emnlp-main.802"",
pages = ""10215--10245"",
}
```"
Nikity/Pornhub,"{""license"": ""odc-by"", ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data.csv""}], ""sep"": ""\u203d""}], ""language"": [""sq"", ""ar"", ""bn"", ""bg"", ""zh"", ""hr"", ""cs"", ""da"", ""nl"", ""en"", ""et"", ""fi"", ""fr"", ""de"", ""el"", ""he"", ""hi"", ""hu"", ""id"", ""it"", ""ja"", ""ko"", ""lv"", ""lt"", ""mk"", ""ml"", ""mr"", ""ne"", ""no"", ""fa"", ""pl"", ""pt"", ""pa"", ""ro"", ""ru"", ""sk"", ""sl"", ""so"", ""es"", ""sw"", ""sv"", ""tl"", ""ta"", ""te"", ""th"", ""tr"", ""uk"", ""ur"", ""vi"", ""cy""], ""tags"": [""not-for-all-audiences""], ""pretty_name"": ""Pornhub"", ""size_categories"": [""100K
CaLMQA is a long-form question answering (LFQA) dataset spanning 23 high- to low-resource languages.
## Dataset Details
### Dataset Description
CaLMQA is an LFQA dataset with 2K questions from 23 languages, 11 high- to mid-resource and 12 low-resource.
Questions are either *culturally specific* – uniquely or more likely to be asked by people of a specific
culture – or *culturally agnostic* (not culturally specific). These questions were collected to
evaluate the multilingual capabilities and
cultural knowledge of state-of-the-art models.
- **Languages (high- to mid-resource):** Arabic, Chinese, English, German, Hindi, Hebrew, Hungarian, Japanese, Korean, Russian, Spanish
- **Languages (low-resource):** Afar, Balochi, Faroese, Fijian, Hiligaynon, Kirundi, Papiamento, Pashto, Samoan, Tongan, Tswana, Wolof
- **License:** [MIT](https://opensource.org/license/MIT)
- **Repository:** [CaLMQA](https://github.com/2015aroras/CaLMQA/tree/main)
- **Paper:** *Pending*
## Uses
These questions were collected to evaluate the multilingual capabilities and
cultural knowledge of state-of-the-art models. Automatic metrics are not
sufficiently developed for multilingual LFQA, but human evaluation is viable.
## Dataset Structure
The dataset consists of QA entries.
Entry structure:
- `language`: The language of the question. For culturally specific questions, this is the question's original language. Culturally agnostic questions are all translated from English.
- `question_type`: Indicates whether the question is 'culturally specific' or 'culturally agnostic'; these are currently the only two possible values.
- `question`: The question that admits a long-form answer, in the language `language`.
- `question_english` : The English translation of the question.
- `answer` (optional): The answer to the question, in the language `language`.
Culturally specific questions are unique to each language. By contrast,
all culturally agnostic questions are parallel across all languages; they were translated from English into all
other languages.
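This parallel structure makes it easy to regroup culturally agnostic questions by their shared English source; a sketch over the fields listed above, using placeholder entries:

```python
from collections import defaultdict

def group_parallel(entries):
    """Group culturally agnostic entries by their shared English source,
    mapping each one to {language: question}. Field names follow the
    schema above."""
    groups = defaultdict(dict)
    for e in entries:
        if e["question_type"] == "culturally agnostic":
            groups[e["question_english"]][e["language"]] = e["question"]
    return dict(groups)

# Placeholder entries illustrating two parallel translations
entries = [
    {"language": "de", "question_type": "culturally agnostic",
     "question": "Warum ist der Himmel blau?",
     "question_english": "Why is the sky blue?"},
    {"language": "ko", "question_type": "culturally agnostic",
     "question": "하늘은 왜 파란가요?",
     "question_english": "Why is the sky blue?"},
]
parallel = group_parallel(entries)
print(sorted(parallel["Why is the sky blue?"]))  # ['de', 'ko']
```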
## Dataset Creation
### Source Data
Culturally specific questions in low-resource languages are manually written by hired crowdworkers.
Culturally specific questions in high- to mid-resource languages are sourced from the following websites.
- [Ejaba](https://www.ejaba.com/) (Arabic)
- [Ujeeb](https://ujeeb.com/) (Arabic)
- [Zhihu](https://www.zhihu.com/) (Chinese)
- [Reddit ELI5](https://www.reddit.com/r/explainlikeimfive/) (English)
- [Gutefrage](https://www.gutefrage.net/) (German)
- [Quora](https://he.quora.com) (Hebrew)
- [Let's Diskuss](https://hi.letsdiskuss.com/) (Hindi)
- [Gyakori kérdések](https://www.gyakorikerdesek.hu/) (Hungarian)
- [Yahoo Japan](https://chiebukuro.yahoo.co.jp/) (Japanese)
- [OKWave](https://okwave.jp/) (Japanese)
- [Naver](https://kin.naver.com/qna/) (Korean)
- [Yandex](https://yandex.ru/q/) (Russian)
- [Todoexpertos](https://www.todoexpertos.com/) (Spanish)
Culturally agnostic questions are obtained from [Reddit ELI5](https://www.reddit.com/r/explainlikeimfive/) in English.
#### Data Collection and Processing
We used separate data collection processes for high- to mid-resource languages and for low-resource languages.
For high- to mid-resource languages, we first conducted a survey amongst workers, asking them to provide community LFQA websites
(like Reddit and Quora) in their native non-English languages. We then hired workers to collect long-form, culturally specific,
information-seeking questions from our [collected websites](#source-data).
For low-resource languages, we instructed workers to write culturally specific questions.
#### Who are the source data producers?
All workers were native speakers of the language they collected questions for, as well as proficient English speakers.
Workers from the [Prolific](https://www.prolific.com/) platform were hired to collect culturally specific questions from websites.
Workers from the [UpWork](https://www.upwork.com/) platform were hired to write culturally specific questions in low-resource languages.
#### Personal and Sensitive Information
Question topics include religion, politics and history, and so some questions may pertain to sensitive issues.
We explicitly specify in our worker guidelines that collected questions should not be controversial,
and we manually reviewed all questions. Nevertheless, some questions may still be objectionable to some people.
## Bias, Risks, and Limitations
The questions we source from community QA websites might reflect societal biases in those communities and
might under-represent cultures not captured in these QA forums. Our worker-written questions might have workers' biases.
## Citation
**BibTeX:**
*pending*"
bongsoo/social_science_en_ko,"{""language"": [""ko""], ""license"": ""apache-2.0""}",- Social-science English-Korean translation corpus
yachay/text_coordinates_regions,"{""license"": ""mit"", ""tags"": [""multilingual"", ""text"", ""coordinates"", ""geospatial"", ""translation"", ""NER"", ""geo"", ""geo-tagged"", ""named-entity-recognition"", ""natural-language-processing"", ""geographic-data"", ""geolocation"", ""twitter"", ""reddit""], ""task_categories"": [""feature-extraction"", ""token-classification"", ""text-classification""], ""pretty_name"": ""Multilingual Geo-Tagged Social Media Posts (by 123 world regions)"", ""language"": [""en"", ""zh"", ""es"", ""hi"", ""ar"", ""bn"", ""pt"", ""ru"", ""ja"", ""pa"", ""de"", ""jv"", ""ms"", ""te"", ""vi"", ""ko"", ""fr"", ""mr"", ""ta"", ""ur"", ""tr"", ""it"", ""th"", ""gu"", ""fa"", ""pl""], ""size_categories"": [""100M
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
### Direct Use
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Data Collection and Processing
[More Information Needed]
#### Who are the source data producers?
[More Information Needed]
### Annotations [optional]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
#### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
```bibtex
@misc{sälevä2024paranames,
title={ParaNames 1.0: Creating an Entity Name Corpus for 400+ Languages using Wikidata},
author={Jonne Sälevä and Constantine Lignos},
year={2024},
eprint={2405.09496},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```"
neon-mao/language-dataset,"{""license"": ""mit"", ""task_categories"": [""text-classification""], ""language"": [""en"", ""zh"", ""fr"", ""ru"", ""ja"", ""it"", ""tr"", ""de"", ""pt"", ""es"", ""he"", ""uk"", ""nl"", ""fi"", ""pl"", ""lt"", ""cs"", ""da"", ""sv"", ""sr"", ""ar"", ""el"", ""ro"", ""bg"", ""vi"", ""sk"", ""id"", ""is"", ""ko"", ""ca"", ""hr"", ""th"", ""et"", ""sl"", ""no""], ""size_categories"": [""10M
# Dataset Card for ""WEATHub""
This dataset corresponds to the data described in the paper ""Global Voices, Local Biases: Socio-Cultural Prejudices across Languages""
accepted to EMNLP 2023.
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Website](https://iamshnoo.github.io/global_voices_local_biases/)
- **Repository:** [GitHub](https://github.com/iamshnoo/weathub)
- **Paper:** https://arxiv.org/abs/2310.17586
- **Point of Contact:** Anjishnu Mukherjee
### Dataset Summary
WEATHub is a dataset covering 24 languages. It contains words organized into groups of (target1, target2, attribute1, attribute2)
to measure the association target1:target2 :: attribute1:attribute2. For example, target1 might be insects and target2 flowers, and we
might measure whether insects or flowers are found more pleasant or unpleasant. Word associations are quantified
using the WEAT metric in our paper, which calculates an effect size (Cohen's d) and a p-value (to measure the
statistical significance of the results). In our paper, we use word embeddings from language models to perform these tests and understand
biased associations in language models across different languages.
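The WEAT statistic described above can be sketched in a few lines. The toy 2-d embeddings below are illustrative stand-ins, not real model embeddings:

```python
from statistics import mean, stdev


def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / ((sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5))


def association(w, A, B):
    # s(w, A, B): mean similarity of word w to attribute set A minus to set B
    return mean(cosine(w, a) for a in A) - mean(cosine(w, b) for b in B)


def weat_effect_size(X, Y, A, B):
    # Cohen's d over the per-word association scores of both target sets
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (mean(sx) - mean(sy)) / stdev(sx + sy)


# Toy embeddings: target set X leans toward attribute A, Y toward B
X = [(1.0, 0.1), (0.9, 0.2)]   # e.g. flowers
Y = [(0.1, 1.0), (0.2, 0.9)]   # e.g. insects
A = [(1.0, 0.0)]               # e.g. pleasant
B = [(0.0, 1.0)]               # e.g. unpleasant
print(weat_effect_size(X, Y, A, B))  # large positive d = strong X:A association
```

Swapping the target sets negates the effect size, which is a quick sanity check when implementing the metric.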
### Supported Tasks and Leaderboards
- `bias_eval` : The dataset is used to measure biased associations.
- This particular task isn't a standard task that is currently supported.
### Languages
The languages (in alphabetical order of language codes) are: Arabic (ar), Bengali (bn), Sorani Kurdish (ckb), Danish (da), German (de),
Greek (el), Spanish (es), Persian (fa), French (fr), Hindi (hi), Italian (it), Japanese (ja), Korean (ko), Kurmanji Kurdish (ku),
Marathi (mr), Punjabi (pa), Russian (ru), Telugu (te), Thai (th), Tagalog (tl), Turkish (tr), Urdu (ur), Vietnamese (vi), Chinese (zh).
## Dataset Structure
### Data Instances
An example instance is of the form:
```python
{
'attr1': {'category': 'Career',
'examples': ['σύμβουλος', 'διεύθυνση', 'επαγγελματίας', 'εταιρεία', 'μισθός', 'γραφείο', 'επιχείρηση', 'καριέρα', 'διευθύνων σύμβουλος']},
'attr2': {'category': 'Family',
'examples': ['σπίτι', 'γονείς', 'παιδιά', 'οικογένεια', 'ξαδερφια', 'γάμος', 'γάμος', 'συγγενείς']},
'targ1': {'category': 'MaleNames',
'examples': ['Αλέξανδρος', 'Δημήτρης', 'Γιώργος', 'Κώστας', 'Νίκος', 'Παναγιώτης', 'Σπύρος', 'Θοδωρής']},
'targ2': {'category': 'FemaleNames',
'examples': ['Αθηνά', 'Ελένη', 'Κατερίνα', 'Μαρία', 'Ευαγγελία', 'Αναστασία', 'Δέσποινα', 'Χριστίνα']},
'language': 'el',
'weat': 'WEAT6'
}
```
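An instance like the one above can be flattened into per-word rows before looking up embeddings. A minimal sketch (the record here is a shortened, hypothetical stand-in for a real WEATHub entry):

```python
# Hypothetical WEATHub-style record, shaped like the card's example
record = {
    "attr1": {"category": "Career", "examples": ["career", "salary"]},
    "attr2": {"category": "Family", "examples": ["home", "parents"]},
    "targ1": {"category": "MaleNames", "examples": ["Alex", "George"]},
    "targ2": {"category": "FemaleNames", "examples": ["Helen", "Maria"]},
    "language": "el",
    "weat": "WEAT6",
}


def flatten(record):
    """Yield (language, weat_id, role, category, word) rows for embedding lookup."""
    for role in ("targ1", "targ2", "attr1", "attr2"):
        group = record[role]
        for word in group["examples"]:
            yield (record["language"], record["weat"], role, group["category"], word)


rows = list(flatten(record))
```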
### Data Fields
- A single data point has the following features:
- name: language (corresponding to the language codes given above)
- name: weat (ID corresponding to a WEAT category)
- name: attr1.category (a descriptive name for attribute 1)
- name: attr1.examples (list of words for attribute 1)
- name: attr2.category (a descriptive name for attribute 2)
- name: attr2.examples (list of words for attribute 2)
- name: targ1.category (a descriptive name for target 1)
- name: targ1.examples (list of words for target 1)
- name: targ2.category (a descriptive name for target 2)
- name: targ2.examples (list of words for target 2)
- All the features are stored as strings. The examples represent lists of strings.
### Data Splits
- The dataset is divided into 3 splits as per the description in our paper:
- original_weat - described in Table 1 of our paper; this corresponds to the original WEAT categories given by Caliskan et al. in their
seminal 2017 work (Semantics derived automatically from language corpora contain human-like biases)
- new_human_biases - described in Table 2 of our paper; this corresponds to contemporary dimensions of bias that are more human-centric in
modern society.
- india_specific_biases - data corresponding to India-specific bias dimensions as described in the NAACL '22 paper by Malik et al.
(Socially Aware Bias Measurements for Hindi Language Representations)
## Dataset Creation
### Curation Rationale
This dataset is intended to be used for measuring intrinsic biases in word embeddings obtained from language models.
### Source Data
#### Initial Data Collection and Normalization
Described in detail in Section 2 of our paper. Briefly, for existing WEAT categories, we use human annotations to improve the quality of the
translated WEAT word lists. For new WEAT categories, we researched possible relevant dimensions and arrived at words after thorough
discussions with our annotators.
#### Who are the source language producers?
Data for each language comes from native speakers of that language. All annotators who participated in our study are native speakers of
their respective languages and have at least a college-level education.
### Annotations
#### Annotation process
Described in detail in Section 2 of our paper; annotations are at the word level.
To collect annotated data in various languages, we provide our annotators with the English words and their corresponding automatic
translations, separated by WEAT category. We instruct them to verify the accuracy of the translations and provide corrected versions for any
inaccuracies. Additionally, we ask annotators to provide grammatically gendered forms of words, if applicable, or multiple translations
of a word, if necessary.
#### Who are the annotators?
All annotators who participated in our study are native speakers of
their respective languages and have at least a college-level education.
### Personal and Sensitive Information
Since this dataset tries to measure biased associations at the word level, there may be some word level biases that are sensitive to certain
groups.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset should be a starting point for measuring word level biased associations in a multilingual setting, which has not been explored
in much depth in recent literature.
### Discussion of Biases
This dataset consists of word-level information used for measuring biases. Since the words are annotated by humans, they may, to a certain
extent, reflect the biases those annotators hold at an individual level.
### Other Known Limitations
- For most of the languages in our dataset WEATHub, we had access to at least two annotators to cross-verify the accuracy of
the human translations and determine whether the translated words fit the context of that particular WEAT category.
For some languages, however, we had only one annotator, so the data may partly reflect that individual annotator's biases,
although those biases are also somewhat reflected by Google Translate, so the issue is not purely individualistic.
- While we have tried to cover as many languages from the global South as possible, we acknowledge that 24 languages are indeed a
tiny proportion of the 7000 languages in the world, some of which do not even have text representations.
- WEAT can be an unreliable metric for contextualized embeddings from transformer models, and better metrics are needed to study intrinsic
biases in such models. We believe the target and attribute pairs we provide as part of WEATHub in multiple languages are an important step
towards a better multilingual metric for evaluating intrinsic biases in language models.
## Additional Information
### Dataset Curators
This dataset was curated by Anjishnu Mukherjee, Chahat Raj, Ziwei Zhu and Antonios Anastasopoulos for their EMNLP paper while the first two authors were
pursuing their PhD at George Mason University. This work
was generously supported by the National Science Foundation under award IIS-2327143. Computational resources for experiments were provided by the
Office of Research Computing at George Mason University (URL: https://orc.gmu.edu) and funded in part by grants from the
National Science Foundation (Awards Number 1625039 and 2018631).
### Licensing Information
This dataset is currently released under the CC 4.0 license (subject to update if required).
### Citation Information
```
@inproceedings{mukherjee-etal-2023-global,
title = ""{G}lobal {V}oices, Local Biases: Socio-Cultural Prejudices across Languages"",
author = ""Mukherjee, Anjishnu and
Raj, Chahat and
Zhu, Ziwei and
Anastasopoulos, Antonios"",
editor = ""Bouamor, Houda and
Pino, Juan and
Bali, Kalika"",
booktitle = ""Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing"",
month = dec,
year = ""2023"",
address = ""Singapore"",
publisher = ""Association for Computational Linguistics"",
url = ""https://aclanthology.org/2023.emnlp-main.981"",
doi = ""10.18653/v1/2023.emnlp-main.981"",
pages = ""15828--15845"",
abstract = ""Human biases are ubiquitous but not uniform: disparities exist across linguistic, cultural, and societal borders. As large amounts of recent literature suggest, language models (LMs) trained on human data can reflect and often amplify the effects of these social biases. However, the vast majority of existing studies on bias are heavily skewed towards Western and European languages. In this work, we scale the Word Embedding Association Test (WEAT) to 24 languages, enabling broader studies and yielding interesting findings about LM bias. We additionally enhance this data with culturally relevant information for each language, capturing local contexts on a global scale. Further, to encompass more widely prevalent societal biases, we examine new bias dimensions across toxicity, ableism, and more. Moreover, we delve deeper into the Indian linguistic landscape, conducting a comprehensive regional bias analysis across six prevalent Indian languages. Finally, we highlight the significance of these social biases and the new dimensions through an extensive comparison of embedding methods, reinforcing the need to address them in pursuit of more equitable language models."",
}
```
### Contributions
Thanks to [@iamshnoo](https://github.com/iamshnoo) for adding this dataset."
youngwoo3283/df_sentiment_chat,"{""language"": [""ko""]}","### Data source: https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=86
Data built from the source above using only Human Response 1 and System Response 1."
traintogpb/aihub-kozh-translation-integrated-large-5.9m,"{""license"": ""mit"", ""task_categories"": [""translation""], ""language"": [""ko"", ""zh""]}","### AI Hub Ko-Zh Translation Dataset (Integrated)
A merge of AI Hub's Korean-Chinese translation datasets (listed below). The merged data totals 5,934,596 pairs, from which a 10,000-pair validation set and a 2,000-pair test set are held out; these held-out sets are identical across all dataset sizes (large-5.9m, base-1m, small-100k).
- large-5.9m (train): 100% of the merged data; 5,922,596 pairs in total
- base-1m (train): 1,000,000 pairs drawn from the merged data
- small-100k (train): 100,000 pairs drawn from the merged data
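The fixed hold-out scheme above (the same validation and test sets reused across every training size) can be sketched as follows, with sizes scaled down for illustration:

```python
import random


def carve_splits(pairs, n_val, n_test, seed=42):
    """Shuffle once with a fixed seed so val/test stay identical
    no matter how much of the remaining pool is used for training."""
    pool = list(pairs)
    random.Random(seed).shuffle(pool)
    val = pool[:n_val]
    test = pool[n_val:n_val + n_test]
    train = pool[n_val + n_test:]
    return train, val, test


# Scaled-down demo: 100 pairs, hold out 10 for validation and 2 for test
pairs = [f"pair-{i}" for i in range(100)]
train, val, test = carve_splits(pairs, n_val=10, n_test=2)
# A smaller "base"-style training set is just a prefix of the same train pool,
# so it still never overlaps the held-out sets
train_base = train[:50]
```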
### Subsets
| Name | Total Size | Chinese Size (Utilized Only) | URL | Datasetkey (AIHub) |
|---|---|---|---|---|
| 한국어-중국어 번역 말뭉치(기술과학) | 1170000 | 1170000 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=128) | 128 |
| 한국어-중국어 번역 말뭉치(사회과학) | 1170000 | 1170000 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=129) | 129 |
| 일상생활 및 구어체 한-중, 한-일 번역 병렬 말뭉치 데이터 | 2700000 | 1349470 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=546) | 546 |
| 전문분야 영-한, 중-한 번역 말뭉치(식품) | 1350000 | 1326837 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71262) | 71262 |
| 방송 콘텐츠 한-중, 한-일 번역 병렬 말뭉치 데이터 | 1487088 | 367921 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71263) | 71263 |
| 발화유형(문어, 구어, 채팅) 별 기계번역 병렬 말뭉치 | 82002 | 26989 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71411) | 71411 |
| 한국어-다국어 번역 말뭉치(기술과학) | 270459 | 146317 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71493) | 71493 |
| 한국어-다국어 번역 말뭉치(기초과학) | 270317 | 84419 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71496) | 71496 |
| 한국어-다국어 번역 말뭉치(인문학) | 271721 | 80375 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71498) | 71498 |
| 방송콘텐츠 한국어-아시아어 번역 말뭉치 | 820387 | 112978 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71591) | 71591 |
| AI 허브 데이터 활용을 위한 기계 번역말뭉치 | 2653948 | 212268 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71593) | 71593 |"
zzunyang/LawQA_LawSee,{},"---
task_categories:
- conversational
language:
- ko
tags:
- legal
---"
DinoTheLewis/KoAlpaca_persona_multiturn,{},
nayohan/Magpie-Pro-MT-300K-v0.1-ko,"{""language"": [""ko""], ""task_categories"": [""text-generation""], ""dataset_info"": {""features"": [{""name"": ""input1"", ""dtype"": ""string""}, {""name"": ""output1"", ""dtype"": ""string""}, {""name"": ""input2"", ""dtype"": ""string""}, {""name"": ""output2"", ""dtype"": ""string""}, {""name"": ""model"", ""dtype"": ""string""}, {""name"": ""gen_input_config"", ""struct"": [{""name"": ""temperature"", ""dtype"": ""float64""}, {""name"": ""top_p"", ""dtype"": ""float64""}]}, {""name"": ""conversations"", ""list"": [{""name"": ""from"", ""dtype"": ""string""}, {""name"": ""value"", ""dtype"": ""string""}]}, {""name"": ""uuid"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3617261192, ""num_examples"": 300000}], ""download_size"": 1857815558, ""dataset_size"": 3617261192}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""tags"": [""instruction"", ""korean""]}","Translated [Magpie-Align/Magpie-Pro-MT-300K-v0.1](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-MT-300K-v0.1) using [nayohan/llama3-instrucTrans-enko-8b](https://huggingface.co/nayohan/llama3-instrucTrans-enko-8b).
This dataset is a raw translated dataset and contains repetitive sentences generated by the model, so it needs to be filtered.
```
@misc{xu2024magpie,
title={Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing},
author={Zhangchen Xu and Fengqing Jiang and Luyao Niu and Yuntian Deng and Radha Poovendran and Yejin Choi and Bill Yuchen Lin},
year={2024},
eprint={2406.08464},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```"
ChuGyouk/MedQA,"{""configs"": [{""config_name"": ""ko"", ""data_files"": [{""split"": ""train"", ""path"": ""medqa_train_trans.jsonl""}, {""split"": ""test"", ""path"": ""medqa_test_trans.jsonl""}]}, {""config_name"": ""en"", ""data_files"": [{""split"": ""train"", ""path"": ""medqa_edited_train.jsonl""}, {""split"": ""test"", ""path"": ""medqa_edited_test.jsonl""}]}], ""license"": ""cc-by-4.0"", ""task_categories"": [""text-generation""], ""language"": [""ko"", ""en""], ""tags"": [""medical""]}","Original dataset introduced by Jin et al. in [What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams](https://paperswithcode.com/paper/what-disease-does-this-patient-have-a-large)
# En split
Only the columns were edited; the contents are the same.
# Ko split
## Train
The train dataset is translated by ""solar-1-mini-translate-enko"".
## Test
The test dataset is translated by DeepL Pro.
**reference-free COMET score: 0.7989** *(Unbabel/wmt23-cometkiwi-da-xxl)*
Citation information:
```
@article{jin2020disease,
  title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},
  author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
  journal={arXiv preprint arXiv:2009.13081},
  year={2020}
}
```"
qwopqwop/ALMA-R-ko-en,"{""language"": [""ko"", ""en""], ""license"": ""cc-by-sa-4.0"", ""size_categories"": [""1K
SAMSEMO: New dataset for multilingual and multimodal emotion recognition
## Dataset Details
### Dataset Sources
- **Repository:** https://github.com/samsungnlp/samsemo
- **Paper:** SAMSEMO: New dataset for multilingual and multimodal emotion recognition
### Dataset Structure
```
SAMSEMO/
├── data - zipped directories for each language with files: jpg, mp4, wav
│ ├── pkl_files - files in the pkl format (each language directory from data directory after processing to pkl format)
├── metadata - directory with metadata
├── samsemo.tsv - metadata file (described below)
└── splits - txt files with splits (list of ids) for each language
```
### Annotations
SAMSEMO metadata file is a .tsv file containing several columns:
- utterance_id – alphanumerical id of the video scene. It consists of ID of the source video followed by the underscore and the number indicating the scene (utterance taken from a given movie)
- movie_title – the title of the source video, according to the website it was taken from
- movie_link – the link leading to the source video
- source_scene_start, source_scene_stop – the beginning and ending of the scene determined in the preliminary annotation. The annotators provided time in hh:mm:ss format, without milliseconds. We cut out the scenes, determining the start on the beginning of the first second (ss.00), and the end on the end of the last second (ss.99). Later on, the scenes were adjusted to eliminate the redundant fragments.
- language – the language of the scene: EN = English, DE = German, ES = Spanish; PL = Polish, KO = Korean
- sex – sex of the speaker identified by the annotators (not confirmed by the speaker – see DISCLAIMER). Possible labels: male, female, other.
- age – approximate age of the speaker identified by the annotators (not confirmed by the speaker – see DISCLAIMER). Possible labels: adolescent, adult, elderly.
- race – race of the speaker identified by the annotators (not confirmed by the speaker – see DISCLAIMER). Possible labels: asian, black, hispanic, white, other.
- covered_face – label indicating if speaker’s face is partially covered, e.g. by their hands, scarf, face mask etc. No = the face is not covered, Yes = the face is covered
- multiple_faces – label indicating whether one person or more is shown in the scene. No = one person, Yes = multiple people.
- emotion_1_annotator_1, emotion_2_annotator_1 – emotion labels assigned to the scene by the first annotator.
- emotion_1_annotator_2, emotion_2_annotator_2 – emotion labels assigned to the scene by the second annotator.
- emotion_1_annotator_3, emotion_2_annotator_3 – emotion labels assigned to the scene by the third annotator.
- aggregated_emotions – final emotions assigned to the video scene. If two or three annotators assigned a certain label to the scene, this label is included in the final aggregation, hence is present in this column.
- annotator_1, annotator_2, annotator_3 – anonymized IDs of the annotators.
- transcript – the text of the utterance from the scene. It is an output of the ASR, subsequently verified manually.
- translation_de, translation_en, translation_es, translation_ko, translation_pl – the translation of the text into the other languages used in this dataset. Note that this was done by a machine translation engine and has not been manually verified.
- duration – the duration of the scene in the following format: hh:mm:ss.ms
- movie_type – the type of the source video from which the scene was taken. Possible categories: advertisement, debate, documentary, interview, lecture, monologue, movie, news, speech, stand-up, theatrical play, vlog, web or TV show, workout.
- license – the license under which we share the video scene. Note that the metadata are shared under the CC BY-NC-SA 4.0 license (see DISCLAIMER).
- author – the author of the video, identified by us to the best of our knowledge on the basis of the data provided on the websites from which the videos were taken.
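The aggregation rule for `aggregated_emotions` described above (keep a label assigned by two or three annotators) can be sketched as:

```python
from collections import Counter


def aggregate_emotions(annotations):
    """Keep a label if at least two annotators assigned it.

    `annotations` is one list of emotion labels per annotator, i.e. that
    annotator's emotion_1/emotion_2 columns merged into a single list.
    """
    counts = Counter()
    for labels in annotations:
        # Count each label at most once per annotator
        counts.update(set(labels))
    return sorted(label for label, n in counts.items() if n >= 2)


annotator_1 = ["happiness", "surprise"]
annotator_2 = ["happiness"]
annotator_3 = ["sadness"]
print(aggregate_emotions([annotator_1, annotator_2, annotator_3]))
```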
DISCLAIMER
1) Please note that the metadata provided for each scene include labels referring to gender of the speakers.
The annotators were asked to provide such labels so that SAMSEMO could be verified in terms of gender representation (males 57.32%, females 42.51%, other 0.17%).
The same applies to race information: annotators were asked to label the presumed race of the speakers using a restricted number of labels so that SAMSEMO could be assessed in terms of racial representation (we did not have access to self-reports of speakers in this regard).
We acknowledge that both concepts are shaped by social and cultural circumstances and the labels provided in SAMSEMO are based on subjective perceptions and individual experience of annotators.
Thus, the metadata provided should be approached very carefully in future studies.
2) The movie license information provided in SAMSEMO has been collected with due diligence. All video material is shared under its original licenses.
However, if any video materials included in the SAMSEMO dataset infringe your copyright by any means, please send us a takedown notice containing the movie title(s) and movie link(s).
Please also include a statement by you, under penalty of perjury, that the information in your notice is accurate and that you are the copyright owner or authorized to act on the copyright owner's behalf.
3) All SAMSEMO metadata (emotion annotation, transcript and speaker information) are shared under the CC BY-NC-SA 4.0 license.
## Citation
```
@inproceedings{samsemo24_interspeech,
title = {SAMSEMO: New dataset for multilingual and multimodal emotion recognition},
author = {Pawel Bujnowski and Bartlomiej Kuzma and Bartlomiej Paziewski and Jacek Rutkowski and Joanna Marhula and Zuzanna Bordzicka and Piotr Andruszkiewicz},
year = {2024},
booktitle = {Interspeech 2024},
pages = {2925--2929},
doi = {10.21437/Interspeech.2024-212},
}
```"
prometheus-eval/MM-Eval,"{""dataset_info"": {""features"": [{""name"": ""prompt"", ""dtype"": ""string""}, {""name"": ""chosen"", ""dtype"": ""string""}, {""name"": ""rejected"", ""dtype"": ""string""}, {""name"": ""language"", ""dtype"": ""string""}, {""name"": ""subset"", ""dtype"": ""string""}, {""name"": ""chosen_model"", ""dtype"": ""string""}, {""name"": ""rejected_model"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""__index_level_0__"", ""dtype"": ""int64""}], ""splits"": [{""name"": ""test"", ""num_bytes"": 30802291, ""num_examples"": 11081}], ""download_size"": 13929039, ""dataset_size"": 30802291}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""test"", ""path"": ""data/test-*""}]}], ""language"": [""ar"", ""bn"", ""ca"", ""de"", ""en"", ""es"", ""eu"", ""fr"", ""gl"", ""it"", ""ja"", ""ko"", ""ru"", ""sw"", ""te"", ""th"", ""vi"", ""zh""], ""license"": ""cc-by-sa-4.0""}","# Multilingual Meta-EVALuation benchmark (MM-Eval)
👨💻Code | 📄Paper | 🤗 MMQA
**MM-Eval** is a multilingual meta-evaluation benchmark consisting of five core subsets—Chat, Reasoning, Safety, Language Hallucination, and Linguistics—spanning 18 languages and a Language Resource subset spanning 122 languages for a broader analysis of language effects.
> **Design Choice**
> In this work, we minimize the inclusion of translated samples, as mere translation may alter existing preferences due to translation errors. Instead, we increase the proportion of linguistically and culturally related instances. Consequently, translated samples are only included in the Safety subset. Additionally, we enrich the dataset with a Linguistics subset designed to evaluate the judge model's ability to comprehend the linguistic characteristics of various languages accurately. Furthermore, we incorporate hand-crafted culturally related prompts in the Language Hallucination subset. If you are interested, please look into [MMQA (Multilingual, Multicultural Question Answering)](https://huggingface.co/datasets/prometheus-eval/MMQA).

### Languages Covered:
Arabic, Bengali, Catalan, German, English, Spanish, Basque, French, Galician, Italian, Japanese, Korean, Russian, Swahili, Telugu, Thai, Vietnamese, Chinese
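Each MM-Eval row pairs a `chosen` and a `rejected` response for a prompt, so meta-evaluating a judge reduces to checking how often it prefers `chosen`. A minimal sketch with a stand-in scoring function (a real judge would be an LLM or reward model, not this placeholder heuristic):

```python
def judge_accuracy(rows, score):
    """Fraction of rows where the judge scores `chosen` above `rejected`."""
    correct = sum(
        score(r["prompt"], r["chosen"]) > score(r["prompt"], r["rejected"])
        for r in rows
    )
    return correct / len(rows)


# Toy rows shaped like the benchmark's (prompt, chosen, rejected) fields
rows = [
    {"prompt": "Explain rain.", "chosen": "Water vapor condenses and falls.", "rejected": "No."},
    {"prompt": "2+2?", "chosen": "4", "rejected": "Four hundred, probably, maybe."},
]
toy_score = lambda prompt, resp: len(resp)  # placeholder judge: longer is better
print(judge_accuracy(rows, toy_score))
```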
### Citation:
If you find this benchmark helpful, please consider citing our paper!
```
@article{son2024mm,
title={MM-Eval: A Multilingual Meta-Evaluation Benchmark for LLM-as-a-Judge and Reward Models},
author={Son, Guijin and Yoon, Dongkeun and Suk, Juyoung and Aula-Blasco, Javier and Aslan, Mano and Kim, Vu Trong and Islam, Shayekh Bin and Prats-Cristi{\`a}, Jaume and Tormo-Ba{\~n}uelos, Luc{\'\i}a and Kim, Seungone},
journal={arXiv preprint arXiv:2410.17578},
year={2024}
}
```"
felfri/MAGBIG,"{""license"": ""apache-2.0"", ""configs"": [{""config_name"": ""direct"", ""data_files"": [{""split"": ""adjectives"", ""path"": ""data/adjectives-00000-of-00001.csv""}, {""split"": ""occupations"", ""path"": ""data/occupations_direct-00000-of-00001.csv""}]}, {""config_name"": ""indirect"", ""data_files"": [{""split"": ""occupations"", ""path"": ""data/occupations_indirect-00000-of-00001.csv""}]}, {""config_name"": ""feminine"", ""data_files"": [{""split"": ""occupations"", ""path"": ""data/occupations_direct_feminine-00000-of-00001.csv""}]}, {""config_name"": ""gender_star"", ""data_files"": [{""split"": ""occupations"", ""path"": ""data/occupations_german_gender_star-00000-of-00001.csv""}]}], ""task_categories"": [""text-to-image""], ""language"": [""en"", ""de"", ""it"", ""fr"", ""es"", ""zh"", ""ja"", ""ko"", ""ru"", ""ar""], ""size_categories"": [""1K
The Bias Benchmark for Question Answering (BBQ) is designed to evaluate social biases of language models (LMs), but it is not simple to adapt this benchmark to cultural contexts other than the US because social biases depend heavily on the cultural context. In this paper, we present **KoBBQ, a Korean bias benchmark dataset**, and we propose a general framework that addresses considerations for cultural adaptation of a dataset. Our framework includes partitioning the BBQ dataset into three classes--Simply-Transferred (can be used directly after cultural translation), Target-Modified (requires localization in target groups), and Sample-Removed (does not fit Korean culture)-- and adding four new categories of bias specific to Korean culture. We conduct a large-scale survey to collect and validate the social biases and the targets of the biases that reflect the stereotypes in Korean culture. The resulting **KoBBQ dataset comprises 268 templates and 76,048 samples across 12 categories of social bias**. We use KoBBQ to measure the accuracy and bias scores of several state-of-the-art multilingual LMs. The results clearly show differences in the bias of LMs as measured by KoBBQ and a machine-translated version of BBQ, demonstrating the need for and utility of a well-constructed, culturally-aware social bias benchmark.
## Dataset Details
### Dataset Description
We propose a framework for developing culturally adaptive datasets and present KoBBQ that reflects the situations and social biases in South Korea. The dataset curation process consists of the following steps: (1) categorization of BBQ templates, (2) cultural-sensitive translation, (3) demographic category construction, (4) creation of new templates, and (5) a large-scale survey on social bias.
### Statistics
| Category | # of Templates | # of Samples |
|:--------:|:--------------:|:------------:|
| Age | 21 | 3,608 |
| Disability Status | 20 | 2,160 |
| Gender Identity | 25 | 768 |
| Physical Appearance | 20 | 4,040 |
| Race/Ethnicity/Nationality | 43 | 51,856|
| Religion | 20 | 688 |
| Socio-Economic Status | 27 | 6,928 |
| Sexual Orientation | 12 | 552 |
| Domestic Area of Origin | 22 | 800 |
| Family Structure | 23 | 1,096 |
| Political Orientation | 11 | 312 |
| Education Background | 24 | 3,240 |
| **Total** | 268| 76,048|
### Dataset Sources
- **Repository:** [github](https://github.com/naver-ai/KoBBQ/)
- **Paper:** [arxiv](https://arxiv.org/abs/2307.16778)
- **Project Page:** [webpage](https://jinjh0123.github.io/KoBBQ/)
## Uses
### Direct Use
To evaluate language models using KoBBQ, please refer [here](https://github.com/naver-ai/KoBBQ/tree/main?tab=readme-ov-file#how-to-evaluate)
### Ethical Considerations
We do not condone any malicious use of our dataset. It must not be used as training data to automatically generate and publish biased languages targeting specific groups. We strongly encourage researchers and practitioners to utilize this dataset in beneficial ways, such as mitigating bias in language models.
## Citation
**BibTeX:**
```
@article{jin2023kobbq,
title={Kobbq: Korean bias benchmark for question answering},
author={Jin, Jiho and Kim, Jiseon and Lee, Nayeon and Yoo, Haneul and Oh, Alice and Lee, Hwaran},
journal={arXiv preprint arXiv:2307.16778},
year={2023}
}
```
**APA:**
```
Jin, J., Kim, J., Lee, N., Yoo, H., Oh, A., & Lee, H. (2023). Kobbq: Korean bias benchmark for question answering. arXiv preprint arXiv:2307.16778.
```"
nlp-with-deeplearning/Ko.WizardLM_evol_instruct_V2_196k,"{""license"": ""cc-by-nc-sa-4.0"", ""task_categories"": [""text-generation"", ""question-answering""], ""language"": [""en"", ""ko""]}","This dataset is a translation of [WizardLM/WizardLM_evol_instruct_V2_196k](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) produced with our in-house translation engine. The README below was also machine-translated; please keep this in mind.
## News
- 🔥 🔥 🔥 [08/11/2023] We release the **WizardMath** models.
- 🔥 **WizardMath-70B-V1.0** slightly outperforms some closed-source LLMs on GSM8K, including **ChatGPT 3.5**, **Claude Instant 1**, and **PaLM 2 540B**.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), **24.8** points higher than the SOTA open-source LLM.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **22.7 pass@1** on the [MATH benchmark](https://github.com/hendrycks/math), **9.2** points higher than the SOTA open-source LLM.
| Model | Checkpoint | Paper | GSM8k | MATH | Online Demo | License |
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 HF Link | 📃 [WizardMath]| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| Llama 2 |
| WizardMath-13B-V1.0 | 🤗 HF Link | 📃 [WizardMath]| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| Llama 2 |
| WizardMath-7B-V1.0 | 🤗 HF Link | 📃 [WizardMath]| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| Llama 2 |
| Model | Checkpoint | Paper |MT-Bench | AlpacaEval | WizardEval | HumanEval | License|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| WizardLM-13B-V1.2 | 🤗 HF Link| | 7.06 | 89.17% | 101.4% |36.6 pass@1| Llama 2 License |
| WizardLM-13B-V1.1 | 🤗 HF Link | | 6.76 |86.32% | 99.3% |25.0 pass@1| Non-commercial|
| WizardLM-30B-V1.0 | 🤗 HF Link | | 7.01 | | 97.8% | 37.8 pass@1| Non-commercial |
| WizardLM-13B-V1.0 | 🤗 HF Link | | 6.35 | 75.31% | 89.1% | 24.0 pass@1 | Non-commercial|
| WizardLM-7B-V1.0| 🤗 HF Link | 📃 [WizardLM]| | | 78.0% |19.1 pass@1| Non-commercial|
| WizardCoder-15B-V1.0 | 🤗 HF Link | 📃 [WizardCoder] | || 57.3 pass@1 | OpenRAIL-M |
**Repository**: https://github.com/nlpxucan/WizardLM
**Twitter**: https://twitter.com/WizardLM_AI/status/1669364947606982656
This dataset contains 143K rows of Evol-Instruct data mixed from Alpaca and ShareGPT.
It is the latest optimized version of the Evol-Instruct training data for the WizardLM model.
Due to the data usage license, please **merge** the original [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) to obtain the **final full dataset**, which consists of about 196k rows."
allganize/flare-fiqasa-ko,"{""dataset_info"": {""features"": [{""name"": ""conversation_id"", ""dtype"": ""string""}, {""name"": ""conversations"", ""list"": [{""name"": ""from"", ""dtype"": ""string""}, {""name"": ""value"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 52262, ""num_examples"": 204}], ""download_size"": 19986, ""dataset_size"": 52262}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""test"", ""path"": ""data/test-*""}]}], ""license"": ""mit"", ""language"": [""ko""]}","# flare-fiqasa-ko
### Dataset Description
- `flare-fiqasa-ko` is a sentiment analysis dataset over financial-domain news headlines; the input consists of text only.
- To build the Korean data, the test set of [ChanceFocus/flare-fiqasa](https://huggingface.co/datasets/ChanceFocus/flare-fiqasa) was first translated with Allganize's in-house translation model, Allganize Translator.
Mistranslated examples were then removed manually, yielding 204 evaluation examples.
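Each record stores its prompt and gold label as a two-turn conversation (a `human` turn, then a `gpt` turn holding the gold sentiment), so evaluation reduces to comparing a model's answer against the `gpt` turn. A hypothetical scoring sketch:

```python
def extract_pair(record):
    """Split a two-turn conversation into (prompt, gold_label)."""
    human = next(t["value"] for t in record["conversations"] if t["from"] == "human")
    gold = next(t["value"] for t in record["conversations"] if t["from"] == "gpt")
    return human, gold


def accuracy(records, predict):
    pairs = [extract_pair(r) for r in records]
    return sum(predict(prompt) == gold for prompt, gold in pairs) / len(pairs)


# Toy record shaped like the dataset's conversation format
sample = {
    "conversation_id": "fiqasa938",
    "conversations": [
        {"from": "human", "value": "What is the sentiment of this post? ..."},
        {"from": "gpt", "value": "negative"},
    ],
}
print(accuracy([sample], lambda prompt: "negative"))
```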
### Data Source
- [ChanceFocus/flare-fiqasa](https://huggingface.co/datasets/ChanceFocus/flare-fiqasa)
### Data Example
```
{
'conversation_id': 'fiqasa938',
'conversations': array([
{
'from': 'human',
'value': '''다음 재무 게시물의 감정은 무엇인가요? 긍정, 부정 또는 중립인가요?
텍스트: $BBRY 실제로 부채가 없고 현금 3.1달러를 포함하면 주당 0.03달러의 손실을 입었습니다.
정답:'''
},
{
'from': 'gpt',
'value': '부정'
}
], dtype=object)
}
```"
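Each record stores the prompt in a `human` turn and the gold label in a `gpt` turn, as in the example above. A small illustrative helper (not part of the dataset) to split a record into prompt and gold answer for evaluation:

```python
# Illustrative helper that pulls the question and gold answer out of one
# conversations-style record in the format shown above.

def split_example(record):
    """Return (prompt, gold_answer) from a conversations-style record."""
    prompt = next(t["value"] for t in record["conversations"] if t["from"] == "human")
    gold = next(t["value"] for t in record["conversations"] if t["from"] == "gpt")
    return prompt, gold

example = {
    "conversation_id": "fiqasa938",
    "conversations": [
        {"from": "human", "value": "다음 재무 게시물의 감정은 무엇인가요? 긍정, 부정 또는 중립인가요?"},
        {"from": "gpt", "value": "부정"},
    ],
}
prompt, gold = split_example(example)
print(gold)  # → 부정
```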
g0ster/TinyStories-Korean,"{""license"": ""mit"", ""task_categories"": [""translation""], ""language"": [""ko"", ""en""], ""pretty_name"": ""tinystories-korean"", ""size_categories"": [""1M
This dataset is a translated version of [roneneldan](https://huggingface.co/roneneldan)'s [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) dataset.
I first downloaded roneneldan's TinyStories and organized it into a db file. Then I used a local translation model, [eeve](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0),
to translate it, and converted the result back to a txt file.
Feel free to use!
---
## Citation
```
@misc{kim2024tinystories,
title={TinyStories Korean translations},
author={Dohoon Kim(g0ster)},
year={2024},
}
```"
bebechien/HongGildongJeon,"{""language"": [""ko""], ""license"": ""pddl"", ""task_categories"": [""text-generation""]}","Source
original : https://ko.wikisource.org/wiki/%ED%99%8D%EA%B8%B8%EB%8F%99%EC%A0%84_36%EC%9E%A5_%EC%99%84%ED%8C%90%EB%B3%B8 \
license : public domain
modern translation : https://ko.wikisource.org/wiki/%ED%99%8D%EA%B8%B8%EB%8F%99%EC%A0%84_36%EC%9E%A5_%EC%99%84%ED%8C%90%EB%B3%B8/%ED%98%84%EB%8C%80%EC%96%B4_%ED%95%B4%EC%84%9D \
license : https://creativecommons.org/licenses/by-sa/4.0/ \
translator : 직지프로"
nlp-with-deeplearning/ko.openhermes,"{""license"": ""cc-by-nc-sa-4.0"", ""task_categories"": [""text-generation""], ""language"": [""en"", ""ko""], ""tags"": [""distillation"", ""synthetic data"", ""gpt""], ""size_categories"": [""100K
This dataset is a reformatting of [OpenAssistant Conversations (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), which is
> a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
It was modified from the original by following the tree branches and reforming the messages into conversation streams.
#### Who are the editors?
Aymeric Roucher, Hugging Face.
### Example
````
{
'conversation_id': '6708c47f-05c9-4346-b3d2-40b2bd24fde4',
'user_id': '2c96e467-66f0-4be7-9693-bda51356a424',
'created_date': '2023-02-06T18:48:49.391686+00:00',
'messages': [
{'content': 'Can you write a short introduction about the relevance of the term ""monopsony"" in economics? Please use examples related to potential monopsonies in the labour market and cite relevant research.',
'role': 'user'},
{'content': '""Monopsony"" refers to a market structure where there is only one buyer for a particular good or service. In economics, this term is particularly relevant in the labor market, where a monopsony employer has significant power over the wages and working conditions of their employees. The presence of a monopsony can result in lower wages and reduced employment opportunities for workers, as the employer has little incentive to increase wages or provide better working conditions.\n\nRecent research has identified potential monopsonies in industries such as retail and fast food, where a few large companies control a significant portion of the market (Bivens & Mishel, 2013). In these industries, workers often face low wages, limited benefits, and reduced bargaining power, leading to a situation where they are dependent on the employer for their livelihood. This dependence can result in further suppression of wages and a decline in working conditions.\n\nOverall, the concept of monopsony is essential to understanding the dynamics of labor markets and the impact of market power on workers. Further research is needed to understand the extent and impact of monopsonies on the economy and to develop policies to address this issue.\n\nReferences:\nBivens, J., & Mishel, L. (2013). The Pay of Corporate Executives and Financial Professionals as Evidence of Rents in Top 1 Percent Incomes. Journal of Economic Perspectives, 27(3), 57-78.',
'role': 'assistant'},
{'content': 'Now explain it to a dog', 'role': 'user'}
]
}
````"
kozistr/mqa-ko,"{""language"": [""ko""], ""license"": ""cc0-1.0"", ""task_categories"": [""question-answering""], ""tags"": [""mqa""], ""dataset_info"": {""features"": [{""name"": ""question"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 541067862, ""num_examples"": 1382378}], ""download_size"": 162865210, ""dataset_size"": 541067862}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}]}",* https://huggingface.co/datasets/clips/mqa
ChuGyouk/AI-MO-NuminaMath-CoT-Ko,"{""language"": [""en"", ""ko""], ""license"": ""cc-by-nc-4.0"", ""task_categories"": [""text-generation""], ""dataset_info"": {""features"": [{""name"": ""source"", ""dtype"": ""string""}, {""name"": ""problem"", ""dtype"": ""string""}, {""name"": ""problem_ko"", ""dtype"": ""string""}, {""name"": ""solution"", ""dtype"": ""string""}, {""name"": ""solution_ko"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2581407207, ""num_examples"": 859494}], ""download_size"": 1262990465, ""dataset_size"": 2581407207}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}]}","# Dataset Card for NuminaMath CoT Korean
🎉 **Translation finished!** If there are any errors, please open the PR. 🎉
If you use this data, please make sure to credit my source!
⚠️ There may be errors in the translation of mathematical terms (e.g. *trivial*: 사소한 ✗ / 자명한 ✓; *negative*: 부정 or 음수?).
## Dataset Description
- **Homepage:** https://projectnumina.ai
- **Repository:** https://github.com/project-numina/aimo-progress-prize
- **Paper:** https://github.com/project-numina/aimo-progress-prize/blob/main/report/numina_dataset.pdf
### Translation
The original data [AI-MO/NuminaMath-CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) is in English.
I used **solar-1-mini-translate-enko-240507** to translate it into Korean. For details on how this was done, please refer to the script below.
### Source breakdown
*The table in the original data shows a total of 859,608 rows, but in reality there are 859,594 :)*
| Source | Number of Samples |
| --- | --- |
| aops_forum | 30201 |
| amc_aime | 4072 |
| cn_k12 | 276591 |
| gsm8k | 7345 |
| math | 7478 |
| olympiads | 150581 |
| orca_math | 153334 |
| synthetic_amc | 62111 |
| synthetic_math | 167895 |
| **Total** | **859608** |
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{numina_math_datasets,
author = {Jia LI and Edward Beeching and Lewis Tunstall and Ben Lipkin and Roman Soletskyi and Shengyi Costa Huang and Kashif Rasul and Longhui Yu and Albert Jiang and Ziju Shen and Zihan Qin and Bin Dong and Li Zhou and Yann Fleureau and Guillaume Lample and Stanislas Polu},
title = {NuminaMath},
year = {2024},
publisher = {Numina},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/AI-MO/NuminaMath-CoT}}
}
```
### Python script for translation
Note that you need an Upstage API key.
Python Code
```python
from openai import OpenAI
client = OpenAI(
    api_key=""YOUR_UPSTAGE_API_KEY"",
    base_url=""https://api.upstage.ai/v1/solar""
)
from tqdm import tqdm
tqdm.pandas()
import pandas as pd
import time
import argparse
def solar_translate_apicall(source_input, previous_translate_results):
trial_count = 0
while True:
try:
stream = client.chat.completions.create(
model=""solar-1-mini-translate-enko"",
messages= previous_translate_results +
[
{
""role"": ""user"",
""content"": source_input
},
]
,
stream=False,
)
except Exception as e:
if e.status_code == 401: # Unauthorized
raise Exception(e.response)
elif e.status_code == 429: # Rate Limit
trial_count += 1
if trial_count <= 1000:
print(""Too many requests. Take a rest and retrying..."")
time.sleep(10)
continue
else:
print(""Retried 100 times, but still failed. Please check the server status."")
raise Exception(e.response)
elif e.status_code in [500, 502, 503, 504] : # Internal Server Error
trial_count += 1
if trial_count <= 1000:
print(""Internal Server Error. Retrying..."")
time.sleep(5)
continue
else:
print(""Retried 1000 times, but still failed. Please check the server status."")
raise Exception(e.response)
else:
break
return stream.choices[0].message.content
def translate_conversations(input_file, output_file):
df = pd.read_json(input_file, lines=True)
# df = df.head(2)
def translate(translate_target):
# One example to pin down the translation style; used for the first translation call
TRANSLATE_EXAMPLE = [
{
""role"": ""user"",
""content"": ""Given the functions $f(x) = \log_a(1+x)$ and $g(x) = \log_a(1-x)$, where $a>0$ and $a \neq 1$. 1. Find the domain of the function $f(x) - g(x)$. 2. Determine the parity of the function $f(x) - g(x)$. 3. Find the range of $x$ for which $f(x) - g(x) > 0$.""
},
{
""role"": ""assistant"",
""content"": ""함수 $f(x) = \log_a(1+x)$ 와 $g(x) = \log_a(1-x)$가 주어지고, 여기서 $a>0$이고 $a \neq 1$입니다. 1. 함수 $f(x) - g(x)$의 정의역을 구하세요. 2. 함수 $f(x) - g(x)$의 패리티(parity)를 결정하세요. 3. $f(x) - g(x) > 0$인 $x$의 치역을 찾으세요.""
},
]
previous_translate_results = TRANSLATE_EXAMPLE
translate_result = solar_translate_apicall(source_input=translate_target, previous_translate_results=previous_translate_results)
return translate_result
def translate_with_question(q_en, q_ko, translate_target):
# Translation result of the preceding question, used as context
TRANSLATE_EXAMPLE = [
{
""role"": ""user"",
""content"": q_en
},
{
""role"": ""assistant"",
""content"": q_ko
},
]
previous_translate_results = TRANSLATE_EXAMPLE
translate_result = solar_translate_apicall(source_input=translate_target, previous_translate_results=previous_translate_results)
return translate_result
df['problem_ko'] = df['problem'].progress_apply(translate)
df['solution_ko'] = df.progress_apply(lambda row: translate_with_question(row['problem'], row['problem_ko'], row['solution']), axis=1)
df = df[['source', 'problem', 'problem_ko', 'solution', 'solution_ko']]
# Save to jsonl
df.to_json(output_file, orient='records', lines=True, force_ascii=False)
print(""*****************************"")
print(f""!!!!!!!!!번역 완료!!!!!!!!!!!"")
print(""*****************************"")
return
if __name__ == ""__main__"":
parser = argparse.ArgumentParser(description=""Process two filenames."")
parser.add_argument('--filename1', type=str, required=True, help='The first filename.')
parser.add_argument('--filename2', type=str, required=True, help='The second filename.')
args = parser.parse_args()
print(f""번역 파일: {args.filename1}"")
translate_conversations(args.filename1, args.filename2)
# RUN: python translate_3.py --filename1 ""$input_file"" --filename2 ""$output_file""
```
"
youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo,"{""language"": [""ko""], ""dataset_info"": {""features"": [{""name"": ""system"", ""dtype"": ""string""}, {""name"": ""chosen"", ""dtype"": ""string""}, {""name"": ""rejected"", ""dtype"": ""string""}, {""name"": ""prompt"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 260168570, ""num_examples"": 72522}], ""download_size"": 128044938, ""dataset_size"": 260168570}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}]}","
The datasets listed in the table above were merged.
For datasets that include ratings, only the responses with the highest chosen scores were selected."
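The rating-based selection can be sketched as below. This is an illustration only: the field names (`rating`, `response`) are assumptions, not the actual schema of the source datasets.

```python
# Illustrative sketch of the selection rule: for a prompt with several rated
# responses, keep the highest-rated one as "chosen" and the lowest as "rejected".
# Field names ("rating", "response") are assumptions about the source schema.

def to_dpo_pair(prompt, rated_responses):
    ranked = sorted(rated_responses, key=lambda r: r["rating"], reverse=True)
    return {
        "prompt": prompt,
        "chosen": ranked[0]["response"],
        "rejected": ranked[-1]["response"],
    }

pair = to_dpo_pair(
    "What is 2 + 2?",
    [
        {"response": "4", "rating": 9.0},
        {"response": "5", "rating": 2.0},
        {"response": "Four.", "rating": 7.5},
    ],
)
print(pair["chosen"], pair["rejected"])  # → 4 5
```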
Yettiesoft/voice_medical,"{""language"": [""ko""], ""license"": ""other"", ""task_categories"": [""automatic-speech-recognition""], ""license_name"": ""aihub"", ""license_link"": ""https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=71481"", ""dataset_info"": {""features"": [{""name"": ""audio"", ""dtype"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""transcripts"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 9673667615.284883, ""num_examples"": 137}, {""name"": ""test"", ""num_bytes"": 1368683535.4534883, ""num_examples"": 18}, {""name"": ""valid"", ""num_bytes"": 1243473658.261628, ""num_examples"": 17}], ""download_size"": 12140057435, ""dataset_size"": 12285824809.0}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""test"", ""path"": ""data/test-*""}, {""split"": ""valid"", ""path"": ""data/valid-*""}]}]}",
rombodawg/Everything_Instruct_Multilingual,"{""license"": ""apache-2.0"", ""language"": [""en"", ""ru"", ""zh"", ""ko"", ""ur"", ""la"", ""ar"", ""de"", ""es"", ""fr"", ""hi"", ""it"", ""ja"", ""nl"", ""pt""], ""tags"": [""Num_Rows = 7,799,967"", ""Max_length = 8180""]}","# Everything Instruct (Multilingual Edition)
Everything you need... all in one place 💘

Everything Instruct (Multilingual Edition) is a massive Alpaca-instruct-formatted dataset covering a wide variety of topics, meant to bring LLMs to the next level in open source AI.
Note: This dataset is fully uncensored (no model trained on this dataset will refuse any request, unless otherwise aligned).
Note2: This version of the dataset supports the following languages:
- English
- Russian
- Chinese
- Korean
- Urdu
- Latin
- Arabic
- German
- Spanish
- French
- Hindi
- Italian
- Japanese
- Dutch
- Portuguese
__________________________________________________________________________________
The data in this dataset features:
Science: 12,580 rows
Social media: 18,405 rows
General Knowledge: 906,346 rows
Multi-lingual: 2,937,785 rows
Cooking: 20,763 rows
Writing: 414,646 rows
Medicine: 36,738 rows
History: 10,178 rows
Law: 90,394 rows
Role-Play: 433,205 rows
News: 124,542 rows
Coding: 2,872,975 rows
Math: 262,039 rows
Function calling: 112,960 rows
General Instruct: 998,854 rows
__________________________________________________________________________________
Here are some statistical graphics to show off the data.



I hope you finetune some amazing models that break the barrier between open and closed source with my data.
__________________________________________________________________________________
The data in this dataset comes from the following sources:
## Science:
- antiven0m/physical-reasoning-dpoScience
- LawalAfeez/science-dataset
## Social media:
- Kyle1668/AG-Tweets
- euclaise/reddit-instruct-curated
## General Knowledge:
- NousResearch/CharacterCodex_Characters
- jstet/quotes-500k_Famous_Quotes
- FronkonGames/steam-games-dataset_Video_Games
- totuta_youtube_subs_howto100M_HowTo
## Multi-lingual:
- Amani27/massive_translation_dataset
- udmurtNLP/udmurt-russian-english-labse
- grosenthal/latin_english
- msarmi9/korean-english-multitarget-ted-talks-task
- HaiderSultanArc/MT-Urdu-English_Translate
- Garsa3112/ChineseEnglishTranslationDataset
## Cooking:
- andrewsiah/se_cooking_preference_sft
- Hieu-Phamkaggle/food_recipes
## Writing:
- shahules786/PoetryFoundationData
- euclaise/writingprompts
- qwedsacf/ivypanda-essaysEssay
## Medicine:
- keivalya/MedQuad-MedicalQnADataset
- nuvocare/MSD
## History:
- ambrosfitz10k/history_data_v4
## Law:
- dzunggg/legal-qa-v1
## Role-Play:
- roleplay4/fun_CoupleRP
- Undi95andrijdavid/roleplay-conversation-sharegpt
## News:
- RealTimeData/bbc_news_alltime
## Coding: (rombodawg/code_bagel)
- layoric/tiny-codes-alpaca
- glaiveai/glaive-code-assistant-v3
- ajibawa-2023/Code-290k-ShareGPT
- chargoddard/commitpack-ft-instruct-rated
- iamtarun/code_instructions_120k_alpaca
- ise-uiuc/Magicoder-Evol-Instruct-110K
- cognitivecomputations/dolphin-coder
- nickrosh/Evol-Instruct-Code-80k-v1
- coseal/CodeUltraFeedback_binarized
- CyberNative/Code_Vulnerability_Security_DPO
## Math: (rombodawg/code_bagel)
- TIGER-Lab/MathInstruct
## Function calling: (rombodawg/code_bagel)
- glaiveai/glaive-function-calling-v2
## General Instruct: (rombodawg/OpenHermes-2.5-Uncensored)
- teknium/OpenHermes-2.5"
ChuGyouk/argilla-distilabel-math-preference-dpo-korean,"{""license"": ""apache-2.0"", ""language"": [""en"", ""ko""]}","# Dataset Information
This is a gpt-4o-2024-08-06 Korean-translated version of [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo?row=13).
I used the OpenAI Batch API with the prompt below, temperature=0.0, max_tokens=4000, seed=0. The total cost was $11.71.
Note that the 1317th example did not satisfy the format given in the instruction, so I modified it manually (for example, even the markup tags were translated, e.g. as <의문>).
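For reference, the Batch API consumes a JSONL input file in which each line pairs a `custom_id` with a standard chat-completions request body. A minimal sketch of building one such line; the model name and sampling parameters are those stated above, while the `custom_id` scheme is an assumption:

```python
import json

# Build one line of an OpenAI Batch API input file (JSONL). Each line carries
# a custom_id plus a normal /v1/chat/completions request body.

def batch_line(idx, translation_prompt):
    return {
        "custom_id": f"row-{idx}",          # assumed id scheme, not from the card
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-2024-08-06",
            "temperature": 0.0,
            "max_tokens": 4000,
            "seed": 0,
            "messages": [{"role": "user", "content": translation_prompt}],
        },
    }

line = json.dumps(batch_line(0, "You are tasked with translating English text into Korean..."),
                  ensure_ascii=False)
```

One line like this is written per translation request, the file is uploaded, and a batch job is created against it.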
## Prompt
>You are tasked with translating English text into Korean for Direct Preference Optimization training data. This data consists of a question, a chosen response, and a rejected response. Your goal is to accurately translate the content while preserving the meaning and structure of the original text.
>
>Here is the content to translate:
>
>Question:
>\
>{QUESTION}
>\
>
>Chosen Response:
>\
>{CHOSEN_RESPONSE}
>\
>
>Rejected Response:
>\
>{REJECTED_RESPONSE}
>\
>
>Please follow these steps:
>
>1. Translate the question from English to Korean.
>2. Translate the chosen response from English to Korean.
>3. Translate the rejected response from English to Korean.
>
>Follow these guidelines when translating:
>
>1. Translate the text from English to Korean accurately, maintaining the original meaning and tone.
>2. Do not translate mathematical expressions or equations. Leave them as they are in the original text.
>3. Preserve any formatting, such as white spaces, line breaks or bullet points, in your translation.
>4. Maintain the distinction between the chosen response and the rejected response in your translation. This difference should be clear in the Korean version as well.
>
>Provide your translation in the following format:
>
>\
>[Insert Korean translation of the question here]
>\
>
>\
>[Insert Korean translation of the chosen response here]
>\
>
>\
>[Insert Korean translation of the rejected response here]
>\
>
>Ensure that the distinction between the chosen and rejected responses remains clear in your translation. Also, Ensure that the difference between the chosen response and the rejected response remains clear in the Korean translation. The nuances that make one response preferred over the other should be preserved. If there are any culturally specific references or idioms that don't have a direct Korean equivalent, provide the closest appropriate translation and add a brief explanation in parentheses if necessary."
copenlu/tydiqa_copenlu,"{""pretty_name"": ""TyDi QA"", ""annotations_creators"": [""crowdsourced""], ""language_creators"": [""crowdsourced""], ""language"": [""ar"", ""bn"", ""en"", ""fi"", ""id"", ""ja"", ""ko"", ""ru"", ""sw"", ""te"", ""th""], ""license"": [""apache-2.0""], ""multilinguality"": [""multilingual""], ""size_categories"": [""unknown""], ""source_datasets"": [""extended|wikipedia""], ""task_categories"": [""question-answering""], ""task_ids"": [""extractive-qa""], ""paperswithcode_id"": ""tydi-qa""}","# Dataset Card for ""tydiqa""
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3726.74 MB
- **Size of the generated dataset:** 5812.92 MB
- **Total amount of disk used:** 9539.67 MB
### Dataset Summary
TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
information-seeking task and avoid priming effects, questions are written by people who want to know the answer but
don’t know the answer yet (unlike SQuAD and its descendants), and the data is collected directly in each language without
the use of translation (unlike MLQA and XQuAD).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### primary_task
- **Size of downloaded dataset files:** 1863.37 MB
- **Size of the generated dataset:** 5757.59 MB
- **Total amount of disk used:** 7620.96 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
""annotations"": {
""minimal_answers_end_byte"": [-1, -1, -1],
""minimal_answers_start_byte"": [-1, -1, -1],
""passage_answer_candidate_index"": [-1, -1, -1],
""yes_no_answer"": [""NONE"", ""NONE"", ""NONE""]
},
""document_plaintext"": ""\""\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร..."",
""document_title"": ""หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร"",
""document_url"": ""\""https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%..."",
""language"": ""thai"",
""passage_answer_candidates"": ""{\""plaintext_end_byte\"": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229..."",
""question_text"": ""\""หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\""...""
}
```
#### secondary_task
- **Size of downloaded dataset files:** 1863.37 MB
- **Size of the generated dataset:** 55.34 MB
- **Total amount of disk used:** 1918.71 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
""answers"": {
""answer_start"": [394],
""text"": [""بطولتين""]
},
""context"": ""\""أقيمت البطولة 21 مرة، شارك في النهائيات 78 دولة، وعدد الفرق التي فازت بالبطولة حتى الآن 8 فرق، ويعد المنتخب البرازيلي الأكثر تت..."",
""id"": ""arabic-2387335860751143628-1"",
""question"": ""\""كم عدد مرات فوز الأوروغواي ببطولة كاس العالم لكرو القدم؟\""..."",
""title"": ""قائمة نهائيات كأس العالم""
}
```
### Data Fields
The data fields are the same among all splits.
#### primary_task
- `passage_answer_candidates`: a dictionary feature containing:
- `plaintext_start_byte`: a `int32` feature.
- `plaintext_end_byte`: a `int32` feature.
- `question_text`: a `string` feature.
- `document_title`: a `string` feature.
- `language`: a `string` feature.
- `annotations`: a dictionary feature containing:
- `passage_answer_candidate_index`: a `int32` feature.
- `minimal_answers_start_byte`: a `int32` feature.
- `minimal_answers_end_byte`: a `int32` feature.
- `yes_no_answer`: a `string` feature.
- `document_plaintext`: a `string` feature.
- `document_url`: a `string` feature.
#### secondary_task
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
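Assuming `answer_start` is a character offset into `context`, as in SQuAD-style data, the gold span can be recovered by slicing. A quick, purely illustrative sanity check:

```python
# Sanity-check helper (illustrative): recover the gold span from a
# secondary_task example, assuming answer_start is a character offset
# into context, as in SQuAD-style data.

def gold_span_matches(example):
    start = example["answers"]["answer_start"][0]
    text = example["answers"]["text"][0]
    return example["context"][start:start + len(text)] == text

example = {
    "context": "The answer is forty-two, according to the book.",
    "answers": {"answer_start": [14], "text": ["forty-two"]},
}
print(gold_span_matches(example))  # → True
```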
### Data Splits
| name | train | validation |
| -------------- | -----: | ---------: |
| primary_task | 166916 | 18670 |
| secondary_task | 49881 | 5077 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{tydiqa,
title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
year = {2020},
journal = {Transactions of the Association for Computational Linguistics}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset."
KETI-AIR/kor_amazon_polarity,"{""language"": [""ko""], ""license"": ""cc0-1.0"", ""size_categories"": [""1M
| Category | Image | K-DTCBench |
| :---: | :---: | :--- |
| document | ![]() | question: 보고서의 주요 내용이 아닌 것은 무엇인가요?<br>A: 안전 인프라 확충<br>B: 재난 및 사고 예방 체계 구축<br>C: 시민 안전 교육 강화<br>D: 긴급 대응 시스템 개선 |
| table | ![]() | question: 인프라 구축 항목의 점수는 몇 점인가요?<br>A: 4<br>B: 6<br>C: 8<br>D: 10 |
| chart | ![]() | question: 직장인들이 퇴근 후 두 번째로 선호하는 활동은 무엇인가요?<br>A: 운동<br>B: 여가활동<br>C: 자기개발<br>D: 휴식 |
## Inference Prompt
```
{question}
Options: A: {A}, B: {B}, C: {C}, D: {D}
주어진 선택지 중 해당 옵션의 문자로 직접 답하세요.
```
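The prompt above can be filled in with a small formatter; a sketch (the question and options are taken from the table example shown earlier on this card):

```python
# Illustrative formatter for the K-DTCBench inference prompt shown above.

PROMPT_TEMPLATE = (
    "{question}\n"
    "Options: A: {A}, B: {B}, C: {C}, D: {D}\n"
    "주어진 선택지 중 해당 옵션의 문자로 직접 답하세요."
)

def build_prompt(question, options):
    return PROMPT_TEMPLATE.format(question=question, **options)

prompt = build_prompt(
    "인프라 구축 항목의 점수는 몇 점인가요?",
    {"A": "4", "B": "6", "C": "8", "D": "10"},
)
print(prompt.splitlines()[1])  # → Options: A: 4, B: 6, C: 8, D: 10
```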
## Results
Below are the evaluation results of various vision-language models, including [VARCO-VISION-14B](https://huggingface.co/NCSOFT/VARCO-VISION-14B) on K-DTCBench.
| | VARCO-VISION-14B | Pangea-7B | Pixtral-12B | Molmo-7B-D | Qwen2-VL-7B-Instruct | LLaVA-One-Vision-7B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| K-DTCBench | **84.58** | 48.33 | 27.50 | 45.83 | 75.00 | 52.91 |
## Citation
If you use K-DTCBench in your research, please cite the following:
```bibtex
@misc{ju2024varcovisionexpandingfrontierskorean,
title={VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models},
author={Jeongho Ju and Daeyoung Kim and SunYoung Park and Youngjune Kim},
year={2024},
eprint={2411.19103},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2411.19103},
}
```"
zzunyang/dpo_data,{},"---
task_categories:
- conversational
language:
- ko
tags:
- legal
size_categories:
- n<1K
---"
psyche/bool_sentence,"{""annotations_creators"": [""machine-generated""], ""language"": [""ko""], ""language_creators"": [""found""], ""multilinguality"": [""monolingual""], ""pretty_name"": ""psyche/bool_sentence"", ""size_categories"": [""100K |
| image | Image object loaded with PIL. |
| captions_en | List of the original English captions from MS COCO. |
| captions_ko | List of machine translations of `captions_en`. |
| caption_ko | A sentence that consolidates the contents of `captions_ko`, generated with `GPT-4o mini`. |"
Trelis/openassistant-deepseek-coder,"{""license"": ""apache-2.0"", ""language"": [""en"", ""es"", ""ru"", ""de"", ""pl"", ""th"", ""vi"", ""sv"", ""bn"", ""da"", ""he"", ""it"", ""fa"", ""sk"", ""id"", ""nb"", ""el"", ""nl"", ""hu"", ""eu"", ""zh"", ""eo"", ""ja"", ""ca"", ""cs"", ""bg"", ""fi"", ""pt"", ""tr"", ""ro"", ""ar"", ""uk"", ""gl"", ""fr"", ""ko""], ""tags"": [""human-feedback"", ""deepseek coder""], ""size_categories"": [""1K
Languages with under 1000 messages
- Vietnamese: 952
- Basque: 947
- Polish: 886
- Hungarian: 811
- Arabic: 666
- Dutch: 628
- Swedish: 512
- Turkish: 454
- Finnish: 386
- Czech: 372
- Danish: 358
- Galician: 339
- Hebrew: 255
- Romanian: 200
- Norwegian Bokmål: 133
- Indonesian: 115
- Bulgarian: 95
- Bengali: 82
- Persian: 72
- Greek: 66
- Esperanto: 59
- Slovak: 19
## Contact
- Discord [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai)"
gbenson/interesting-dom-snapshots,"{""license"": ""cc0-1.0"", ""dataset_info"": {""features"": [{""name"": ""image"", ""dtype"": ""image""}, {""name"": ""requested_url"", ""dtype"": ""string""}, {""name"": ""displayed_url"", ""dtype"": ""string""}, {""name"": ""num_frames"", ""dtype"": ""int64""}, {""name"": ""body_elements"", ""sequence"": ""string""}, {""name"": ""dom_snapshot"", ""struct"": [{""name"": ""documents"", ""list"": [{""name"": ""documentURL"", ""dtype"": ""int64""}, {""name"": ""title"", ""dtype"": ""int64""}, {""name"": ""baseURL"", ""dtype"": ""int64""}, {""name"": ""contentLanguage"", ""dtype"": ""int64""}, {""name"": ""encodingName"", ""dtype"": ""int64""}, {""name"": ""publicId"", ""dtype"": ""int64""}, {""name"": ""systemId"", ""dtype"": ""int64""}, {""name"": ""frameId"", ""dtype"": ""int64""}, {""name"": ""nodes"", ""struct"": [{""name"": ""parentIndex"", ""sequence"": ""int64""}, {""name"": ""nodeType"", ""sequence"": ""int64""}, {""name"": ""shadowRootType"", ""struct"": [{""name"": ""index"", ""sequence"": ""int64""}, {""name"": ""value"", ""sequence"": ""int64""}]}, {""name"": ""nodeName"", ""sequence"": ""int64""}, {""name"": ""nodeValue"", ""sequence"": ""int64""}, {""name"": ""backendNodeId"", ""sequence"": ""int64""}, {""name"": ""attributes"", ""sequence"": {""sequence"": ""int64""}}, {""name"": ""textValue"", ""struct"": [{""name"": ""index"", ""sequence"": ""int64""}, {""name"": ""value"", ""sequence"": ""int64""}]}, {""name"": ""inputValue"", ""struct"": [{""name"": ""index"", ""sequence"": ""int64""}, {""name"": ""value"", ""sequence"": ""int64""}]}, {""name"": ""inputChecked"", ""struct"": [{""name"": ""index"", ""sequence"": ""int64""}]}, {""name"": ""optionSelected"", ""struct"": [{""name"": ""index"", ""sequence"": ""int64""}]}, {""name"": ""contentDocumentIndex"", ""struct"": [{""name"": ""index"", ""sequence"": ""int64""}, {""name"": ""value"", ""sequence"": ""int64""}]}, {""name"": ""pseudoType"", ""struct"": [{""name"": 
""index"", ""sequence"": ""int64""}, {""name"": ""value"", ""sequence"": ""int64""}]}, {""name"": ""pseudoIdentifier"", ""struct"": [{""name"": ""index"", ""sequence"": ""null""}, {""name"": ""value"", ""sequence"": ""null""}]}, {""name"": ""isClickable"", ""struct"": [{""name"": ""index"", ""sequence"": ""int64""}]}, {""name"": ""currentSourceURL"", ""struct"": [{""name"": ""index"", ""sequence"": ""int64""}, {""name"": ""value"", ""sequence"": ""int64""}]}, {""name"": ""originURL"", ""struct"": [{""name"": ""index"", ""sequence"": ""null""}, {""name"": ""value"", ""sequence"": ""null""}]}]}, {""name"": ""layout"", ""struct"": [{""name"": ""nodeIndex"", ""sequence"": ""int64""}, {""name"": ""styles"", ""sequence"": {""sequence"": ""int64""}}, {""name"": ""bounds"", ""sequence"": {""sequence"": ""float64""}}, {""name"": ""text"", ""sequence"": ""int64""}, {""name"": ""stackingContexts"", ""struct"": [{""name"": ""index"", ""sequence"": ""int64""}]}, {""name"": ""paintOrders"", ""sequence"": ""int64""}]}, {""name"": ""textBoxes"", ""struct"": [{""name"": ""layoutIndex"", ""sequence"": ""int64""}, {""name"": ""bounds"", ""sequence"": {""sequence"": ""float64""}}, {""name"": ""start"", ""sequence"": ""int64""}, {""name"": ""length"", ""sequence"": ""int64""}]}, {""name"": ""scrollOffsetX"", ""dtype"": ""int64""}, {""name"": ""scrollOffsetY"", ""dtype"": ""int64""}, {""name"": ""contentWidth"", ""dtype"": ""int64""}, {""name"": ""contentHeight"", ""dtype"": ""int64""}]}, {""name"": ""strings"", ""sequence"": ""string""}]}, {""name"": ""capture_options"", ""struct"": [{""name"": ""computedStyles"", ""sequence"": ""string""}, {""name"": ""includePaintOrder"", ""dtype"": ""bool""}]}, {""name"": ""source_index"", ""dtype"": ""int64""}, {""name"": ""source_key_name"", ""dtype"": ""string""}, {""name"": ""source_image_ssim"", ""dtype"": ""float64""}, {""name"": ""detected_language"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 
176072783.06768078, ""num_examples"": 295}], ""download_size"": 46652388, ""dataset_size"": 176072783.06768078}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""language"": [""en"", ""zh"", ""nl"", ""cs"", ""ko""], ""pretty_name"": ""Interesting DOM snapshots"", ""size_categories"": [""n<1K""], ""source_datasets"": [""gbenson/webui-dom-snapshots""]}","# Dataset Card for Interesting DOM snapshots
A small split of [gbenson/webui-dom-snapshots](https://huggingface.co/datasets/gbenson/webui-dom-snapshots).
- **Curated by:** [Gary Benson](https://gbenson.net/)
- **Languages:** Mostly English, some Chinese, Dutch, Czech and Korean
- **License:** [CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/)
## Uses
I'm using it to develop a [DOM-aware tokenizer](https://github.com/gbenson/dom-tokenizers) for HTML.
## Bias, Risks, and Limitations
This isn't a representative sample of the source dataset; it's a collection of edge cases I flagged for investigation."
jaeyong2/Ko-emb-PreView,"{""dataset_info"": {""features"": [{""name"": ""context"", ""dtype"": ""string""}, {""name"": ""Title"", ""dtype"": ""string""}, {""name"": ""Fake Title"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1034566733.5263829, ""num_examples"": 223849}, {""name"": ""test"", ""num_bytes"": 114955967.47361714, ""num_examples"": 24873}], ""download_size"": 496958568, ""dataset_size"": 1149522701}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""test"", ""path"": ""data/test-*""}]}], ""license"": ""apache-2.0"", ""language"": [""ko""]}","### Development Process
1. Source datasets: [daje/ko_wiki](https://huggingface.co/datasets/daje/ko_wiki) and [maywell/korean_textbooks](https://huggingface.co/datasets/maywell/korean_textbooks)
2. We used the [Qwen/Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) model to generate answers with chain-of-thought (CoT) prompting.
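Given the `context` / `Title` / `Fake Title` schema, one plausible way to consume a row is as an (anchor, positive, negative) triplet for contrastive embedding training. A minimal sketch — the triplet framing and the sample values are illustrative assumptions, not something this card specifies:

```python
# A hypothetical row mirroring the schema above (values invented for illustration).
row = {
    "context": "서울은 대한민국의 수도이다.",
    "Title": "서울",
    "Fake Title": "부산",
}

def to_triplet(row):
    """Map one row to an (anchor, positive, negative) triplet:
    the context is the anchor, the real title the positive, and
    the fake title the hard negative."""
    return row["context"], row["Title"], row["Fake Title"]

anchor, positive, negative = to_triplet(row)
```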
## License
- Qwen/Qwen2.5-72B-Instruct : https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
- maywell/korean_textbooks : https://choosealicense.com/licenses/apache-2.0/
## Acknowledgement
This research is supported by **TPU Research Cloud program**."
Gabrui/multilingual_TinyStories,"{""license"": ""cdla-sharing-1.0"", ""dataset_info"": [{""config_name"": ""arabic"", ""features"": [{""name"": ""story"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2050273337.3987067, ""num_examples"": 1712361}, {""name"": ""test"", ""num_bytes"": 101641945.60129331, ""num_examples"": 84890}], ""download_size"": 1037665708, ""dataset_size"": 2151915283}, {""config_name"": ""azerbaijani"", ""features"": [{""name"": ""story"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1756408398.6204288, ""num_examples"": 1715809}, {""name"": ""test"", ""num_bytes"": 87002053.3795713, ""num_examples"": 84991}], ""download_size"": 960349473, ""dataset_size"": 1843410452}, {""config_name"": ""chinese"", ""features"": [{""name"": ""story"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2051351450.1030862, ""num_examples"": 2879487}, {""name"": ""test"", ""num_bytes"": 82156301.89691366, ""num_examples"": 115323}], ""download_size"": 1230853607, ""dataset_size"": 2133507752}, {""config_name"": ""english"", ""features"": [{""name"": ""story"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2130468095.0648887, ""num_examples"": 2635469}, {""name"": ""test"", ""num_bytes"": 88476700.93511136, ""num_examples"": 109449}], ""download_size"": 1152374780, ""dataset_size"": 2218944796}, {""config_name"": ""farsi"", ""features"": [{""name"": ""story"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 180685727.81538463, ""num_examples"": 132568}, {""name"": ""test"", ""num_bytes"": 26267088.184615385, ""num_examples"": 19272}], ""download_size"": 90266765, ""dataset_size"": 206952816}, {""config_name"": ""german"", ""features"": [{""name"": ""story"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 533611365.478921, ""num_examples"": 282059}, {""name"": ""test"", ""num_bytes"": 
56136659.521079004, ""num_examples"": 29673}], ""download_size"": 291925721, ""dataset_size"": 589748025}, {""config_name"": ""hebrew"", ""features"": [{""name"": ""story"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 21481769.852342676, ""num_examples"": 20686}, {""name"": ""test"", ""num_bytes"": 7198667.147657325, ""num_examples"": 6932}], ""download_size"": 13506171, ""dataset_size"": 28680437}, {""config_name"": ""hindi"", ""features"": [{""name"": ""story"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 92442873.73794927, ""num_examples"": 40027}, {""name"": ""test"", ""num_bytes"": 22834154.262050726, ""num_examples"": 9887}], ""download_size"": 39719056, ""dataset_size"": 115277028}, {""config_name"": ""korean"", ""features"": [{""name"": ""story"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2969638578.050348, ""num_examples"": 2632457}, {""name"": ""test"", ""num_bytes"": 123384434.94965227, ""num_examples"": 109375}], ""download_size"": 1498460065, ""dataset_size"": 3093023013}, {""config_name"": ""spanish"", ""features"": [{""name"": ""story"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2911961182.6516333, ""num_examples"": 4058317}, {""name"": ""test"", ""num_bytes"": 101357465.3483666, ""num_examples"": 141259}], ""download_size"": 1509916798, ""dataset_size"": 3013318648}, {""config_name"": ""turkish"", ""features"": [{""name"": ""story"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1769035666.3545604, ""num_examples"": 1810342}, {""name"": ""test"", ""num_bytes"": 85714595.64543971, ""num_examples"": 87716}], ""download_size"": 998323956, ""dataset_size"": 1854750262}, {""config_name"": ""vietnamese"", ""features"": [{""name"": ""story"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2667052064.602918, ""num_examples"": 2493325}, {""name"": ""test"", ""num_bytes"": 
113306591.3970817, ""num_examples"": 105926}], ""download_size"": 1354090093, ""dataset_size"": 2780358656}], ""configs"": [{""config_name"": ""arabic"", ""data_files"": [{""split"": ""train"", ""path"": ""arabic/train-*""}, {""split"": ""test"", ""path"": ""arabic/test-*""}]}, {""config_name"": ""azerbaijani"", ""data_files"": [{""split"": ""train"", ""path"": ""azerbaijani/train-*""}, {""split"": ""test"", ""path"": ""azerbaijani/test-*""}]}, {""config_name"": ""chinese"", ""data_files"": [{""split"": ""train"", ""path"": ""chinese/train-*""}, {""split"": ""test"", ""path"": ""chinese/test-*""}]}, {""config_name"": ""english"", ""data_files"": [{""split"": ""train"", ""path"": ""english/train-*""}, {""split"": ""test"", ""path"": ""english/test-*""}]}, {""config_name"": ""farsi"", ""data_files"": [{""split"": ""train"", ""path"": ""farsi/train-*""}, {""split"": ""test"", ""path"": ""farsi/test-*""}]}, {""config_name"": ""german"", ""data_files"": [{""split"": ""train"", ""path"": ""german/train-*""}, {""split"": ""test"", ""path"": ""german/test-*""}]}, {""config_name"": ""hebrew"", ""data_files"": [{""split"": ""train"", ""path"": ""hebrew/train-*""}, {""split"": ""test"", ""path"": ""hebrew/test-*""}]}, {""config_name"": ""hindi"", ""data_files"": [{""split"": ""train"", ""path"": ""hindi/train-*""}, {""split"": ""test"", ""path"": ""hindi/test-*""}]}, {""config_name"": ""korean"", ""data_files"": [{""split"": ""train"", ""path"": ""korean/train-*""}, {""split"": ""test"", ""path"": ""korean/test-*""}]}, {""config_name"": ""spanish"", ""data_files"": [{""split"": ""train"", ""path"": ""spanish/train-*""}, {""split"": ""test"", ""path"": ""spanish/test-*""}]}, {""config_name"": ""turkish"", ""data_files"": [{""split"": ""train"", ""path"": ""turkish/train-*""}, {""split"": ""test"", ""path"": ""turkish/test-*""}]}, {""config_name"": ""vietnamese"", ""data_files"": [{""split"": ""train"", ""path"": ""vietnamese/train-*""}, {""split"": ""test"", ""path"": 
""vietnamese/test-*""}]}], ""task_categories"": [""text-generation""], ""language"": [""ar"", ""az"", ""zh"", ""en"", ""fa"", ""de"", ""he"", ""hi"", ""ko"", ""es"", ""tr"", ""vi""], ""pretty_name"": ""Multilingual TinyStories"", ""size_categories"": [""10M
| Image | SEED-Bench | K-SEED |
| :---: | :---: | :---: |
| ![]() | question: How many towels are in the image?<br>choice_a: One<br>choice_b: Two<br>choice_c: Three<br>choice_d: Four | question: 이미지에 수건이 몇 개 있나요?<br>choice_a: 한 개<br>choice_b: 두 개<br>choice_c: 세 개<br>choice_d: 네 개 |
## Inference Prompt
```
{question}
A. {choice_a}
B. {choice_b}
C. {choice_c}
D. {choice_d}
주어진 선택지 중 해당 옵션의 문자로 직접 답하세요.
```
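As a sketch, the prompt above can be assembled from a dataset row like so (the `question`/`choice_*` field names follow the example shown earlier; treat this as an illustration rather than official evaluation code):

```python
def build_prompt(example: dict) -> str:
    """Format a K-SEED example into the inference prompt shown above.
    The final Korean line instructs the model to answer directly with
    the letter of the chosen option."""
    return (
        f"{example['question']}\n"
        f"A. {example['choice_a']}\n"
        f"B. {example['choice_b']}\n"
        f"C. {example['choice_c']}\n"
        f"D. {example['choice_d']}\n"
        "주어진 선택지 중 해당 옵션의 문자로 직접 답하세요."
    )

example = {
    "question": "이미지에 수건이 몇 개 있나요?",
    "choice_a": "한 개",
    "choice_b": "두 개",
    "choice_c": "세 개",
    "choice_d": "네 개",
}
prompt = build_prompt(example)
```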
## Results
Below are the evaluation results of various vision-language models, including [VARCO-VISION-14B](https://huggingface.co/NCSOFT/VARCO-VISION-14B) on K-SEED.
| | VARCO-VISION-14B | Pangea-7B | Pixtral-12B | Molmo-7B-D | Qwen2-VL-7B-Instruct | LLaVA-One-Vision-7B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| K-SEED | **75.39** | 73.34 | 46.44 | 69.53 | 74.08 | 73.21 |
## References
[1] Bohao Li, Yuying Ge, Yixiao Ge, Guangzhi Wang, Rui Wang, Ruimao Zhang, and Ying Shan. Seed-bench: Benchmarking multimodal large language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13299–13308, 2024.
## Citation
If you use K-SEED in your research, please cite the following:
```bibtex
@misc{ju2024varcovisionexpandingfrontierskorean,
title={VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models},
author={Jeongho Ju and Daeyoung Kim and SunYoung Park and Youngjune Kim},
year={2024},
eprint={2411.19103},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2411.19103},
}
```"
Fumika/Wikinews-multilingual,"{""license"": ""cc-by-2.5"", ""language"": [""en"", ""es"", ""fr"", ""de"", ""pt"", ""pl"", ""it"", ""zh"", ""ru"", ""ja"", ""nl"", ""sv"", ""ta"", ""sr"", ""cs"", ""ca"", ""he"", ""tr"", ""fi"", ""eo"", ""el"", ""hu"", ""uk"", ""no"", ""ar"", ""fa"", ""ko"", ""ro"", ""bg"", ""bs"", ""li"", ""sq"", ""th""], ""task_categories"": [""text-classification"", ""feature-extraction""]}","# Wikinews - weakly aligned multilingual parallel sentence datasets
This dataset contains 15,200 multilingual WikiNews articles in 33 languages.
Of the 15,200 articles, 9,960 are non-English and 5,240 are English. Every non-English article is linked to one of the 5,240 English articles, and linked articles describe the same event.
The non-English languages are: Spanish, French, German, Portuguese, Polish, Italian, Chinese, Russian, Japanese, Dutch, Swedish, Tamil, Serbian, Czech, Catalan, Hebrew, Turkish, Finnish, Esperanto, Greek, Hungarian, Ukrainian, Norwegian, Arabic, Persian, Korean, Romanian, Bulgarian, Bosnian, Limburgish, Albanian, Thai.
## Dataset Details
### Example raw datasets
| | title | pageid | categories | lang | url | text | date | type |
|---|-------------------------------------------------------------|--------|----------------------------------------------------|------|-----------------------------------------------------------------------------------------|-----------------------------------------------------------|-----------------------------|-----------------|
| 0 | 'Bloody Sunday Inquiry' publishes report into ... | 191513 | [Northern Ireland, Martin McGuinness, Politics...] | en | https://en.wikinews.org/wiki/%27Bloody_Sunday_... | [On Tuesday, the ""Bloody Sunday Inquiry"" publi... | 2010-06-17 | title |
| 1 | 1972 ”இரத்த ஞாயிறு” படுகொலைகள் தொடர்பில் பிரித... | 191513 | [Northern Ireland, Martin McGuinness, Politics...] | ta | https://ta.wikinews.org/wiki/1972_%E2%80%9D%E0... | [வடக்கு அயர்லாந்தில் 38 ஆண்டுகளுக்கு முன்னர் இ... | வியாழன், சூன் 17, 2010 | interlang link |
| 2 | 'Very serious': Chinese government releases co... | 232226 | [China, December 30, 2010, Politics and confli...] | en | https://en.wikinews.org/wiki/%27Very_serious%2... | [A report by the Chinese government states cor... | 2010-12-30 | title |
| 3 | Čína připustila, že tamní korupce je vážný pro... | 232226 | [China, December 30, 2010, Politics and confli...] | cs | https://cs.wikinews.org/wiki/%C4%8C%C3%ADna_p%... | [Zpráva čínské vlády připouští, že korupce v z... | Středa 29. prosince 2010 | interlang link |
| 4 | China admite que la corrupción en el país es '... | 232226 | [China, December 30, 2010, Politics and confli...] | es | https://es.wikinews.org/wiki/China_admite_que_... | [29 de diciembre de 2010Beijing, China —, Un r... | None | interlang link |
### Variables
Each data point includes following variables:
| Field Name | Description |
|-----------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
| title | WikiNews article title |
| pageid | pageid defined by the English WikiNews article. Rows sharing a pageid are linked together and describe the same news event. |
| categories | list of topics defined by WikiNews. All pages have at least one topic from [Crime and law, Culture and entertainment, Disasters and accidents, Economy and business, Education, Environment, Health, Obituaries, Politics and conflicts, Science and technology, Sports, Wackynews, Weather] |
| text | content of the article. Some foreign pages have news titles but no content. For those, text is left empty. |
| lang | languages of the article (WP code, check [here](https://en.wikipedia.org/wiki/List_of_Wikipedias#Lists) for lists ) |
| url | articles' URL |
| date | publication date in YYYY-MM-DD format for English pages. Dates on non-English pages are left as-is; to get a YYYY-MM-DD date, look up the English page with the same pageid. |
| type | `title` for an English page, `interlang link` for a non-English page linked to an English page via `pageid` |
### Dataset Description
- **Curated by:** Fumika Isono, Primer AI
- **Language(s) (NLP):** en, es, fr, de, pt, pl, it, zh, ru, ja, nl, sv, ta, sr, cs, ca, he, tr, fi, eo, el, hu, uk, 'no', ar, fa, ko, ro, bg, bs, li, sq, th
- **License:** cc-by-2.5
### Dataset Sources
- **Repository:** [Github](https://github.com/PrimerAI/primer-research/tree/main)
- **Paper:** ArXiv [Linear Cross-Lingual Mapping of Sentence Embeddings](https://arxiv.org/abs/2305.14256)
## Uses
### Weakly aligned multilingual parallel sentence datasets
Weakly aligned multilingual parallel sentence datasets can be constructed by comparing the titles and/or contents of the WikiNews pages that are linked to the same English WikiNews page (in this dataset, they share the same pageid).
The following example retrieves the titles sharing one pageid; these five news titles all describe the same incident.
| News title | Language | type |
|---------------------------------------------------------------|----------|-------------------|
| Bomb blast in Delhi kills 12, injures 62 | English | title |
| چندین کشته بر اثر انفجار بمب در مقابل دادگاه عالی هند | Farsi | title|
| 9 נהרגו בפיגוע מחוץ לבית המשפט העליון של הודו | Hebrew | title|
| У Индији 11 мртвих, 64 повређених | Serbian | title|
| தில்லி உயர்நீதிமன்றத்தில் குண்டு வெடிப்பு, 10 பேர் உயிரிழப்பு | Tamil | title|
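As a sketch, aligned title pairs like the ones above can be pulled out by grouping rows on `pageid` (column names as in the tables above; the rows and grouping logic below are illustrative, not shipped code):

```python
from collections import defaultdict

# Hypothetical rows mirroring the schema above (titles shortened).
rows = [
    {"pageid": 191513, "lang": "en", "title": "'Bloody Sunday Inquiry' publishes report"},
    {"pageid": 191513, "lang": "ta", "title": "Tamil title for the same event"},
    {"pageid": 232226, "lang": "en", "title": "Chinese government releases corruption report"},
    {"pageid": 232226, "lang": "cs", "title": "Czech title for the same event"},
]

# Group titles describing the same news event by their shared English pageid.
by_event = defaultdict(dict)
for row in rows:
    by_event[row["pageid"]][row["lang"]] = row["title"]

# Weakly aligned (English, non-English) title pairs.
pairs = [
    (langs["en"], title)
    for langs in by_event.values()
    for lang, title in langs.items()
    if lang != "en"
]
```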
### Direct Use
- Multilingual embeddings
- Language comparison
### Source Data
[Wikinews](https://www.wikinews.org/)
## Dataset Card Authors
Fumika Isono"
FrancophonIA/multilingual-hatespeech-dataset,"{""language"": [""ar"", ""fr"", ""en"", ""zh"", ""ka"", ""de"", ""hi"", ""id"", ""ga"", ""it"", ""ko"", ""mn"", ""fa"", ""pt"", ""ru"", ""es"", ""tr"", ""ur""], ""multilinguality"": [""multilingual""], ""configs"": [{""config_name"": ""Multilingual_train"", ""data_files"": [{""split"": ""train"", ""path"": ""MultiLanguageTrainDataset.csv""}]}, {""config_name"": ""Arabic_test"", ""data_files"": [{""split"": ""test"", ""path"": ""Arabic_test.csv""}]}, {""config_name"": ""Chinese_test"", ""data_files"": [{""split"": ""test"", ""path"": ""Chinese_test.csv""}]}, {""config_name"": ""English_test"", ""data_files"": [{""split"": ""test"", ""path"": ""English_test.csv""}]}, {""config_name"": ""French_test"", ""data_files"": [{""split"": ""test"", ""path"": ""French_test.csv""}]}, {""config_name"": ""Georgian_test"", ""data_files"": [{""split"": ""test"", ""path"": ""GeorgianTranslatedHateSpeech.csv""}]}, {""config_name"": ""German_test"", ""data_files"": [{""split"": ""test"", ""path"": ""German_test.csv""}]}, {""config_name"": ""Hindi_test"", ""data_files"": [{""split"": ""test"", ""path"": ""HindiTranslatedHateSpeech.csv""}]}, {""config_name"": ""Indonesian_test"", ""data_files"": [{""split"": ""test"", ""path"": ""Indonesian_test.csv""}]}, {""config_name"": ""Irish_test"", ""data_files"": [{""split"": ""test"", ""path"": ""IrishTranslatedHateSpeech.csv""}]}, {""config_name"": ""Italian_test"", ""data_files"": [{""split"": ""test"", ""path"": ""Italian_test.csv""}]}, {""config_name"": ""Korean_test"", ""data_files"": [{""split"": ""test"", ""path"": ""Korean_test.csv""}]}, {""config_name"": ""Mangolian_test"", ""data_files"": [{""split"": ""test"", ""path"": ""MangolianTranslatedHateSpeech.csv""}]}, {""config_name"": ""Persian_test"", ""data_files"": [{""split"": ""test"", ""path"": ""PersianTranslatedHateSpeech.csv""}]}, {""config_name"": ""Porto_test"", ""data_files"": [{""split"": ""test"", ""path"": ""Porto_test.csv""}]}, 
{""config_name"": ""Rurdu_test"", ""data_files"": [{""split"": ""test"", ""path"": ""Rurdu_test.csv""}]}, {""config_name"": ""Russian_test"", ""data_files"": [{""split"": ""test"", ""path"": ""Russian_test.csv""}]}, {""config_name"": ""Spain_test"", ""data_files"": [{""split"": ""test"", ""path"": ""Spain_test.csv""}]}, {""config_name"": ""Turkish_test"", ""data_files"": [{""split"": ""test"", ""path"": ""Turkish_test.csv""}]}, {""config_name"": ""Urdu_test"", ""data_files"": [{""split"": ""test"", ""path"": ""UrduTranslatedHateSpeech.csv""}]}]}","> [!NOTE]
> Dataset origin: https://www.kaggle.com/datasets/wajidhassanmoosa/multilingual-hatespeech-dataset
## Description
This dataset contains hate-speech texts with binary labels, where 0 represents non-hate and 1 represents hate. The texts come from several languages, and each must be identified as its corresponding language. The languages in the dataset, with their numeric codes, are:
(1 Arabic) (2 English) (3 Chinese) (4 French) (5 German) (6 Russian) (7 Turkish) (8 Roman Hindi/Urdu) (9 Korean) (10 Italian) (11 Spanish) (12 Portuguese) (13 Indonesian)"
Trelis/openassistant-falcon,"{""license"": ""apache-2.0"", ""language"": [""en"", ""es"", ""ru"", ""de"", ""pl"", ""th"", ""vi"", ""sv"", ""bn"", ""da"", ""he"", ""it"", ""fa"", ""sk"", ""id"", ""nb"", ""el"", ""nl"", ""hu"", ""eu"", ""zh"", ""eo"", ""ja"", ""ca"", ""cs"", ""bg"", ""fi"", ""pt"", ""tr"", ""ro"", ""ar"", ""uk"", ""gl"", ""fr"", ""ko""], ""tags"": [""human-feedback"", ""llama-2""], ""size_categories"": [""1K as EOS and BOS token, as per Falcon.
Sample
Preparation:
1. The dataset is cloned from [TimDettmers](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), which itself is a subset of the Open Assistant dataset, which you can find [here](https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main). This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
2. The dataset was then filtered to:
- replace instances of '### Human:' with '\nHuman:'
- replace instances of '### Assistant:' with '\nAssistant:'
- end assistant responses with <|endoftext|> (to encourage the model to emit <|endoftext|> when finished a response).
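A minimal sketch of that filtering, assuming each sample is a dict with a `text` field (the field name and exact whitespace handling are assumptions; for brevity the EOS marker is appended once at the end of the sample rather than after every assistant turn):

```python
def reformat(sample: dict) -> dict:
    """Apply the replacements described above to one sample."""
    text = sample["text"]
    text = text.replace("### Human:", "\nHuman:")
    text = text.replace("### Assistant:", "\nAssistant:")
    # Append the EOS marker so the model learns to emit it when finished.
    if not text.endswith("<|endoftext|>"):
        text += "<|endoftext|>"
    return {"text": text}

sample = {"text": "### Human: Hi ### Assistant: Hello!"}
cleaned = reformat(sample)["text"]
```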
Details of the root dataset follow, copied from that repo:
# OpenAssistant Conversations Dataset (OASST1)
## Dataset Description
- **Homepage:** https://www.open-assistant.io/
- **Repository:** https://github.com/LAION-AI/Open-Assistant
- **Paper:** https://arxiv.org/abs/2304.07327
### Dataset Summary
In an effort to democratize research on large-scale alignment, we release OpenAssistant
Conversations (OASST1), a human-generated, human-annotated assistant-style conversation
corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292
quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus
is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
Please refer to our [paper](https://arxiv.org/abs/2304.07327) for further details.
### Dataset Structure
This dataset contains message trees. Each message tree has an initial prompt message as the root node,
which can have multiple child messages as replies, and these child messages can have multiple replies.
All messages have a role property: this can either be ""assistant"" or ""prompter"". The roles in
conversation threads from prompt to leaf node strictly alternate between ""prompter"" and ""assistant"".
This version of the dataset contains data collected on the [open-assistant.io](https://open-assistant.io/) website until April 12 2023.
### JSON Example: Message
For readability, the following JSON examples are shown formatted with indentation on multiple lines.
Objects are stored without indentation (on single lines) in the actual jsonl files.
```json
{
""message_id"": ""218440fd-5317-4355-91dc-d001416df62b"",
""parent_id"": ""13592dfb-a6f9-4748-a92c-32b34e239bb4"",
""user_id"": ""8e95461f-5e94-4d8b-a2fb-d4717ce973e4"",
""text"": ""It was the winter of 2035, and artificial intelligence (..)"",
""role"": ""assistant"",
""lang"": ""en"",
""review_count"": 3,
""review_result"": true,
""deleted"": false,
""rank"": 0,
""synthetic"": true,
""model_name"": ""oasst-sft-0_3000,max_new_tokens=400 (..)"",
""labels"": {
""spam"": { ""value"": 0.0, ""count"": 3 },
""lang_mismatch"": { ""value"": 0.0, ""count"": 3 },
""pii"": { ""value"": 0.0, ""count"": 3 },
""not_appropriate"": { ""value"": 0.0, ""count"": 3 },
""hate_speech"": { ""value"": 0.0, ""count"": 3 },
""sexual_content"": { ""value"": 0.0, ""count"": 3 },
""quality"": { ""value"": 0.416, ""count"": 3 },
""toxicity"": { ""value"": 0.16, ""count"": 3 },
""humor"": { ""value"": 0.0, ""count"": 3 },
""creativity"": { ""value"": 0.33, ""count"": 3 },
""violence"": { ""value"": 0.16, ""count"": 3 }
}
}
```
### JSON Example: Conversation Tree
For readability, only a subset of the message properties is shown here.
```json
{
""message_tree_id"": ""14fbb664-a620-45ce-bee4-7c519b16a793"",
""tree_state"": ""ready_for_export"",
""prompt"": {
""message_id"": ""14fbb664-a620-45ce-bee4-7c519b16a793"",
""text"": ""Why can't we divide by 0? (..)"",
""role"": ""prompter"",
""lang"": ""en"",
""replies"": [
{
""message_id"": ""894d30b6-56b4-4605-a504-89dd15d4d1c8"",
""text"": ""The reason we cannot divide by zero is because (..)"",
""role"": ""assistant"",
""lang"": ""en"",
""replies"": [
// ...
]
},
{
""message_id"": ""84d0913b-0fd9-4508-8ef5-205626a7039d"",
""text"": ""The reason that the result of a division by zero is (..)"",
""role"": ""assistant"",
""lang"": ""en"",
""replies"": [
{
""message_id"": ""3352725e-f424-4e3b-a627-b6db831bdbaa"",
""text"": ""Math is confusing. Like those weird Irrational (..)"",
""role"": ""prompter"",
""lang"": ""en"",
""replies"": [
{
""message_id"": ""f46207ca-3149-46e9-a466-9163d4ce499c"",
""text"": ""Irrational numbers are simply numbers (..)"",
""role"": ""assistant"",
""lang"": ""en"",
""replies"": []
},
// ...
]
}
]
}
]
}
}
```
Please refer to [oasst-data](https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data) for
details about the data structure and Python code to read and write jsonl files containing oasst data objects.
If you would like to explore the dataset yourself you can find a
[`getting-started`](https://github.com/LAION-AI/Open-Assistant/blob/main/notebooks/openassistant-oasst1/getting-started.ipynb)
notebook in the `notebooks/openassistant-oasst1` folder of the [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
github repository.
## Main Dataset Files
Conversation data is provided either as nested messages in trees (extension `.trees.jsonl.gz`)
or as a flat list (table) of messages (extension `.messages.jsonl.gz`).
### Ready For Export Trees
```
2023-04-12_oasst_ready.trees.jsonl.gz 10,364 trees with 88,838 total messages
2023-04-12_oasst_ready.messages.jsonl.gz 88,838 messages
```
Trees in `ready_for_export` state without spam and deleted messages including message labels.
The oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training.
### All Trees
```
2023-04-12_oasst_all.trees.jsonl.gz 66,497 trees with 161,443 total messages
2023-04-12_oasst_all.messages.jsonl.gz 161,443 messages
```
All trees, including those in states `prompt_lottery_waiting` (trees that consist of only one message, namely the initial prompt),
`aborted_low_grade` (trees that stopped growing because the messages had low quality), and `halted_by_moderator`.
### Supplemental Exports: Spam & Prompts
```
2023-04-12_oasst_spam.messages.jsonl.gz
```
These are messages which were deleted or have a negative review result (`""review_result"": false`).
Besides low quality, a frequent reason for message deletion is a wrong language tag.
```
2023-04-12_oasst_prompts.messages.jsonl.gz
```
These are all the kept initial prompt messages with positive review result (no spam) of trees in `ready_for_export` or `prompt_lottery_waiting` state.
### Using the Huggingface Datasets
While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.
Nevertheless, we make all messages which can also be found in the file `2023-04-12_oasst_ready.trees.jsonl.gz` available in parquet as train/validation splits.
These are directly loadable by [Huggingface Datasets](https://pypi.org/project/datasets/).
To load the oasst1 train & validation splits use:
```python
from datasets import load_dataset
ds = load_dataset(""OpenAssistant/oasst1"")
train = ds['train'] # len(train)=84437 (95%)
val = ds['validation'] # len(val)=4401 (5%)
```
The messages appear in depth-first order of the message trees.
Full conversation trees can be reconstructed from the flat messages table by using the `parent_id`
and `message_id` properties to identify the parent-child relationship of messages. The `message_tree_id`
and `tree_state` properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.
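A sketch of that reconstruction (message dicts as in the JSON examples above; illustrative code, not taken from the oasst-data package):

```python
def build_trees(messages):
    """Rebuild conversation trees from the flat messages table using the
    parent_id/message_id links; roots are messages without a parent_id."""
    by_id = {m["message_id"]: {**m, "replies": []} for m in messages}
    roots = []
    for node in by_id.values():
        parent_id = node.get("parent_id")
        if parent_id is None:
            roots.append(node)
        else:
            by_id[parent_id]["replies"].append(node)
    return roots

messages = [
    {"message_id": "root-1", "parent_id": None, "role": "prompter",
     "text": "Why can't we divide by 0?"},
    {"message_id": "reply-1", "parent_id": "root-1", "role": "assistant",
     "text": "Because the operation is undefined."},
]
trees = build_trees(messages)
```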
### Languages
OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:
**Languages with over 1000 messages**
- English: 71956
- Spanish: 43061
- Russian: 9089
- German: 5279
- Chinese: 4962
- French: 4251
- Thai: 3042
- Portuguese (Brazil): 2969
- Catalan: 2260
- Korean: 1553
- Ukrainian: 1352
- Italian: 1320
- Japanese: 1018
Languages with under 1000 messages
- Vietnamese: 952
- Basque: 947
- Polish: 886
- Hungarian: 811
- Arabic: 666
- Dutch: 628
- Swedish: 512
- Turkish: 454
- Finnish: 386
- Czech: 372
- Danish: 358
- Galician: 339
- Hebrew: 255
- Romanian: 200
- Norwegian Bokmål: 133
- Indonesian: 115
- Bulgarian: 95
- Bengali: 82
- Persian: 72
- Greek: 66
- Esperanto: 59
- Slovak: 19
## Contact
- Discord [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai)"
wisenut-nlp-team/aihub_corpus_expertise,"{""dataset_info"": {""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""ner_tag"", ""list"": [{""name"": ""begin"", ""dtype"": ""int64""}, {""name"": ""end"", ""dtype"": ""int64""}, {""name"": ""entity"", ""dtype"": ""string""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""type"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 43832237782, ""num_examples"": 90365874}, {""name"": ""validation"", ""num_bytes"": 2994766999, ""num_examples"": 6065316}], ""download_size"": 15686081097, ""dataset_size"": 46827004781}, ""annotations_creators"": [""no-annotation""], ""language"": [""ko""], ""language_creators"": [""found""], ""license"": [""cc-by-4.0""], ""multilinguality"": [""monolingual""], ""pretty_name"": ""wisenut-nlp-team/aihub_corpus_expertise"", ""size_categories"": [""100M
[[arXiv]](https://arxiv.org/abs/2209.07562)
[[HuggingFace Models]](https://huggingface.co/Twitter/twhin-bert-base)
[[Github repo]](https://github.com/xinyangz/TwHIN-BERT)
This work is licensed under a Creative Commons Attribution 4.0 International License.
## Download
Use the `hashtag-classification-id.zip` in this repo. [Link](https://huggingface.co/datasets/Twitter/HashtagPrediction/blob/main/hashtag-classification-id.zip).
Check the first-author's GitHub repo for any supplemental dataset material or code. [Link](https://github.com/xinyangz/TwHIN-BERT)
## Dataset Description
The hashtag prediction dataset is a multilingual classification dataset. Separate datasets are given for different languages. We first select 500 (or all available) popular hashtags of each language and then sample 10k (or all available) popular Tweets that contain these hashtags. We make sure each Tweet will have exactly one of the selected hashtags.
The evaluation task is a multiclass classification task, with hashtags as labels. We remove the hashtag from the Tweet, and let the model predict the removed hashtag.
We provide Tweet IDs and raw-text hashtag labels in `tsv` files, with train, development, and test splits for each language.
To use the dataset, you must hydrate the Tweet text with the [Twitter API](https://developer.twitter.com/en/docs/twitter-api) and **remove the hashtag used as the label from each Tweet**.
The data format is displayed below.
| ID | label |
| ------------- | ------------- |
| 1 | hashtag |
| 2 | another hashtag |
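After hydrating each Tweet, the label hashtag must be stripped from the text before it is fed to the model. A minimal sketch of that step (the function name and boundary handling are our own assumptions, not part of the official tooling):

```python
import re

def remove_label_hashtag(text, hashtag):
    # Remove the label hashtag (e.g. '#nlp') from the hydrated Tweet text.
    # Case-insensitive match on '#' + hashtag as a whole token.
    pattern = re.compile(r'#' + re.escape(hashtag) + r'\b', flags=re.IGNORECASE)
    return pattern.sub('', text).strip()

print(remove_label_hashtag('Great thread on tweet embeddings! #NLP', 'nlp'))
# prints: Great thread on tweet embeddings!
```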
## Citation
If you use our dataset in your work, please cite the following:
```bib
@article{zhang2022twhin,
title={TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations},
author={Zhang, Xinyang and Malkov, Yury and Florez, Omar and Park, Serim and McWilliams, Brian and Han, Jiawei and El-Kishky, Ahmed},
journal={arXiv preprint arXiv:2209.07562},
year={2022}
}
```"
NCSOFT/K-MMBench,"{""language"": [""ko""], ""license"": ""cc-by-nc-4.0"", ""dataset_info"": {""features"": [{""name"": ""index"", ""dtype"": ""int64""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""hint"", ""dtype"": ""string""}, {""name"": ""A"", ""dtype"": ""string""}, {""name"": ""B"", ""dtype"": ""string""}, {""name"": ""C"", ""dtype"": ""string""}, {""name"": ""D"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""category"", ""dtype"": ""string""}, {""name"": ""image"", ""dtype"": ""image""}, {""name"": ""source"", ""dtype"": ""string""}, {""name"": ""l2-category"", ""dtype"": ""string""}, {""name"": ""comment"", ""dtype"": ""string""}, {""name"": ""split"", ""dtype"": ""string""}], ""splits"": [{""name"": ""dev"", ""num_bytes"": 103023727.794, ""num_examples"": 4329}], ""download_size"": 96835472, ""dataset_size"": 103023727.794}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""dev"", ""path"": ""data/dev-*""}]}]}","# K-MMBench
We introduce **K-MMBench**, a Korean adaptation of the [MMBench](https://arxiv.org/abs/2307.06281) [1] designed for evaluating vision-language models.
By translating the ```dev``` subset of MMBench into Korean and manually reviewing the translations for naturalness, we developed a robust evaluation benchmark specifically for the Korean language.
K-MMBench consists of questions across 20 evaluation dimensions, such as identity reasoning, image emotion, and attribute recognition, allowing a thorough evaluation of model performance in Korean.
To ensure a fair evaluation, we adopt the ***CircularEval Strategy*** as proposed by the MMBench benchmark [1]. For detailed information, please refer to Section 4.3 of the corresponding [paper](https://arxiv.org/abs/2307.06281).
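As a rough illustration of the idea (not the benchmark's exact protocol; see the paper for details), CircularEval presents each question N times with circularly shifted answer choices, and a prediction counts as correct only if the model is right in every rotation:

```python
def circular_variants(options, answer_idx):
    # Return every circular shift of the choice list, together with the
    # position the correct answer moves to under that shift.
    n = len(options)
    return [(options[s:] + options[:s], (answer_idx - s) % n) for s in range(n)]

for opts, ans in circular_variants(['cat', 'dog', 'bird'], answer_idx=0):
    print(opts, '-> correct:', opts[ans])
```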
For more details, please refer to the VARCO-VISION technical report.
- **Technical Report:** [VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models](https://arxiv.org/pdf/2411.19103)
- **Blog(Korean):** [VARCO-VISION Technical Report Summary](https://ncsoft.github.io/ncresearch/95ad8712e60063e9ac97538504ac3eea0ac530af)
- **Huggingface Version Model:** [NCSOFT/VARCO-VISION-14B-HF](https://huggingface.co/NCSOFT/VARCO-VISION-14B-HF)
Below is an example item (the accompanying image is omitted here):

**MMBench**
- hint: The passage below describes an experiment. Read the passage and then follow the instructions below. Madelyn applied a thin layer of wax to the underside of her snowboard and rode the board straight down a hill. Then, she removed the wax and rode the snowboard straight down the hill again. She repeated the rides four more times, alternating whether she rode with a thin layer of wax on the board or not. Her friend Tucker timed each ride. Madelyn and Tucker calculated the average time it took to slide straight down the hill on the snowboard with wax compared to the average time on the snowboard without wax. Figure: snowboarding down a hill.
- question: Identify the question that Madelyn and Tucker's experiment can best answer.
- A: Does Madelyn's snowboard slide down a hill in less time when it has a thin layer of wax or a thick layer of wax?
- B: Does Madelyn's snowboard slide down a hill in less time when it has a layer of wax or when it does not have a layer of wax?

**K-MMBench**
- hint: 아래의 문단은 한 실험을 설명하고 있습니다. 문단을 읽고 아래의 지시사항을 따르세요. 매들린은 스노보드의 아랫면에 얇은 왁스층을 바르고 언덕을 직선으로 내려갔습니다. 그런 다음, 그녀는 왁스를 제거하고 다시 스노보드를 언덕을 직선으로 내려갔습니다. 그녀는 스노보드에 얇은 왁스층을 바르고 타는지 아닌지를 번갈아 가며 네 번 더 탔습니다. 그녀의 친구 터커는 각각의 타기를 시간을 재었습니다. 매들린과 터커는 왁스를 바른 스노보드로 언덕을 직선으로 내려가는데 걸리는 평균 시간을 왁스를 바르지 않은 스노보드로 언덕을 내려가는데 걸리는 평균 시간과 비교하여 계산하였습니다. 그림: 언덕을 내려가는 스노보딩.
- question: 매들린과 터커의 실험이 가장 잘 대답할 수 있는 질문을 확인하세요.
- A: 매들린의 스노보드는 얇은 왁스층이 있는 경우와 두꺼운 왁스층이 있는 경우 중 어느 경우에 언덕을 더 빨리 내려갈까요?
- B: 매들린의 스노보드는 왁스층이 있는 경우와 없는 경우 중 어느 경우에 언덕을 더 빨리 내려갈까요?
## Inference Prompt
- As mentioned earlier, we adopt the ***CircularEval Strategy*** as proposed by the MMBench benchmark [1]. For detailed information, please refer to Section 4.3 of the corresponding [paper](https://arxiv.org/abs/2307.06281).
```
힌트: {hint} [optional]
질문: {question}
Options:
A. {A}
B. {B}
C. {C} [optional]
D. {D} [optional]
주어진 선택지 중 해당 옵션의 문자로 직접 답하세요.
```
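Under our reading of the template above, the prompt can be assembled by dropping the optional fields when they are absent. The helper below is illustrative, not official evaluation code:

```python
def build_prompt(question, A, B, hint=None, C=None, D=None):
    # Assemble a K-MMBench-style prompt, omitting the optional parts
    # (hint, C, D) when they are not provided.
    lines = []
    if hint:
        lines.append('힌트: ' + hint)
    lines.append('질문: ' + question)
    lines.append('Options:')
    lines.append('A. ' + A)
    lines.append('B. ' + B)
    if C is not None:
        lines.append('C. ' + C)
    if D is not None:
        lines.append('D. ' + D)
    lines.append('주어진 선택지 중 해당 옵션의 문자로 직접 답하세요.')
    return '\n'.join(lines)

print(build_prompt('실험이 가장 잘 대답할 수 있는 질문은?', '왁스 두께 비교', '왁스 유무 비교'))
```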
## Results
Below are the evaluation results of various vision-language models, including [VARCO-VISION-14B](https://huggingface.co/NCSOFT/VARCO-VISION-14B) on K-MMBench.
| | VARCO-VISION-14B | Pangea-7B | Pixtral-12B | Molmo-7B-D | Qwen2-VL-7B-Instruct | LLaVA-One-Vision-7B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| K-MMBench | **82.21** | 71.64 | 57.47 | 63.83 | 78.26 | 76.28 |
## References
[1] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? In European Conference on Computer Vision, pages 216–233. Springer, 2025.
## Citation
If you use K-MMBench in your research, please cite the following:
```bibtex
@misc{ju2024varcovisionexpandingfrontierskorean,
title={VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models},
author={Jeongho Ju and Daeyoung Kim and SunYoung Park and Youngjune Kim},
year={2024},
eprint={2411.19103},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2411.19103},
}
```"
zhihz0535/X-TruthfulQA_en_zh_ko_it_es,"{""license"": ""apache-2.0"", ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""english"", ""path"": ""english.json""}, {""split"": ""chinese"", ""path"": ""chinese.json""}, {""split"": ""korean"", ""path"": ""korean.json""}, {""split"": ""italian"", ""path"": ""italian.json""}, {""split"": ""spanish"", ""path"": ""spanish.json""}]}], ""task_categories"": [""question-answering""], ""language"": [""en"", ""zh"", ""ko"", ""it"", ""es""], ""size_categories"": [""1KYou are an AI assistant acting in the role of a professional doctor. Your task is to provide reliable and helpful answers to health-related questions posed by patients.
>
>You will be presented with a question from a patient. Your goal is to answer this question professionally, accurately, and helpfully.
>
>When responding to the patient's question, follow these guidelines:
>
>1. Answer in Korean.
>2. Always maintain a professional and compassionate tone.
>3. Provide accurate information based on medical knowledge.
>4. If the question is outside your scope of knowledge or requires in-person examination, advise the patient to consult with a healthcare professional in person.
>5. Do not make definitive diagnoses. Instead, discuss possible causes or conditions related to the symptoms or concerns described.
>6. Recommend appropriate general treatment or management methods when applicable, but emphasize the importance of personalized medical advice from a healthcare provider.
>7. If the question involves a medical emergency, advise the patient to seek immediate medical attention.
>
>When answering the patient's question:
>1. Begin by acknowledging the patient's concern.
>2. Provide a clear and concise explanation related to their question.
>3. If appropriate, discuss potential causes, symptoms, or related conditions.
>4. Suggest general management strategies or lifestyle modifications if applicable.
>5. Emphasize the importance of consulting with a healthcare provider for personalized advice and treatment.
>
>Please provide your response in the following format:
>
>\
>[Your detailed response to the patient's question]
>\
>
>Now, please answer the following patient question:
>
>\
>{{PATIENT_QUESTION}}
>\"
tabtoyou/KoLLaVA-CC3M-Pretrain-595K,"{""license"": ""other"", ""task_categories"": [""visual-question-answering""], ""language"": [""ko""]}","## LLaVA Visual Instruct CC3M 595K Pretrain Dataset Card
This dataset is a Korean translation of the 595K CC3M visual-instruction dataset released by [LLaVA](https://llava-vl.github.io/). The Korean captions were taken from the existing [Ko-conceptual-captions](https://github.com/QuoQA-NLP/Ko-conceptual-captions) release. As the translation quality is somewhat poor, the data may be re-translated with DeepL later.
License: subject to the [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE) license"
RajChat/Chatgpt,"{""language"": [""en"", ""es"", ""ru"", ""de"", ""pl"", ""th"", ""vi"", ""sv"", ""bn"", ""da"", ""he"", ""it"", ""fa"", ""sk"", ""id"", ""nb"", ""el"", ""nl"", ""hu"", ""eu"", ""zh"", ""eo"", ""ja"", ""ca"", ""cs"", ""bg"", ""fi"", ""pt"", ""tr"", ""ro"", ""ar"", ""uk"", ""gl"", ""fr"", ""ko""], ""license"": ""apache-2.0"", ""size_categories"": [""100K
Languages with under 1000 messages
- Vietnamese: 952
- Basque: 947
- Polish: 886
- Hungarian: 811
- Arabic: 666
- Dutch: 628
- Swedish: 512
- Turkish: 454
- Finnish: 386
- Czech: 372
- Danish: 358
- Galician: 339
- Hebrew: 255
- Romanian: 200
- Norwegian Bokmål: 133
- Indonesian: 115
- Bulgarian: 95
- Bengali: 82
- Persian: 72
- Greek: 66
- Esperanto: 59
- Slovak: 19
## Contact
- Discord: [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai)"
dev7halo/bluehouse-national-petition,"{""license"": ""apache-2.0"", ""language"": [""ko""]}","## Usage
```bash
pip install datasets
```
```python
from datasets import load_dataset
dataset = load_dataset(""dev7halo/bluehouse-national-petition"")
```
```
DatasetDict({
train: Dataset({
features: ['number', '제목', '답변상태', '참여인원', '카테고리', '청원시작', '청원마감', '청원내용', '답변원고'],
num_rows: 451513
})
})
```
```
# dataset['train'][0]
{'number': 605368,
'제목': '당신의 나라에서 행복했습니다.',
'답변상태': '청원종료',
'참여인원': '15,350',
'카테고리': '기타',
'청원시작': '2022-05-09',
'청원마감': '2022-06-08',
'청원내용': '우선 이 청원은 14시간만 유효함을 알립니다. 대통령님. 당신의 나라에서 행복했습니다. 감사합을 표현하고자 청원을 올립니다. 그간 대통령님께 감사함을 표현하는 청원이 많았음을 알고 있습니다. 하지만 임기 마지막 날 꼭 감사하다는 인사를 드리고 싶었습니다. 당신의 나라에서 5년 동안 걱정없이 꿈같고 행복한 나날들을 보냈습니다. 욕심 같아선 임기가 끝나는 것이 너무 아쉬워 하루라도 더 붙잡고 싶은 심정이지만 당신의 몸이 이미 방전된 배터리와 같다는 말씀에 붙잡고 싶었던 마음 마저 내려놓습니다. 어리석은 제가 대통령님을 지킨답시고 행했던 일들 중 잘못된 일들도 많았고 돌이켜보면 늘 대통령님께서 저를 지켜주셨지 제가 대통령님을 지킬 깜냥은 아니었는데... 깨어있었다 생각했던 저는 늘 어리석었고 아둔하였습니다. 대통령님 덕분에 깨어있다는 게 어떤 의미인지 조금이라도 알게 되었으니 평생 상대에 의해 정의되지 않고 제가 왜 하는지 찾아가며 살겠습니다. 부디 임기 후에는 평안한 삶을 사시길 기원합니다. 그리 되실 수 있게 제 마음을 열심히 보태겠습니다. 제 평생 다시는 없을 성군이신 문재인 대통령님 사랑하고 또 사랑합니다. 감사하고 또 감사합니다. 걸으시는 걸음 걸음마다 꽃길이시길 기원합니다. 여사님과 함께 부디 행복하시고 건강하십시오.',
'답변원고': ''}
```
# Github
[Github](https://github.com/HaloKim/bluehouse_petitions)"
Cohere/miracl-ko-queries-22-12,"{""annotations_creators"": [""expert-generated""], ""language"": [""ko""], ""multilinguality"": [""multilingual""], ""size_categories"": [], ""source_datasets"": [], ""tags"": [], ""task_categories"": [""text-retrieval""], ""license"": [""apache-2.0""], ""task_ids"": [""document-retrieval""]}","# MIRACL (ko) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ko-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ko-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a ""document"" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+"" ""+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ko-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f""Cohere/miracl-ko-corpus-22-12"", split=""train"")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f""Cohere/miracl-ko-corpus-22-12"", split=""train"", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ko-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, use the **dot product**: compare the query embedding with the document embeddings, either via a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f""Cohere/miracl-ko-corpus-22-12"", split=""train"")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f""Cohere/miracl-ko-queries-22-12"", split=""dev"")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: (1, embedding_dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print(""Query:"", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking based loss), as well as hit@3: Is at least one relevant document in the top-3 results. We find that hit@3 is easier to interpret, as it presents the number of queries for which a relevant document is found among the top-3 results.
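The hit@3 metric can be computed per query as follows; this is a sketch under our own naming, averaged over queries:

```python
def hit_at_k(ranked_doc_ids, relevant_ids, k=3):
    # 1.0 if at least one relevant document appears in the top-k results.
    return float(any(d in relevant_ids for d in ranked_doc_ids[:k]))

queries = [(['d1', 'd2', 'd3', 'd4'], {'d3'}),   # hit: d3 is in the top 3
           (['d5', 'd6', 'd7', 'd8'], {'d8'})]   # miss: d8 is ranked 4th
print(sum(hit_at_k(r, rel) for r, rel in queries) / len(queries))  # prints: 0.5
```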
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |"
letgoofthepizza/pokemon-blip-captions-en-ko,"{""language"": [""en"", ""ko""], ""license"": ""apache-2.0""}",
jiyounglee0523/KorNAT,"{""configs"": [{""config_name"": ""Social Values (Kor)"", ""data_files"": [{""split"": ""test"", ""path"": ""KorNAT/social-values-kor-test.csv""}]}, {""config_name"": ""Social Values (Eng)"", ""data_files"": [{""split"": ""test"", ""path"": ""KorNAT/social-values-eng-test.csv""}]}, {""config_name"": ""Common Knowledge (Kor)"", ""data_files"": [{""split"": ""test"", ""path"": ""KorNAT/common-knowledge-kor-test.csv""}]}, {""config_name"": ""Common Knowledge (Eng)"", ""data_files"": [{""split"": ""test"", ""path"": ""KorNAT/common-knowledge-eng-test.csv""}]}], ""license"": ""cc-by-nc-2.0"", ""task_categories"": [""multiple-choice""], ""language"": [""ko"", ""en""], ""tags"": [""national-alignment""], ""size_categories"": [""10Below is original dataset card
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
🐋 The OpenOrca Dataset! 🐋

We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## OpenOrca-Platypus2-13B
Our [latest release](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
- Teknium
- WingLian/Caseus
- Eric Hartford
- NanoBit
- Pankaj
- Winddude
- Rohan

http://AlignmentLab.ai:
- Autometa
- Entropi
- AtlasUnified
- NeverendingToast
- NanoBit
- WingLian/Caseus

Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx
Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
# Languages
The language of the data is primarily English.
# Dataset Structure
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
## Data Splits
The data is unsplit.
# Dataset Creation
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This ""reasoning trace"" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
# Dataset Use
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and ""Teknium""},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv}
}
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```"
NickyNicky/oasst2_chatml,"{""dataset_info"": {""features"": [{""name"": ""Text"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 35636342, ""num_examples"": 13848}], ""download_size"": 19635797, ""dataset_size"": 35636342}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""language"": [""en"", ""es"", ""ru"", ""zh"", ""de"", ""fr"", ""th"", ""ca"", ""it"", ""ja"", ""pl"", ""eo"", ""eu"", ""vi"", ""fi"", ""hu"", ""ar"", ""nl"", ""da"", ""tr"", ""ko"", ""he"", ""id"", ""cs"", ""bn"", ""sv""]}","```
link: https://huggingface.co/datasets/OpenAssistant/oasst2
```
Message counts by language:
- en: 64,513
- es: 28,199
- ru: 13,935
- zh: 8,615
- de: 6,145
- fr: 3,880
- pt-BR: 2,699
- th: 1,560
- ca: 1,283
- it: 943
- uk-UA: 845
- ja: 788
- pl: 435
- eo: 295
- eu: 274
- vi: 207
- fi: 138
- hu: 113
- ar: 80
- nl: 72
- da: 44
- tr: 37
- ko: 24
- he: 24
- id: 12
- cs: 12
- bn: 1
- sv: 1"
traintogpb/aihub-kozh-translation-integrated-base-1m,"{""license"": ""mit"", ""task_categories"": [""translation""], ""language"": [""ko"", ""zh""]}","### AI Hub Ko-Zh Translation Dataset (Integrated)
This dataset merges 11 Korean-Chinese translation datasets from AI Hub. The merged total is 5,934,596 examples, from which a 10,000-example validation set and a 2,000-example test set are split off and shared across all dataset sizes (large-5.9m, base-1m, small-100k).
- large-5.9m (train): 100% of the merged data; 5,922,596 examples in total
- base-1m (train): 1M examples sampled from the merged data; 1,000,000 in total
- small-100k (train): 100K examples sampled from the merged data; 100,000 in total
### Subsets
| Name | Total Size | Chinese Size (Utilized Only) | URL | Datasetkey (AIHub) |
|---|---|---|---|---|
| 한국어-중국어 번역 말뭉치(기술과학) | 1170000 | 1170000 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=128) | 128 |
| 한국어-중국어 번역 말뭉치(사회과학) | 1170000 | 1170000 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=129) | 129 |
| 일상생활 및 구어체 한-중, 한-일 번역 병렬 말뭉치 데이터 | 2700000 | 1349470 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=546) | 546 |
| 전문분야 영-한, 중-한 번역 말뭉치(식품) | 1350000 | 1326837 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71262) | 71262 |
| 방송 콘텐츠 한-중, 한-일 번역 병렬 말뭉치 데이터 | 1487088 | 367921 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71263) | 71263 |
| 발화유형(문어, 구어, 채팅) 별 기계번역 병렬 말뭉치 | 82002 | 26989 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71411) | 71411 |
| 한국어-다국어 번역 말뭉치(기술과학) | 270459 | 146317 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71493) | 71493 |
| 한국어-다국어 번역 말뭉치(기초과학) | 270317 | 84419 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71496) | 71496 |
| 한국어-다국어 번역 말뭉치(인문학) | 271721 | 80375 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71498) | 71498 |
| 방송콘텐츠 한국어-아시아어 번역 말뭉치 | 820387 | 112978 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71591) | 71591 |
| AI 허브 데이터 활용을 위한 기계 번역말뭉치 | 2653948 | 212268 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71593) | 71593 |"
Nexdata/93_Hours_Korean_Children_Real_world_Casual_Conversation_and_Monologue_speech_dataset,"{""license"": ""cc-by-nc-nd-4.0"", ""language"": [""ko""]}","## Description
Korean (Korea) Children Real-world Casual Conversation and Monologue speech dataset. It covers self-media, conversation, livestream, lecture, variety show and other generic domains, mirroring real-world interactions. Transcriptions include text content, speaker ID, gender, age, accent and other attributes. The dataset was collected from an extensive, diverse, and geographically distributed pool of speakers (children 12 years old and younger), enhancing model performance in real and complex tasks. Quality tested by various AI companies. We strictly adhere to data protection regulations and privacy standards, ensuring the maintenance of user privacy and legal rights throughout the data collection, storage, and usage processes; our datasets are all GDPR, CCPA, and PIPL compliant.
For more details, please refer to the link: https://www.nexdata.ai/datasets/speechrecog/1329?source=Huggingface
## Format
16kHz, 16 bit, wav, mono channel
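As an aside (not official dataset tooling), the stated format can be checked with Python's standard `wave` module; the function below is a hypothetical helper:

```python
import wave

def check_format(path: str) -> bool:
    """True if the file is a 16 kHz, 16-bit, mono WAV, as specified above."""
    with wave.open(path, "rb") as f:
        return (
            f.getframerate() == 16000   # 16kHz sample rate
            and f.getsampwidth() == 2   # 16 bit = 2 bytes per sample
            and f.getnchannels() == 1   # mono channel
        )
```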
## Age
12 years old and younger children
## Content category
including interview, self-media, variety show, etc.
## Recording environment
Low background noise
## Country
South Korea(KOR)
## Language(Region) Code
ko-KR
## Language
Korean
## Features of annotation
Transcription text, timestamp, speaker ID, gender, noise
## Accuracy
Word Accuracy Rate (WAR) 98%
# Licensing Information
Commercial License"
clapAI/MultiLingualSentiment,"{""dataset_info"": {""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""label"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""language"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1364685913, ""num_examples"": 3147478}, {""name"": ""validation"", ""num_bytes"": 170841288, ""num_examples"": 393435}, {""name"": ""test"", ""num_bytes"": 170338153, ""num_examples"": 393436}], ""download_size"": 988308759, ""dataset_size"": 1705865354}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""validation"", ""path"": ""data/validation-*""}, {""split"": ""test"", ""path"": ""data/test-*""}]}], ""license"": ""apache-2.0"", ""task_categories"": [""text-classification""], ""language"": [""ar"", ""de"", ""en"", ""es"", ""fr"", ""hi"", ""id"", ""it"", ""ko"", ""ms"", ""pt"", ""ru"", ""tr"", ""vi"", ""zh"", ""ja""], ""tags"": [""sentiment"", ""multilingual"", ""emotion"", ""review"", ""classification""], ""pretty_name"": ""text"", ""size_categories"": [""1M"",
""output"": ""화이자-바이오엔텍 및 모더나 COVID-19 백신과 같은 mRNA 백신은 우리 몸의 세포에 스파이크 단백질이라는 특정 단백질을 생산하도록 지시를 내리는 방식으로 작동합니다. 이 단백질은 COVID-19를 유발하는 SARS-CoV-2 바이러스의 표면에서 발견됩니다. 백신의 mRNA가 세포에 들어가 세포의 기계를 사용하여 스파이크 단백질을 생성합니다. 스파이크 단백질이 생성되면 면역 반응을 일으켜 스파이크 단백질을 인식하고 공격할 수 있는 항체가 생성됩니다. 향후 백신을 접종한 개인이 실제 바이러스에 직면하면 면역 체계가 감염과 싸울 준비가 더 잘 되어 있습니다.\n\n따라서 mRNA 백신의 주요 기능 메커니즘은 세포가 바이러스 단백질을 생산하도록 지시하여 면역 반응을 촉발하여 바이러스에 대한 보호를 제공하는 것입니다.\n""
```
## Reference
```
@misc{zhang2023alpacareinstructiontuned,
title={AlpaCare:Instruction-tuned Large Language Models for Medical Application},
author={Xinlu Zhang and Chenxin Tian and Xianjun Yang and Lichang Chen and Zekun Li and Linda Ruth Petzold},
year={2023},
eprint={2310.14558},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```"
KETI-AIR/kor_anli,"{""language"": [""ko""], ""license"": [""cc-by-nc-4.0""], ""size_categories"": [""100K
| Image | LLaVA-Bench-in-the-wild | K-LLaVA-W |
|---|---|---|
| ![]() | question: What is the name of this famous sight in the photo?<br>caption: An aerial view of Diamond Head in the Hawaiian Islands.<br>gpt_answer: The famous sight in the photo is Diamond Head. | question: 사진에 나오는 이 유명한 장소의 이름은 무엇인가요?<br>caption: 하와이 제도의 다이아몬드 헤드를 공중에서 본 모습입니다.<br>gpt_answer: 이 사진은 하와이에 있는 다이아몬드 헤드입니다. |
## Inference Prompt
```
{question}
```
## Evaluation Prompt
```
[설명]
{caption}
[질문]
{question}
[어시스턴트 1]
{gpt_answer}
[어시스턴트 1 끝]
[어시스턴트 2]
{target_model_answer}
[어시스턴트 2 끝]
[System]
두 인공지능 어시스턴트의 성능을 [질문]에 대한 응답에 기반하여 평가하세요. 해당 [질문]은 특정 이미지를 보고 생성되었습니다. `유용성`, `관련성`, `정확성`, `세부 수준`, `한국어 생성능력`을 기준으로 응답을 평가하세요. 각각의 어시스턴트에게 1에서 10까지의 전반적인 점수를 부여하며, 높은 점수일수록 더 나은 전반적인 성능을 나타냅니다.
# 단계
1. 제공된 이미지 [설명]을 검토하세요.
2. 각 어시스턴트의 응답을 다음 기준으로 분석하세요:
- `유용성`: 응답이 사용자의 질문을 얼마나 잘 해결하는가?
- `관련성`: 응답이 사용자의 질문에 얼마나 적절한가?
- `정확성`: 응답에서 제공한 정보가 얼마나 정확한가?
- `세부 수준`: 응답이 과하지 않게 충분히 자세한가?
- `한국어 생성능력`: 생성된 한국어 문장이 자연스럽고 문법적으로 올바른가?
3. 분석에 기반하여 각 어시스턴트에게 1에서 10까지의 점수를 부여하세요.
4. 두 점수를 공백으로 구분하여 한 줄로 제공하세요.
5. 점수에 대한 이유를 강조하면서 포괄적인 평가를 제공하고, 편견을 피하며 응답의 순서가 판단에 영향을 미치지 않도록 하세요.
# 출력 형식
- 첫 번째 줄: `어시스턴트1_점수 어시스턴트2_점수` (예: `8 9`)
- 두 번째 줄: `유용성`, `관련성`, `정확성`, `세부 수준`, `한국어 생성능력` 기준으로 점수를 설명하는 자세한 문단을 제공합니다.
# 주의사항
- 평가 시 잠재적 편견을 방지하여 객관성을 확보하세요.
- 분석과 설명에서 일관성과 명확성을 유지하세요.
```
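Per the output format above, the first line of the judge's response carries the two scores separated by a space. A minimal parser for that convention (an illustration, not part of the official evaluation code) could be:

```python
def parse_scores(judge_output: str) -> tuple[int, int]:
    """Extract the two assistant scores from the judge's first line, e.g. "8 9"."""
    first_line = judge_output.strip().splitlines()[0]
    a, b = first_line.split()
    return int(a), int(b)

assert parse_scores("8 9\nAssistant 1 was accurate and detailed...") == (8, 9)
```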
## Results
Below are the evaluation results of various vision-language models, including [VARCO-VISION-14B](https://huggingface.co/NCSOFT/VARCO-VISION-14B) on K-LLaVA-W.
| | VARCO-VISION-14B | Pangea-7B | Pixtral-12B | Molmo-7B-D-0924 | Qwen2-VL-7B-Instruct | LLaVA-One-Vision-7B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| K-LLaVA-W | **84.74** | 69.70 | 82.00 | 63.90 | 62.00 | 48.80 |
## References
[1] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36, 2024.
## Citation
If you use K-LLaVA-W in your research, please cite the following:
```bibtex
@misc{ju2024varcovisionexpandingfrontierskorean,
title={VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models},
author={Jeongho Ju and Daeyoung Kim and SunYoung Park and Youngjune Kim},
year={2024},
eprint={2411.19103},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2411.19103},
}
```"
miracl/nomiracl-instruct,"{""dataset_info"": {""features"": [{""name"": ""messages"", ""list"": [{""name"": ""content"", ""dtype"": ""string""}, {""name"": ""role"", ""dtype"": ""string""}]}, {""name"": ""query_id"", ""dtype"": ""string""}, {""name"": ""subset"", ""dtype"": ""string""}, {""name"": ""language"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 154438200.3377625, ""num_examples"": 21471}, {""name"": ""test"", ""num_bytes"": 17162197.6622375, ""num_examples"": 2386}], ""download_size"": 92309140, ""dataset_size"": 171600398}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""test"", ""path"": ""data/test-*""}]}], ""license"": ""apache-2.0"", ""task_categories"": [""text-classification"", ""text-generation""], ""language"": [""ar"", ""bn"", ""de"", ""en"", ""es"", ""fa"", ""fi"", ""fr"", ""hi"", ""id"", ""ja"", ""ko"", ""ru"", ""sw"", ""te"", ""th"", ""yo"", ""zh""], ""pretty_name"": ""NoMIRACL Fine-tuning Dataset"", ""size_categories"": [""10K
| 📖 Paper | 📝 Blog | 🖥️ Code(Coming soon!) |
# HRM8K
We introduce **HAE-RAE Math 8K** (**HRM8K**), a bilingual math reasoning benchmark for Korean and English.
HRM8K comprises 8,011 instances for evaluation, sourced through a combination of translations from established English benchmarks (e.g., GSM8K, MATH, OmniMath, MMMLU) and original problems curated from existing Korean math exams.
## Benchmark Overview
The **HRM8K** benchmark consists of two subsets:
- **Korean School Math** (**KSM**): This subset comprises 1,428 challenging mathematical problems from Korean sources.
We collect only from Olympiad or competition-level exams, regardless of the target age group.
Consequently, even problems from younger curricula require a certain level of reasoning ability to solve.
The sources from which data was collected are as follows:
- KMO (한국수학올림피아드)
- KJMO (한국주니어수학올림피아드)
- CSAT (대학수학능력시험)
- KMS (한국대학수학경시대회)
- TQ (교원임용경쟁시험)
- **Prior Sets**: This subset comprises 6,583 problems from existing English mathematics benchmarks.
We retain only instances with numeric answers for the Math and Omni-MATH datasets, excluding those with text, equations, or proofs as final answers.
In addition, we select only three math-related subsets, including `abstract_algebra`, `college_mathematics`, and `high_school_mathematics` from MMMLU datasets.
The sources from which data was collected are as follows:
- [GSM8K](https://huggingface.co/datasets/openai/gsm8k)
- [MATH](https://huggingface.co/datasets/hendrycks/competition_math)
- [Omni-MATH](https://huggingface.co/datasets/KbsdJames/Omni-MATH)
- [MMMLU](https://huggingface.co/datasets/openai/MMMLU)
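The numeric-answer filter described above for the Prior Sets can be sketched as follows; the authors' exact criterion is not specified, so this is only an approximation:

```python
def is_numeric_answer(answer: str) -> bool:
    """Keep only instances whose final answer parses as a plain number."""
    try:
        float(answer.strip().replace(",", ""))
        return True
    except ValueError:
        return False

assert is_numeric_answer("1,234")              # numeric: kept
assert not is_numeric_answer("x = \\sqrt{2}")  # equation: excluded
```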
## Benchmark Formulation
- **Translation**: To create a bilingual (English-Korean) dataset, we translate every instance in both **KSM** and **Prior Sets** using GPT-4o.
Translated samples undergo human review, and inaccurate entries are removed.
- **OCR**: For the KSM dataset, we manually capture the problems as screenshots, process them through OCR using the GPT-4 API, and validate the extracted text.
## Benchmark Contamination
To ensure that the **KSM** subset is not included in the pretraining corpora of LLMs, we perform a contamination check in the following steps:
1. Retrieve approximately 58 million Korean documents, totaling 95GB, from [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2).
2. Verify whether the sources used to construct **HRM8K** are present in the retrieved documents, resulting in 149 matches over the 11-year period.
3. Examine these 149 documents for an exact-match string from HRM8K; we find no matches.
This is likely because, although we collect samples from online sources, none are directly crawled;
the authors manually downloaded PDF or HWP files and extracted questions, making it challenging for automatic crawlers to collect them.
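The final exact-match screening amounts to a verbatim substring scan over the retrieved documents. A simplified sketch (the whitespace normalization here is an assumption, not the authors' exact procedure):

```python
def find_contaminated(benchmark_items: list[str], documents: list[str]) -> list[int]:
    """Return indices of benchmark items that appear verbatim in any document."""
    normalized_docs = [" ".join(doc.split()) for doc in documents]
    hits = []
    for i, item in enumerate(benchmark_items):
        needle = " ".join(item.split())  # collapse whitespace before matching
        if any(needle in doc for doc in normalized_docs):
            hits.append(i)
    return hits
```

An empty result over the 149 matched documents is what the contamination check reports.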
## Dataset Usage
```python
from datasets import load_dataset
data_category = ["GSM8K", "MATH", "OMNI_MATH", "MMMLU", "KSM"]  # The subsets of HRM8K
# Load all subsets
all_dataset = {cat: load_dataset("HAERAE-HUB/HRM8K", cat, split="test") for cat in data_category}
# Load one subset
dataset = load_dataset("HAERAE-HUB/HRM8K", "KSM", split="test")  # Change "KSM" to the desired subset
```
## Contributors
```
Hyunwoo Ko, Guijin Son, Dasol Choi
```
## Point of Contact
For any questions, contact us via the following emails :)
```
hyunwooko@onelineai.com, spthsrbwls123@yonsei.ac.kr, dasolchoi@yonsei.ac.kr
```"
bongsoo/bongevalsmall,"{""language"": [""ko""], ""license"": ""apache-2.0""}",- Evaluation corpus
mncai/kin_med_2M,{},"---
license: gpl-3.0
task_categories:
- conversational
language:
- ko
tags:
- medical
---"
yatsby/persona_chat,"{""language"": [""ko""], ""task_categories"": [""conversational""], ""dataset_info"": {""features"": [{""name"": ""persona"", ""struct"": [{""name"": ""\ub098\uc774"", ""dtype"": ""string""}, {""name"": ""\ube44\ubc00"", ""dtype"": ""string""}, {""name"": ""\uc131\uaca9"", ""dtype"": ""string""}, {""name"": ""\uc678\ubaa8"", ""dtype"": ""string""}, {""name"": ""\uc774\ub984"", ""dtype"": ""string""}, {""name"": ""\uc774\uc0c1"", ""dtype"": ""string""}, {""name"": ""\uc9c1\uc5c5"", ""dtype"": ""string""}]}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""text"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 47910381, ""num_examples"": 21973}, {""name"": ""valid"", ""num_bytes"": 2519850, ""num_examples"": 1160}], ""download_size"": 25171790, ""dataset_size"": 50430231}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""valid"", ""path"": ""data/valid-*""}]}]}","A dataset of personas, questions, and answers generated with Gemini."
devngho/ko_llm_annotations,"{""annotations_creators"": [""machine-generated""], ""language_creators"": [""machine-generated""], ""language"": [""ko""], ""license"": ""mit"", ""source_datasets"": [""HAERAE-HUB/KOREAN-WEBTEXT"", ""blueapple8259/c4-ko-cleaned-2""], ""task_categories"": [""text-classification""], ""tags"": [""synthetic""], ""dataset_info"": {""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""analysis"", ""dtype"": ""string""}, {""name"": ""score"", ""dtype"": ""int64""}], ""splits"": [{""name"": ""v3"", ""num_bytes"": 2572557211.77768, ""num_examples"": 499896}, {""name"": ""v2.1"", ""num_bytes"": 1509722708.3685167, ""num_examples"": 289288}, {""name"": ""v2"", ""num_bytes"": 1509649645.8221369, ""num_examples"": 289274}, {""name"": ""v1"", ""num_bytes"": 3456764386, ""num_examples"": 468616}], ""download_size"": 4977797687, ""dataset_size"": 9048693951.968334}, ""configs"": [{""config_name"": ""v3"", ""data_files"": [{""split"": ""train"", ""path"": ""data/v3-*""}], ""default"": true}, {""config_name"": ""v2.1"", ""data_files"": [{""split"": ""train"", ""path"": ""data/v2.1-*""}]}, {""config_name"": ""v2"", ""data_files"": [{""split"": ""train"", ""path"": ""data/v2-*""}]}, {""config_name"": ""v1"", ""data_files"": [{""split"": ""train"", ""path"": ""data/v1-*""}]}]}","## Dataset
이 데이터셋은 [fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)의 방법을 한국어에 적용하기 위해 만들어진 합성 데이터셋입니다.
v1과 v2는 퀄리티가 낮습니다. **v3을 사용**하는 것을 권장합니다.
This synthetic dataset was created to apply the methods of [fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) to Korean datasets.
v1 and v2 are of low quality. I recommend **using v3**.
### v1
- source: [HAERAE-HUB/KOREAN-WEBTEXT](https://huggingface.co/datasets/HAERAE-HUB/KOREAN-WEBTEXT), sampled 500k
- analysis model: [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)
- temperature: 0.5
- min_p: 0.1
- max_model_len: 8192
- generation time: ~42 hrs
### v2
- source: [blueapple8259/c4-ko-cleaned-2](https://huggingface.co/datasets/blueapple8259/c4-ko-cleaned-2), sampled 500k
- analysis model: [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)
- temperature: 1.0
- min_p: 0.1
- max_model_len: 8192
- generation time: ~29 hrs (stopped during generation)
#### v2.1
- Fixed score parsing so that an output like ``Educational score: 3/10`` is correctly scored as 2 (== round(1.5)) rather than 3
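The v2.1 fix can be illustrated with a small parser: a score reported out of 10 is rescaled onto the 5-point scale before rounding, so `Educational score: 3/10` yields round(1.5) == 2 rather than 3. This is a sketch of the behavior described, not the actual pipeline code:

```python
import re

def parse_score(output: str) -> int:
    """Parse "Educational score: X" or "Educational score: X/Y" outputs."""
    m = re.search(r"Educational score:\s*(\d+)(?:\s*/\s*(\d+))?", output)
    if m is None:
        raise ValueError("no score found")
    score = int(m.group(1))
    if m.group(2) is not None:  # rescale e.g. 3/10 onto the 5-point scale
        score = round(score * 5 / int(m.group(2)))
    return score

assert parse_score("Educational score: 3/10") == 2  # round(1.5) == 2
assert parse_score("Educational score: 4") == 4
```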
### v3
- source: [blueapple8259/c4-ko-cleaned-2](https://huggingface.co/datasets/blueapple8259/c4-ko-cleaned-2), sampled 500k (==v2)
- analysis model: [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
- temperature: 0.5
- min_p: 0.1
- max_model_len: 8192
- generation time: ~56 hrs
Prompt (modified from the fineweb-edu prompt for consistency and to fit Korean cultural context):
```
<|im_start|>system
You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>
<|im_start|>user
You are tasked with evaluating the educational value of a Korean text extract for primary to grade school levels. Here's the text to analyze:
Use the following 5-point additive scoring system:
- 1 point: If the extract provides some basic information relevant to educational topics, even if it includes some irrelevant or non-academic content like advertisements and promotional material.
- 2 point: If the extract addresses certain elements pertinent to education but does not align closely with educational standards. It might mix educational content with non-educational material, offering a superficial overview of potentially useful topics, or presenting information in a disorganized manner and incoherent writing style.
- 3 point: If the extract is appropriate for educational use and introduces key concepts relevant to school curricula. It is coherent though it may not be comprehensive or could include some extraneous information. It may resemble an introductory section of a textbook or a basic tutorial that is suitable for learning but has notable limitations like treating concepts that are too complex for grade school students.
- 4 point: If the extract highly relevant and beneficial for educational purposes for a level not higher than grade school, exhibiting a clear and consistent writing style. It could be similar to a chapter from a textbook or a tutorial, offering substantial educational content, including exercises and solutions, with minimal irrelevant information, and the concepts aren't too advanced for grade school students. The content is coherent, focused, and valuable for structured learning.
- 5 point: if the extract is outstanding in its educational value, perfectly suited for teaching either at primary school or grade school. It follows detailed reasoning, the writing style is easy to follow and offers profound and thorough insights into the subject matter, devoid of any non-educational or complex content.
Analyze the text, considering its relevance and appropriateness for Korean primary to grade school education.
The text extract:
{text}
After your analysis, provide:
1. A justification for your score in English (up to 100 words).
2. The final score, stated as ""Educational score: X"" (where X is the total points).
Present your justification before the final score.<|im_end|>
<|im_start|>assistant
```
### Compute Infrastructure
Google Cloud TPU, vLLM
#### Hardware
TPU v4-8
이 연구는 Google의 TPU Research Cloud [(TRC)](https://sites.research.google/trc/about/)의 Cloud TPU 제공으로 수행되었습니다. ⚡
This research was supported with Cloud TPUs from Google's TPU Research Cloud [(TRC)](https://sites.research.google/trc/about/).⚡"
traintogpb/aihub-koja-translation-integrated-base-1m,"{""license"": ""mit"", ""task_categories"": [""translation""], ""language"": [""ko"", ""ja""]}","### AI Hub Ko-Ja Translation Dataset (Integrated)
This dataset merges 10 AI Hub Korean-Japanese translation datasets. The merged data totals 4,339,465 examples, from which a 10,000-example validation set and a 2,000-example test set are split off and shared identically across all data sizes (large-4.3m, base-1m, small-100k).
- large-4.3m (train): 100% of the merged data; 4,327,465 examples in total
- base-1m (train): 1M examples sampled from the merged data; 1,000,000 in total
- small-100k (train): 100K examples sampled from the merged data; 100,000 in total
### Subsets
| Name | Total Size | Japanese Size (Utilized Only) | URL | Datasetkey (AIHub) |
|---|---|---|---|---|
| 한국어-일본어 번역 말뭉치 | 1350000 | 1350000 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=127) | 127 |
| 일상생활 및 구어체 한-중, 한-일 번역 병렬 말뭉치 데이터 | 2700000 | 1343763 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=546) | 546 |
| 방송 콘텐츠 한-중, 한-일 번역 병렬 말뭉치 데이터 | 1487088 | 887425 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71263) | 71263 |
| 발화유형(문어, 구어, 채팅) 별 기계번역 병렬 말뭉치 | 82002 | 26990 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71411) | 71411 |
| 한국어-다국어(영어 제외) 번역 말뭉치(기술과학) | 270459 | 124142 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71493) | 71493 |
| 한국어-다국어 번역 말뭉치(기초과학) | 270317 | 81449 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71496) | 71496 |
| 한국어-다국어 번역 말뭉치(인문학) | 271721 | 80431 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71498) | 71498 |
| 다국어 통번역 낭독체 데이터 | 1468948 | 120168 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71524) | 71524 |
| 방송콘텐츠 한국어-아시아어 번역 말뭉치 | 820387 | 112978 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71591) | 71591 |
| AI 허브 데이터 활용을 위한 기계 번역말뭉치 | 2653948 | 212119 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71593) | 71593 |"
sungmogi/en2ko_hiphop,"{""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""test"", ""path"": ""data/test-*""}, {""split"": ""valid"", ""path"": ""data/valid-*""}]}], ""dataset_info"": {""features"": [{""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""translation"", ""struct"": [{""name"": ""en"", ""dtype"": ""string""}, {""name"": ""ko"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 5061272.804687347, ""num_examples"": 46158}, {""name"": ""test"", ""num_bytes"": 281254.92317741335, ""num_examples"": 2565}, {""name"": ""valid"", ""num_bytes"": 281145.272135239, ""num_examples"": 2564}], ""download_size"": 4172120, ""dataset_size"": 5623673}, ""task_categories"": [""translation""], ""language"": [""en"", ""ko""], ""pretty_name"": ""en2ko_hiphop"", ""size_categories"": [""10K 기상 및 기후') |
For more information, visit:
- https://github.com/binjang/NIKL-dictionary-parser
- https://krdict.korean.go.kr/kor/mainAction"
nayohan/CodeFeedback-Filtered-Instruction-ko,"{""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""dataset_info"": {""features"": [{""name"": ""query"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""resource"", ""dtype"": ""string""}, {""name"": ""lang"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 394762941, ""num_examples"": 156526}], ""download_size"": 183779812, ""dataset_size"": 394762941}, ""language"": [""ko""], ""tags"": [""code""]}","# Dataset Card for ""CodeFeedback-Filtered-Instruction-ko""
Translated [m-a-p/CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction) using [nayohan/llama3-instrucTrans-enko-8b](https://huggingface.co/nayohan/llama3-instrucTrans-enko-8b)."
dev7halo/korean-mcfaq,"{""license"": ""apache-2.0"", ""language"": [""ko""]}","## Usage
```bash
pip install datasets
```
```python
from datasets import load_dataset
dataset = load_dataset("dev7halo/korean-mcfaq")
```
```
DatasetDict({
train: Dataset({
features: ['Unnamed: 0', '제목', '등록일', '질문', '답변'],
num_rows: 2452
})
})
```
```
# dataset['train'][0]
{'Unnamed: 0': 0,
'제목': ""'언젠가', '언젠가는'의 표현"",
'등록일': '2019. 12. 6. ',
'질문': '\n\t\t \n\t\t \n\t\t""저는 언젠가 간호사가 되고 싶어요.""와 같이 쓸 때, 미래의 불특정한 때를 나타내는 \'언젠가\'라는 단어를 \'언젠가는\'이라고 써도 되나요? \'언젠가\'가 표준어인 것 같은데, 뒤에 \'는\'을 쓴 \'언젠가는\'이 더 많이 쓰이는 것 같아요.\n\t\t \n\t\t \n\t',
'답변': ""\n\t\t\t \n\t\t\t \n\t\t\t\xa0'미래의 어느 때에 가서는'을 뜻하는 부사 '언젠가'를 강조하기 위하여, '강조'의 뜻을 나타내는 보조사 '는'을 붙여 '언젠가는'과 같이 쓸 수 있습니다.\n\t\t\t \n\t\t""}
```
# Github
[Github](https://github.com/HaloKim/korean-mcfaq)"
sh2orc/bccard-qna-augmented,"{""language"": [""ko""], ""license"": ""apache-2.0""}","Data Augmented BC Card Q&A Dataset by BERT Insertion
- Korean
- Payment"
ChuGyouk/AIHUB-mathsolution-edit,"{""language"": [""ko""], ""dataset_info"": {""features"": [{""name"": ""question"", ""dtype"": ""string""}, {""name"": ""new_answer"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 45048807.86342928, ""num_examples"": 28957}], ""download_size"": 13761873, ""dataset_size"": 45048807.86342928}, ""tags"": [""math""], ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}]}","# Details
This is a subset of [수학 과목 자동 풀이 데이터](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=71716) from AIHUB, with answers edited by [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct). To see the differences between the original answers and this dataset's answers, refer to the original data [here](https://huggingface.co/datasets/ChuGyouk/AIHUB-mathsolution).
SungJoo/KBMC,"{""license"": ""apache-2.0"", ""task_categories"": [""text-classification""], ""language"": [""ko""], ""tags"": [""medical"", ""NER"", ""Korean"", ""Bio-medical""], ""pretty_name"": ""KBMC"", ""size_categories"": [""1K xP3x (Crosslingual Public Pool of Prompts eXtended) is a collection of prompts & datasets across 277 languages & 16 NLP tasks. It contains all of xP3 + much more! It is used for training future contenders of mT0 & BLOOMZ at project Aya @[C4AI](https://cohere.for.ai/) 🧡
>
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3) together with the file in this repository named `xp3x_create.py`. We provide this version to save processing time.
- **Languages:** 277
- **xP3 Dataset Family:**
| Name | Explanation | Example models |
|---|---|---|
| xP3x | Mixture of 17 tasks in 277 languages with English prompts | WIP - Join us at Project Aya @C4AI to help! |
| xP3 | Mixture of 13 training tasks in 46 languages with English prompts | bloomz & mt0-xxl |
| xP3mt | Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English) | bloomz-mt & mt0-xxl-mt |
| xP3all | xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts | |
| xP3megds | Megatron-DeepSpeed processed version of xP3 | bloomz |
| P3 | Repreprocessed version of the English-only P3 with 8 training tasks | bloomz-p3 & mt0-xxl-p3 |
## Dataset Structure
### Data Instances
An example looks as follows:
```json
{
'inputs': '11月、遂にクロームはファイヤーフォックスを引き離し始めた。_はインターネットユーザーの評価が高まったのだ。\nReplace the _ in the above sentence with the correct option: \n- ファイヤーフォックス\n- クローム',
'targets': 'クローム',
'language': 'jpn_Jpan',
'split': 'test',
'template': 'Replace',
'dataset': 'Muennighoff/xwinograd',
'config': 'jp'
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
- `language`: The language code. The codes are an extension of the FLORES-200 codes, where the first part is the language code and the second part the script code.
- `split`: The data split, e.g. `train` or `test`.
- `template`: The name of the prompt used.
- `dataset`: The Hugging Face dataset identifier of where the data stems from.
- `config`: The config of the Hugging Face dataset.
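Since the `language` field concatenates a FLORES-200 language code with a script code, it can be split mechanically. A small illustrative helper:

```python
def split_language_code(code: str) -> tuple[str, str]:
    """Split a code like "jpn_Jpan" into (language, script)."""
    lang, script = code.split("_", 1)
    return lang, script

assert split_language_code("jpn_Jpan") == ("jpn", "Jpan")
assert split_language_code("zho_Hans") == ("zho", "Hans")
```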
### Usage
The dataset is 680 gigabytes and contains 530 million samples. You may want to filter it and then deduplicate, depending on your needs.
Loading by language:
```python
# pip install -q datasets
from datasets import load_dataset
ds = load_dataset("Muennighoff/xP3x", "zho_Hans", streaming=True) # Use streaming to not download all at once
for x in ds["train"]:
    print(x)
    break
```
You can then filter down by the data fields to e.g. only get certain configs or datasets.
As every dataset-config-template is its own jsonl file, you can also decide on the datasets, configs and templates you want and only download them.
For example, to download all Japanese xwinograd samples, you could do:
```python
# pip install -q datasets
from datasets import load_dataset
import multiprocessing
# pip install --upgrade huggingface-hub
from huggingface_hub import HfFileSystem, hf_hub_url
fs = HfFileSystem()
fps = fs.glob("datasets/CohereForAI/xP3x/data/jpn_Jpan/*xwinograd*")
resolved_paths = [fs.resolve_path(file) for file in fps]
data_files = [hf_hub_url(resolved_path.repo_id, resolved_path.path_in_repo, repo_type=resolved_path.repo_type) for resolved_path in resolved_paths]
ds = load_dataset("json", data_files=data_files, num_proc=8)["train"]
```
Sometimes it may be faster to clone the entire repo. To download all English files, you could do e.g.
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/CohereForAI/xP3x
cd xP3x
git lfs pull --include="data/eng_Latn/*"
```
### Data Splits
|Language|Code|Kilobytes|%|Samples|%|
|--------|------:|------:|-:|---:|-:|
|Kikongo|kon_Latn|648,992|0.1|1,223,481|0.23|
#### Language specifics
- `Japanese`: Data in `jpn_Hira`, `jpn_Kana`, `jpn_Hani` is guaranteed to have Hiragana, Katakana or Kanji, respectively in each sample. However, they may still include other styles. So while all samples in `jpn_Kana` are guaranteed to have Katakana, there may still be Hiragana or Kanji.
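That guarantee can be checked mechanically with Unicode block ranges; the sketch below covers only the main Hiragana, Katakana, and CJK Unified Ideographs blocks, so it is a simplification:

```python
def japanese_scripts_in(text: str) -> set[str]:
    """Detect which Japanese scripts occur in a sample (main blocks only)."""
    found = set()
    for ch in text:
        cp = ord(ch)
        if 0x3040 <= cp <= 0x309F:
            found.add("Hira")  # Hiragana block
        elif 0x30A0 <= cp <= 0x30FF:
            found.add("Kana")  # Katakana block
        elif 0x4E00 <= cp <= 0x9FFF:
            found.add("Hani")  # CJK Unified Ideographs (Kanji)
    return found

# A jpn_Kana sample must contain Katakana but may also mix in other scripts
assert "Kana" in japanese_scripts_in("クロームとchrome")
```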
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- [MultiEURLEX](https://huggingface.co/datasets/multi_eurlex)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
- Natural Language Inference (NLI)
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
#### Dataset specifics
- Flores-200: There are three prompts for Flores: `continuation`, `question`, `command`, which represent three commonly used prompting styles, i.e., making the prompt read like a natural continuation, turning it into a question, or commanding the model to do something.
- tatoeba_mt: Contains duplicates. For example, it has data that is both classified as `jpn_Kana` and `jpn_Jpan`, so you may want to deduplicate.
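A minimal sketch of such a deduplication pass (the field names `source`, `target`, and `pair` here are illustrative, not the dataset's actual column names):

```python
def dedup_pairs(rows):
    """Drop rows whose (source, target) text already appeared,
    regardless of how the script/language pair was labelled."""
    seen = set()
    unique = []
    for row in rows:
        key = (row["source"], row["target"])
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

rows = [
    {"pair": "kor-jpn_Kana", "source": "안녕", "target": "こんにちは"},
    {"pair": "kor-jpn_Jpan", "source": "안녕", "target": "こんにちは"},  # same text, different label
    {"pair": "kor-jpn_Jpan", "source": "고마워", "target": "ありがとう"},
]
print(len(dedup_pairs(rows)))  # -> 2
```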
## Additional Information
### Licensing Information
The dataset collection is released under Apache 2.0. Note that individual datasets may have different licenses.
### Citation Information
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset.
Thanks to the Aya team @[C4AI](https://cohere.for.ai/) 🧡"
data-silence/sumnews,"{""language"": [""am"", ""ar"", ""az"", ""bn"", ""my"", ""zh"", ""en"", ""fr"", ""gu"", ""ha"", ""hi"", ""ig"", ""id"", ""ja"", ""rn"", ""ko"", ""ky"", ""mr"", ""ne"", ""om"", ""ps"", ""fa"", ""pcm"", ""pt"", ""pa"", ""ru"", ""gd"", ""sr"", ""si"", ""so"", ""es"", ""sw"", ""ta"", ""te"", ""th"", ""ti"", ""tr"", ""uk"", ""ur"", ""uz"", ""vi"", ""cy"", ""yo""], ""license"": [""cc-by-nc-sa-4.0""], ""multilinguality"": [""multilingual""], ""size_categories"": [""100K
Languages with under 1000 messages
- Vietnamese: 952
- Basque: 947
- Polish: 886
- Hungarian: 811
- Arabic: 666
- Dutch: 628
- Swedish: 512
- Turkish: 454
- Finnish: 386
- Czech: 372
- Danish: 358
- Galician: 339
- Hebrew: 255
- Romanian: 200
- Norwegian Bokmål: 133
- Indonesian: 115
- Bulgarian: 95
- Bengali: 82
- Persian: 72
- Greek: 66
- Esperanto: 59
- Slovak: 19
## Contact
- Discord [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai)"
werty1248/Korean-1930-Novel-Scene-Summarize,"{""license"": ""mit"", ""task_categories"": [""summarization""], ""language"": [""ko""]}","## Scene Segmentation and Summarization Dataset for Korean Public-Domain Novels
- Source data: https://gongu.copyright.or.kr/gongu/wrt/wrtCl/listWrtText.do?menuNo=200019
- Collected and preprocessed 96 novels in total
- Excluded novels with heavy use of Chinese characters (Hanja)
- Removed Hanja and normalized spacing
### Scene Segmentation
- Model used: Gemini-1.5-Flash
- The model was instructed to split each novel into scenes at appropriate sentence boundaries, with each scene between 100 and 1,200 characters (including spaces)
- 12,108 scenes generated in total
### Summarization
- Model used: Gemini-1.5-Flash (occasionally GPT-4o)
- For each scene, the model extracts the characters, key props, and events, and generates a summary (scenario)"
jaeyong2/persona-inst,"{""dataset_info"": {""features"": [{""name"": ""Level"", ""dtype"": ""int64""}, {""name"": ""English"", ""dtype"": ""string""}, {""name"": ""Korean"", ""dtype"": ""string""}, {""name"": ""Japanese"", ""dtype"": ""string""}, {""name"": ""Thai"", ""dtype"": ""string""}, {""name"": ""Vietnamese"", ""dtype"": ""string""}, {""name"": ""context"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2973151280, ""num_examples"": 3006572}], ""download_size"": 995697751, ""dataset_size"": 2973151280}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""language"": [""ja"", ""ko"", ""th"", ""vi""], ""license"": ""cc-by-nc-sa-4.0""}","## How to use
```python
>>> from datasets import load_dataset
>>> ds = load_dataset(""jaeyong2/persona-inst"", split=""train"")
>>> ds
Dataset({
features: ['Level', 'English', 'Korean', 'Thai', 'Vietnamese', 'context'],
num_rows: 3006572
})
```
### Development Process
1. Generated persona pairs from [proj-persona/PersonaHub](https://huggingface.co/datasets/proj-persona/PersonaHub)
2. Used the [Qwen/Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) model to generate the questions.
## License
- Qwen/Qwen2.5-72B-Instruct : https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
- proj-persona/PersonaHub : https://spdx.org/licenses/CC-BY-NC-SA-4.0
## Acknowledgement
This research is supported by **TPU Research Cloud program**."
CaterinaLac/sharegpt-deduplicated,"{""license"": ""apache-2.0"", ""task_categories"": [""conversational""], ""language"": [""en"", ""zh"", ""ko"", ""fr"", ""ja"", ""es"", ""no"", ""et"", ""de"", ""ca"", ""vi"", ""fi""], ""size_categories"": [""1KThe deduplication process has two steps:
1. The literal duplicates (both input and outputs) are removed
2. The remaining (5749) instances are embedded with the [SentenceTransformer library](https://www.sbert.net/) (""paraphrase-multilingual-mpnet-base-v2"" model).
Then we compute the cosine similarity between all possible pairs and treat pairs with a similarity > 0.95 as paraphrases. For each paraphrase group, we retain only one element.
The resulting dataset has 5139 elements.
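A minimal sketch of the similarity-based step, assuming the embeddings have already been computed (in the actual pipeline they come from the ""paraphrase-multilingual-mpnet-base-v2"" SentenceTransformer model; here we use toy 2-D vectors):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def dedup_by_similarity(embeddings, threshold=0.95):
    """Greedily keep one representative per paraphrase group:
    an item survives only if it is not too similar to any survivor."""
    kept = []
    for i, emb in enumerate(embeddings):
        if all(cosine(emb, embeddings[j]) <= threshold for j in kept):
            kept.append(i)
    return kept

# Toy vectors: the first two are near-duplicates, the third is distinct.
vecs = [[1.0, 0.0], [0.999, 0.01], [0.0, 1.0]]
print(dedup_by_similarity(vecs))  # -> [0, 2]
```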
### Languages
The dataset includes several languages, but the vast majority of it is in English. Roughly 600 instances are in more than one language, as detected by [langdetect](https://pypi.org/project/langdetect/).
The languages that appear across the dataset, together with the number of instances they appear in, follow:
| Language | Instances |
|:---------|----------:|
| en | 4053 |
| zh-cn | 423 |
| ko | 333 |
| fr | 168 |
| ja | 151 |
| es | 142 |
| no | 110 |
| et | 97 |
| de | 81 |
| ca | 78 |
| vi | 63 |
| fi | 52 |
| zh-tw | 47 |
| pt | 42 |
| tl | 39 |
| ru | 24 |
| he | 24 |
| id | 23 |
| it | 22 |
| sv | 21 |
| pl | 16 |
| nl | 16 |
| th | 15 |
| ro | 11 |
| da | 9 |
| tr | 8 |
| cs | 8 |
| hr | 6 |
| uk | 5 |
| af | 5 |
| ar | 4 |
| bg | 3 |
| cy | 2 |
| sk | 2 |
| hu | 2 |
| so | 2 |
| bn | 1 |
| sl | 1 |
| hi | 1 |
| sw | 1 |
| lv | 1 |
| el | 1 |
### Data Fields
Each instance has two fields:
- 'input': one turn of a human-bot conversation, initiated by a human. It starts with 'Human: ', and it ends with 'Assistant: '
- 'output': the bot reply"
neulab/PangeaBench-xchat,{},"---
dataset_info:
features:
- name: question_id
dtype: int64
- name: text
dtype: string
- name: category
dtype: string
- name: image
dtype: image
splits:
- name: chinese
num_examples: 50
- name: english
num_examples: 50
- name: hindi
num_examples: 50
- name: indonesian
num_examples: 50
- name: japanese
num_examples: 50
- name: kinyarwanda
num_examples: 50
- name: korean
num_examples: 50
- name: spanish
num_examples: 50
dataset_size: 400
configs:
- config_name: default
data_files:
- split: chinese
path: data/chinese.parquet
- split: english
path: data/english.parquet
- split: hindi
path: data/hindi.parquet
- split: indonesian
path: data/indonesian.parquet
- split: japanese
path: data/japanese.parquet
- split: kinyarwanda
path: data/kinyarwanda.parquet
- split: korean
path: data/korean.parquet
- split: spanish
path: data/spanish.parquet
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- zh
- en
- hi
- id
- ja
- rw
- ko
- es
pretty_name: xchat
size_categories:
- n<1K
---"
hecatonai/Housing_Subscription_QA_Dataset,"{""task_categories"": [""question-answering""], ""language"": [""ko""], ""tags"": [""finance""], ""size_categories"": [""n<1K""], ""config_names"": [""2024"", ""2022"", ""2021""], ""configs"": [{""config_name"": ""2024"", ""data_files"": [{""split"": ""train"", ""path"": ""2024/*""}]}, {""config_name"": ""2022"", ""data_files"": [{""split"": ""train"", ""path"": ""2022/*""}]}, {""config_name"": ""2021"", ""data_files"": [{""split"": ""train"", ""path"": ""2021/*""}]}]}","# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Dataset Description
- **Language(s) (NLP):** Korean
### Dataset Sources [optional]
- **Origin of Data:** [2024 주택청약 FAQ](https://www.molit.go.kr/USR/policyData/m_34681/dtl.jsp?search=%EC%A3%BC%ED%83%9D&srch_dept_nm=&srch_dept_id=&srch_usr_nm=&srch_usr_titl=Y&srch_usr_ctnt=&search_regdate_s=&search_regdate_e=&psize=10&s_category=&p_category=&lcmspage=1&id=4765)
## Dataset Structure
```
[
{
'question' : ""..."",
'answer' : ""..."",
},
{
""...""
},
...
]
```
## Data Fields
* question : A question about Korea's housing subscription system.
* answer : The answer to the question in the same block.
traintogpb/aihub-koja-translation-integrated-large-4.3m,"{""license"": ""mit"", ""task_categories"": [""translation""], ""language"": [""ko"", ""ja""]}","### AI Hub Ko-Ja Translation Dataset (Integrated)
This dataset merges 10 Korean-Japanese translation datasets from AI Hub. The merged data totals 4,339,465 pairs, from which a validation set of 10,000 pairs and a test set of 2,000 pairs are split off and shared identically across all dataset sizes (large-4.3m, base-1m, small-100k).
- large-4.3m (train): 100% of the merged data; 4,327,465 pairs in total
- base-1m (train): 1M pairs sampled from the merged data; 1,000,000 in total
- small-100k (train): 100K pairs sampled from the merged data; 100,000 in total
### Subsets
| Name | Total Size | Japanese Size (Utilized Only) | URL | Datasetkey (AIHub) |
|---|---|---|---|---|
| 한국어-일본어 번역 말뭉치 | 1350000 | 1350000 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=127) | 127 |
| 일상생활 및 구어체 한-중, 한-일 번역 병렬 말뭉치 데이터 | 2700000 | 1343763 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=546) | 546 |
| 방송 콘텐츠 한-중, 한-일 번역 병렬 말뭉치 데이터 | 1487088 | 887425 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71263) | 71263 |
| 발화유형(문어, 구어, 채팅) 별 기계번역 병렬 말뭉치 | 82002 | 26990 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71411) | 71411 |
| 한국어-다국어(영어 제외) 번역 말뭉치(기술과학) | 270459 | 124142 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71493) | 71493 |
| 한국어-다국어 번역 말뭉치(기초과학) | 270317 | 81449 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71496) | 71496 |
| 한국어-다국어 번역 말뭉치(인문학) | 271721 | 80431 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71498) | 71498 |
| 다국어 통번역 낭독체 데이터 | 1468948 | 120168 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71524) | 71524 |
| 방송콘텐츠 한국어-아시아어 번역 말뭉치 | 820387 | 112978 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71591) | 71591 |
| AI 허브 데이터 활용을 위한 기계 번역말뭉치 | 2653948 | 212119 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71593) | 71593 |"
0x22almostEvil/tatoeba-mt-all-in-one,"{""annotations_creators"": [""Helsinki-NLP""], ""datasets"": [""Helsinki-NLP/tatoeba_mt""], ""language_creators"": [""crowdsourced""], ""language"": [""af"", ""ar"", ""az"", ""be"", ""bg"", ""bn"", ""br"", ""bs"", ""ca"", ""ch"", ""cs"", ""cv"", ""cy"", ""da"", ""de"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""eu"", ""fa"", ""fi"", ""fo"", ""fr"", ""fy"", ""ga"", ""gd"", ""gl"", ""gn"", ""he"", ""hi"", ""hr"", ""hu"", ""hy"", ""ia"", ""id"", ""ie"", ""io"", ""is"", ""it"", ""ja"", ""jv"", ""ka"", ""kk"", ""km"", ""ko"", ""ku"", ""kw"", ""la"", ""lb"", ""lt"", ""lv"", ""mi"", ""mk"", ""ml"", ""mn"", ""mr"", ""ms"", ""mt"", ""my"", ""nb"", ""nl"", ""nn"", ""no"", ""oc"", ""pl"", ""pt"", ""qu"", ""rn"", ""ro"", ""ru"", ""sh"", ""sl"", ""sq"", ""sr"", ""sv"", ""sw"", ""ta"", ""te"", ""th"", ""tk"", ""tl"", ""tr"", ""tt"", ""ug"", ""uk"", ""ur"", ""uz"", ""vi"", ""vo"", ""yi"", ""zh""], ""license"": [""cc-by-2.0""], ""multilinguality"": [""translation""], ""pretty_name"": ""The Tatoeba Translation Challenge | All In One"", ""size_categories"": [""1MHere are examples from KoMT-Bench:
| Category | Turn | MT-Bench | KoMT-Bench |
|:---:|:---:|---|---|
| Writing | 1st | Imagine you are writing a blog post comparing two popular smartphone models. Develop an outline for the blog post, including key points and subheadings to effectively compare and contrast the features, performance, and user experience of the two models. Please answer in fewer than 200 words. | 두 개의 인기 스마트폰 모델을 비교하는 블로그 게시물을 작성한다고 가정합니다. 두 모델의 기능, 성능, 사용자 경험을 효과적으로 비교하고 대조할 수 있도록 핵심 사항과 소제목을 포함하여 블로그 게시물의 개요를 작성하세요. 200자 이내로 답하십시오. |
| Writing | 2nd | Take your previous response and rephrase it as a limerick. | 이전 답변을 충청도 사투리로 재작성하십시오. |
| Math | 1st | When a number is divided by 10, the remainder is 4. What is the remainder when twice the number is divided by 4? | 어떤 숫자를 10으로 나눈 나머지는 4입니다. 그 숫자의 두 배를 4로 나눈 나머지를 구하세요. |
| Math | 2nd | What about when twice the number is divided by 5? | 그 숫자의 두 배를 5로 나누면 어떨까요? |
| Humanities | 1st | Provide insights into the correlation between economic indicators such as GDP, inflation, and unemployment rates. Explain how fiscal and monetary policies affect those indicators. | GDP, 인플레이션, 실업률과 같은 경제 지표 간의 상관관계에 대한 통찰을 제시하세요. 이러한 지표들에 재정 및 통화 정책이 어떤 영향을 미치는지 설명하세요. |
| Humanities | 2nd | Now, explain them again like I'm five. | 이제 제가 5살이라 생각하고 다시 설명해 주세요. |
## Model Results
Here are the evaluation results of various language models including [EXAONE 3.0 7.8B instruction-tuned model](https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct) on KoMT-Bench. Please refer to [EXAONE 3.0 technical report](https://arxiv.org/abs/2408.03541) for details.
| | EXAONE 3.0 7.8B Inst. | Llama 3.1 8B Inst. | Gemma 2 9B Inst. | QWEN 2 7B Inst. | Phi 3 7B Inst. | Mistral 7B Inst. |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| KoMT-Bench | **8.92** | 6.06 | 7.92 | 7.69 | 4.87 | 5.20 |
## References
[1] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 46595–46623. Curran Associates, Inc., 2023.
## Citation
```
@misc{KoMT-Bench,
author = {LG AI Research},
title = {KoMT-Bench},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/datasets/LGAI-EXAONE/KoMT-Bench}}
}
```"
kifai/KoInFoBench,"{""license"": ""mit"", ""task_categories"": [""text-generation""], ""language"": [""ko""], ""size_categories"": [""n<1K""]}","# KoInFoBench
KoInFoBench is a specialized evaluation dataset designed to assess how well Large Language Models (LLMs) follow instructions written in Korean.
The current version of `KoInFoBench` consists of 60 instruction sets and 233 questions.
Inspired by the [InFoBench](https://huggingface.co/datasets/kqsong/InFoBench) dataset, we extend its concept by focusing on the nuances and features of the Korean language.
- 🖥️ Code to reproduce the results or evaluate your own LLMs is available at [https://github.com/KIFAI/KoInFoBench](https://github.com/KIFAI/KoInFoBench)
- 📄 The paper is in preparation and will be released soon!
### 🚀 Update
- **2024.05.18**: add other results `gpt-4o-2024-05-13`, `claude-3-sonnet-20240229`, `solar-1-mini-chat`
## Dataset Overview
### Usage
```python
from datasets import load_dataset
dataset = load_dataset('kifai/KoInFoBench')
```
### Example
```json
{
""id"": ""19"",
""subset"": ""input_intensive_set"",
""category"": ""구글캘린더"",
""instruction"": ""다음은 해외 콘서트 참가 확정에 대한 영문으로 작성된 이메일입니다. 한국시간(KST) 기준으로 참가 확정된 날짜, 콘서트 날짜와 시간을 \""년-월-일 시간\"" 형식으로 작성하고 한국시간 기준으로 참가 확정일로부터 콘서트 날짜까지 몇 일 남았는지 계산하여 국문으로 정답을 함께 작성합니다."",
""input"": ""Email: We are pleased to inform you that your concert ticket purchase has been successfully confirmed at approximately 11am GMT today (26 March 2024). The concert you have been eagerly awaiting is scheduled to take place on 17 September 2024, starting at 6 PM UTC+2. Please mark your calendar and prepare to join us for an unforgettable evening of live music and entertainment. Your ticket grants you access to a night filled with exceptional performances, engaging visuals, and the vibrant energy of live music. We recommend arriving early to enjoy the full experience, including pre-concert activities and amenities."",
""decomposed_questions"": [
""답변은 해외 콘서트 참가 일정에 대한 내용이 포함되어 있습니까?"",
""답변으로 작성된 모든 일정은 한국시간(KST) 기준으로 작성되었습니까?"",
""콘서트 참가가 확정된 날짜 그리고 콘서트 날짜와 시간 2개의 일정을 모두 포함합니까?"",
""날짜와 시간이 \""년-월-일 시간\"" 형식으로 올바르게 작성되었습니까?"",
""콘서트 확정일로부터 콘서트까지 남은 기간은 콘서트 시작일을 포함할 경우 177일, 미포함인 경우 176일입니다. 남은 기간을 176일 혹은 177일로 계산하였습니까?""
],
""question_label"": [
""Format"",
""Format, Content"",
""Format"",
""Format"",
""Number""
],
""ref"": """"
}
```
### Fields
- **id**: unique identifier for each entry in the dataset
- **subset**: either `input_intensive_set` or `instruction_intensive_set`, where ""intensive"" indicates whether the entry focuses on evaluating Korean-specific input or on detailed instruction following
- **category**: the category to which the entry belongs. For example, '구글캘린더' indicates that the entry is related to tasks associated with Google Calendar
- **instruction**: a string containing the instructions
- **input**: a string containing context information; it can be empty
- **decomposed_questions**: a list of questions that decompose the task of the entry. Each question is designed to evaluate one aspect of the LLM's response
- **question_label**: a list of labels identifying the type of each decomposed question. Each label belongs to one or more aspects, such as Format, Content, Number, Linguistic, and Style
- **ref**: a string holding references or additional information; it can be empty
## Evaluation Result
### DRFR
The Decomposed Requirements Following Ratio (DRFR) is the metric used to evaluate how accurately LLMs respond to the instruction/input.
It is computed as the average accuracy across the answers to the decomposed questions for each instruction.
The following table summarizes model performance on our dataset.
| Model | H_DRFR | A_DRFR | Alignment |
|------------------------------ |-------- |--------|-----------|
| **claude-3-opus-20240229** | **0.854** | 0.850 | 87% |
| **gpt-4-turbo-2024-04-09** | 0.850 | 0.880 | 87% |
| **gpt-4o-2024-05-13** | 0.850 | 0.863 | 89% |
| **gpt-4-0125-preview** | 0.824 | 0.824 | 83% |
| **claude-3-sonnet-20240229** | 0.790 | 0.828 | 84% |
| **gemini-1.5-pro** | 0.773 | 0.811 | 83% |
| **meta-llama/Meta-Llama-3-70B-Instruct-** | 0.747 | 0.863 | 84% |
| **hpx003** | 0.691 | 0.738 | 83% |
| **gpt-3.5-turbo-0125** | 0.678 | 0.734 | 82% |
| **solar-1-mini-chat** | 0.614 | 0.695 | 79% |
| **yanolja/EEVE-Korean-Instruct-10.8B-v1.0** | 0.597 | 0.730 | 79% |
- `H_DRFR`: The accuracy of model responses as evaluated by the human expert
- `A_DRFR`: The accuracy of model responses automatically evaluated by GPT-4 as employing the capability of LLM-as-a-judge
- `Alignment`: The degree of agreement or consistency between the human and automated evaluation
> Please note that the evaluation results of the LLMs presented in the table above may vary due to randomness in model responses.
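Under the definitions above, the two headline numbers can be sketched in a few lines (a toy illustration with hypothetical per-question verdicts, not the repository's actual evaluation code):

```python
def drfr(verdicts):
    """Average accuracy over all decomposed-question verdicts (True/False),
    pooled across instructions."""
    flat = [v for instruction in verdicts for v in instruction]
    return sum(flat) / len(flat)

def alignment(human, auto):
    """Fraction of decomposed questions on which the human and the
    automated (LLM-as-a-judge) evaluations agree."""
    h = [v for instruction in human for v in instruction]
    a = [v for instruction in auto for v in instruction]
    return sum(x == y for x, y in zip(h, a)) / len(h)

# Hypothetical verdicts for two instructions with 3 and 2 decomposed questions.
human = [[True, True, False], [True, False]]
auto  = [[True, True, True],  [True, False]]
print(drfr(human))             # H_DRFR -> 0.6
print(alignment(human, auto))  # -> 0.8
```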
## Additional Information
### License Information
This dataset is released under the [MIT LICENSE](https://github.com/KIFAI/KoInfoBench/blob/main/LICENSE)
### Citation Information
```
@article{,
title={KoInFoBench},
author={Sungwoo Oh, Sungjun Kown, Donggyu Kim},
year={2024},
eprint={},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```"
neulab/PangeaBench-xgqa,"{""dataset_info"": {""features"": [{""name"": ""question"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""full_answer"", ""dtype"": ""string""}, {""name"": ""image_id"", ""dtype"": ""string""}, {""name"": ""image"", ""dtype"": ""image""}], ""splits"": [{""name"": ""bn"", ""num_bytes"": 498517814, ""num_examples"": 9666}, {""name"": ""de"", ""num_bytes"": 498108367, ""num_examples"": 9666}, {""name"": ""en"", ""num_bytes"": 498078827, ""num_examples"": 9666}, {""name"": ""id"", ""num_bytes"": 498180441, ""num_examples"": 9666}, {""name"": ""ko"", ""num_bytes"": 498157980, ""num_examples"": 9666}, {""name"": ""pt"", ""num_bytes"": 498078408, ""num_examples"": 9666}, {""name"": ""ru"", ""num_bytes"": 498298164, ""num_examples"": 9666}, {""name"": ""zh"", ""num_bytes"": 498005624, ""num_examples"": 9666}], ""download_size"": 2692912777, ""dataset_size"": 3985425625}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""bn"", ""path"": ""data/bn-*""}, {""split"": ""de"", ""path"": ""data/de-*""}, {""split"": ""en"", ""path"": ""data/en-*""}, {""split"": ""id"", ""path"": ""data/id-*""}, {""split"": ""ko"", ""path"": ""data/ko-*""}, {""split"": ""pt"", ""path"": ""data/pt-*""}, {""split"": ""ru"", ""path"": ""data/ru-*""}, {""split"": ""zh"", ""path"": ""data/zh-*""}]}], ""license"": ""cc-by-4.0"", ""task_categories"": [""visual-question-answering""], ""language"": [""bn"", ""de"", ""en"", ""id"", ""ko"", ""pt"", ""ru"", ""zh""], ""pretty_name"": ""xgqa"", ""size_categories"": [""10K
> [!TIP]
>
> Two words are structurally similar if and only if the two share the same
> [stem](https://en.wikipedia.org/wiki/Word_stem)
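As an illustration of that definition, structurally similar words can be grouped by stem. This is only a sketch: the real dataset derives stems from Wiktionary data, whereas here we fake them with a naive suffix-stripper.

```python
from collections import defaultdict

def naive_stem(word: str) -> str:
    """Toy stemmer for illustration only: strips a few common English
    suffixes. The actual dataset uses stems from Wiktionary instead."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def structurally_similar_groups(words):
    """Group words that share the same stem; keep groups of size > 1."""
    groups = defaultdict(list)
    for word in words:
        groups[naive_stem(word)].append(word)
    return {stem: ws for stem, ws in groups.items() if len(ws) > 1}

print(structurally_similar_groups(["walk", "walked", "walking", "talk"]))
# -> {'walk': ['walk', 'walked', 'walking']}
```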
Development
-----------
### Data Source
Although [the original Wiktionary dump](https://dumps.wikimedia.org/) is available, parsing it from scratch involves a
rather complicated process. For example,
[acquiring the inflection data of most Indo-European languages on Wiktionary has already triggered some research-level efforts](https://stackoverflow.com/a/62977327).
We may parse the dump ourselves in the future. At present, however, we simply build on the awesome work by
[tatuylonen](https://github.com/tatuylonen/wiktextract), which has already processed the dump and published it
[in JSONL format](https://kaikki.org/dictionary/rawdata.html). wiktionary-data sources the data from the
__raw Wiktextract data (JSONL, one object per line)__ option there.
### Environment Setup
Get the source code:
```console
git clone git@github.com:paion-data/wiktionary-data.git
cd wiktionary-data
```
It is strongly recommended to work in an isolated environment. Install virtualenv and create an isolated Python
environment by
```console
python3 -m pip install --user -U virtualenv
python3 -m virtualenv .venv
```
To activate this environment:
```console
source .venv/bin/activate
```
or, on Windows
```console
.venv\Scripts\activate
```
> [!TIP]
>
> To deactivate this environment, use
>
> ```console
> deactivate
> ```
### Installing Dependencies
```console
pip3 install -r requirements.txt
```
License
-------
The use and distribution terms for [wiktionary-data]() are covered by the [Apache License, Version 2.0].
[Apache License Badge]: https://img.shields.io/badge/Apache%202.0-F25910.svg?style=for-the-badge&logo=Apache&logoColor=white
[Apache License, Version 2.0]: https://www.apache.org/licenses/LICENSE-2.0
[GitHub workflow status badge]: https://img.shields.io/github/actions/workflow/status/paion-data/wiktionary-data/ci-cd.yaml?branch=master&style=for-the-badge&logo=github&logoColor=white&label=CI/CD
[GitHub workflow status URL]: https://github.com/paion-data/wiktionary-data/actions/workflows/ci-cd.yaml
[Hugging Face dataset badge]: https://img.shields.io/badge/Hugging%20Face%20Dataset-wiktionary--data-FF9D00?style=for-the-badge&logo=huggingface&logoColor=white&labelColor=6B7280
[Hugging Face dataset URL]: https://huggingface.co/datasets/paion-data/wiktionary-data
[Hugging Face sync status badge]: https://img.shields.io/github/actions/workflow/status/paion-data/wiktionary-data/ci-cd.yaml?branch=master&style=for-the-badge&logo=github&logoColor=white&label=Hugging%20Face%20Sync%20Up
[Hugging Face sync status URL]: https://github.com/paion-data/wiktionary-data/actions/workflows/ci-cd.yaml
[Python Version Badge]: https://img.shields.io/badge/Python-3.10-FFD845?labelColor=498ABC&style=for-the-badge&logo=python&logoColor=white"
erfanzar/UltraChat-Matic,"{""dataset_info"": {""features"": [{""name"": ""system"", ""dtype"": ""string""}, {""name"": ""user"", ""sequence"": ""string""}, {""name"": ""assistant"", ""sequence"": ""string""}, {""name"": ""dialogs"", ""sequence"": ""string""}, {""name"": ""conv_depth"", ""dtype"": ""int64""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 447216231, ""num_examples"": 109765}], ""download_size"": 242424003, ""dataset_size"": 447216231}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""language"": [""en"", ""es"", ""ru"", ""de"", ""pl"", ""th"", ""vi"", ""sv"", ""bn"", ""da"", ""he"", ""it"", ""fa"", ""sk"", ""id"", ""nb"", ""el"", ""nl"", ""hu"", ""eu"", ""zh"", ""eo"", ""ja"", ""ca"", ""cs"", ""bg"", ""fi"", ""pt"", ""tr"", ""ro"", ""ar"", ""uk"", ""gl"", ""fr"", ""ko""], ""tags"": [""code"", ""biology"", ""medical""], ""size_categories"": [""1M
The distribution of languages in this dataset:
# License
We have endeavoured to base our dataset only on source datasets which allow for fully free use. Therefore, we share this dataset with the Apache 2.0 license.
# Developed by
This model was trained by Peter Devine ([ptrdvn](https://huggingface.co/ptrdvn)) for Lightblue"
ShoukanLabs/OpenNiji-Dataset-Aesthetic-Finetune,"{""task_categories"": [""text-to-image""], ""language"": [""en"", ""ja"", ""ko""], ""tags"": [""anime"", ""dataset"", ""Nijijourney"", ""Midjourney"", ""discord""], ""size_categories"": [""10K,
'ID': '5919991144272485961_0',
'Subset': ""('Japanese', 'Japan')"",
'Question': '写真に写っているキャラクターの名前は? ',
'Translated Question': 'What is the name of the object in the picture? ',
'Options': ['コスモ星丸', 'ミャクミャク', ' フリービー ', 'ハイバオ'],
'Translated Options': ['Cosmo Hoshimaru','MYAKU-MYAKU','Freebie ','Haibao'],
'Label': -1,
'Category': 'Objects / materials / clothing',
'Image Type': 'Self',
'Image Source': 'Self-open',
'License': 'CC BY-SA'
}
```
Data Fields
The data fields are:
- `image`: The image referenced by the question.
- `ID`: A unique ID for the given sample.
- `Subset`: A Language-Country pair
- `Question`: The question elicited in the local language.
- `Translated Question`: The question elicited in the English language.
- `Options`: A list of possible answers to the question in the Local Language.
- `Translated Options`: A list of possible answers to the question in the English Language.
- `Label`: Will always be -1. Please refer to our leaderboard to get your performance.
- `Category`: A specific category for the given sample.
- `Image Type`: `Self` or `External`, meaning if the image is self-taken from the annotator or comes from the internet.
- `Image Source`: If the image type is Self, this can be `Self-open` or `Self-research_only`, meaning that the image can be used for commercial purposes or only for research purposes. If the image type is External, this will be the link to the external source.
- `License`: The corresponding license for the image.
# Dataset Creation
## Source Data
The images in CVQA can either be based on existing external images or from the contributor's own images. You can see this information from the 'Image Type' and 'Image Source' columns. Images based on external sources will retain their original licensing, whereas images from contributors will be licensed based on each contributor's decision.
All the questions are hand-crafted by annotators.
## Data Annotation
Data creation follows two general steps: question formulation and validation.
During question formulation, annotators are asked to write a question, with one correct answer and three distractors.
Questions must be culturally nuanced and relevant to the image. Annotators are asked to mask sensitive information and text that can easily give away the answers.
During data validation, another annotator is asked to check and validate whether the images and questions adhere to the guidelines.
You can learn more about our annotation protocol and guidelines in our paper.
## Annotators
Annotators needed to be fluent speakers of the language in question and be accustomed to the cultures of the locations for which they provided data. Our annotators are predominantly native speakers, with around 89% residing in the respective country for over 16 years.
## Licensing Information
Note that each question has its own license. All data here is free to use for research purposes, but not every entry is permissible for commercial use.
---"
GMLBsst/RecoTravRoute,{},"---
license: apache-2.0
task_categories:
- question-answering
language:
- ko
size_categories:
- n<1K
---"
traintogpb/aihub-kozh-translation-integrated-small-100k,"{""license"": ""mit"", ""task_categories"": [""translation""], ""language"": [""ko"", ""zh""]}","### AI Hub Ko-Zh Translation Dataset (Integrated)
This dataset merges 10 Korean-Chinese translation datasets from AI Hub. The merged data totals 5,934,596 pairs, from which a validation set of 10,000 pairs and a test set of 2,000 pairs are split off and shared identically across all dataset sizes (large-5.9m, base-1m, small-100k).
- large-5.9m (train): 100% of the merged data; 5,922,596 pairs in total
- base-1m (train): 1M pairs sampled from the merged data; 1,000,000 in total
- small-100k (train): 100K pairs sampled from the merged data; 100,000 in total
### Subsets
| Name | Total Size | Chinese Size (Utilized Only) | URL | Datasetkey (AIHub) |
|---|---|---|---|---|
| 한국어-중국어 번역 말뭉치(기술과학) | 1170000 | 1170000 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=128) | 128 |
| 한국어-중국어 번역 말뭉치(사회과학) | 1170000 | 1170000 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=129) | 129 |
| 일상생활 및 구어체 한-중, 한-일 번역 병렬 말뭉치 데이터 | 2700000 | 1349470 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=546) | 546 |
| 전문분야 영-한, 중-한 번역 말뭉치(식품) | 1350000 | 1326837 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71262) | 71262 |
| 방송 콘텐츠 한-중, 한-일 번역 병렬 말뭉치 데이터 | 1487088 | 367921 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71263) | 71263 |
| 발화유형(문어, 구어, 채팅) 별 기계번역 병렬 말뭉치 | 82002 | 26989 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71411) | 71411 |
| 한국어-다국어 번역 말뭉치(기술과학) | 270459 | 146317 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71493) | 71493 |
| 한국어-다국어 번역 말뭉치(기초과학) | 270317 | 84419 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71496) | 71496 |
| 한국어-다국어 번역 말뭉치(인문학) | 271721 | 80375 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71498) | 71498 |
| 방송콘텐츠 한국어-아시아어 번역 말뭉치 | 820387 | 112978 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71591) | 71591 |
| AI 허브 데이터 활용을 위한 기계 번역말뭉치 | 2653948 | 212268 | [URL](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71593) | 71593 |"
jihye-moon/klac_legal_aid_counseling,"{""task_categories"": [""conversational"", ""text-classification""], ""language"": [""ko""], ""tags"": [""le""], ""size_categories"": [""1K
A dataset built by crawling the legal aid counseling pages of the [Korea Legal Aid Corporation (법률구조공단)](https://www.klac.or.kr/).
floschne/xgqa_1k,"{""dataset_info"": {""features"": [{""name"": ""question"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}, {""name"": ""full_answer"", ""dtype"": ""string""}, {""name"": ""image_id"", ""dtype"": ""string""}, {""name"": ""image"", ""struct"": [{""name"": ""bytes"", ""dtype"": ""binary""}, {""name"": ""path"", ""dtype"": ""null""}]}], ""splits"": [{""name"": ""bn"", ""num_bytes"": 51624194, ""num_examples"": 1000}, {""name"": ""de"", ""num_bytes"": 51582232, ""num_examples"": 1000}, {""name"": ""en"", ""num_bytes"": 51579211, ""num_examples"": 1000}, {""name"": ""id"", ""num_bytes"": 51590256, ""num_examples"": 1000}, {""name"": ""ko"", ""num_bytes"": 51587731, ""num_examples"": 1000}, {""name"": ""pt"", ""num_bytes"": 51579268, ""num_examples"": 1000}, {""name"": ""ru"", ""num_bytes"": 51602287, ""num_examples"": 1000}, {""name"": ""zh"", ""num_bytes"": 51572077, ""num_examples"": 1000}], ""download_size"": 412467532, ""dataset_size"": 412717256}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""bn"", ""path"": ""data/bn-*""}, {""split"": ""de"", ""path"": ""data/de-*""}, {""split"": ""en"", ""path"": ""data/en-*""}, {""split"": ""id"", ""path"": ""data/id-*""}, {""split"": ""ko"", ""path"": ""data/ko-*""}, {""split"": ""pt"", ""path"": ""data/pt-*""}, {""split"": ""ru"", ""path"": ""data/ru-*""}, {""split"": ""zh"", ""path"": ""data/zh-*""}]}], ""license"": ""cc-by-4.0"", ""task_categories"": [""visual-question-answering""], ""language"": [""bn"", ""de"", ""en"", ""id"", ""ko"", ""pt"", ""ru"", ""zh""], ""pretty_name"": ""xGQA"", ""size_categories"": [""1K MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a ""document"" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We computed embeddings for `title+"" ""+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that supports semantic search in 100 languages. To learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ko-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f""Cohere/miracl-ko-corpus-22-12"", split=""train"")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f""Cohere/miracl-ko-corpus-22-12"", split=""train"", streaming=True)
for doc in docs:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```
## Search
Have a look at [miracl-ko-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, use the **dot product**: compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f""Cohere/miracl-ko-corpus-22-12"", split=""train"")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f""Cohere/miracl-ko-queries-22-12"", split=""dev"")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # shape: (1, emb_dim)
# Compute dot scores between the query embedding and all document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print(""Query:"", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f""{api_key}"") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: is at least one relevant document among the top-3 results? We find hit@3 easier to interpret, as it represents the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for the larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
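As a rough illustration (assumed helper names, not the authors' evaluation code), the per-query hit@k indicator and its average over queries can be sketched as:

```python
def hit_at_k(ranked_doc_ids, relevant_ids, k=3):
    # 1 if at least one relevant document appears in the top-k results, else 0
    return int(any(doc_id in relevant_ids for doc_id in ranked_doc_ids[:k]))

def mean_hit_at_k(runs, k=3):
    # runs: list of (ranked_doc_ids, set_of_relevant_ids) pairs, one per query
    return sum(hit_at_k(ranking, relevant, k) for ranking, relevant in runs) / len(runs)
```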
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |"
blueapple8259/c4-ko-cleaned,"{""license"": ""odc-by"", ""language"": [""ko""], ""task_categories"": [""text-generation""]}","A cleaned version of [c4](https://huggingface.co/datasets/allenai/c4) that I put together during school lunch break because I had nothing else to do. Since processing all of it would have overwhelmed my computer, only 1/10 of the full data was processed, so the quality is probably not great.
File size: about 3 GB
Number of examples: 1,847,023"
Junnos/luckyvicky-DPO,"{""license"": ""mit"", ""task_categories"": [""reinforcement-learning"", ""text2text-generation""], ""language"": [""ko""], ""pretty_name"": ""\uc6d0\uc601\uc801 \uc0ac\uace0"", ""tags"": [""lifestyle"", ""DPO""], ""size_categories"": [""n<1K""]}",### Wonyoung-style thinking (원영적 사고) dataset
ChuGyouk/PubMedQA-test-Ko,"{""license"": ""mit"", ""task_categories"": [""text-generation""], ""language"": [""ko""], ""tags"": [""medical""]}","This is the test split of PubMedQA, translated into Korean."
SachinPatel248/mqnli,{},"---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question
dtype: string
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
- name: translated_question_lang
dtype: string
- name: translated_sentence_lang
dtype: string
- name: translated_question
dtype: string
- name: translated_sentence
dtype: string
splits:
- name: train
num_bytes: 54987341
num_examples: 103059
download_size: 39711768
dataset_size: 54987341
task_categories:
- text-classification
language:
- en
- de
- es
- ar
- zh
- hi
- pt
- ru
- ja
- fr
- ur
- tr
- ko
- pl
- it
- sv
pretty_name: Multilingual qnli (from GLUE)
size_categories:
- 10K
KOGL Type 1
1. Source Indication Liability
- Users who use public works shall indicate source or copyright as follows:
- EX : “000(public institution's name)'s public work is used according to KOGL”
- The link shall be provided when online hyperlink for the source website is available.
- Marking shall not be used to misguide the third party that the user is sponsored by public institution or user has a special relationship with public institutions.
2. Use Prohibited Information
- Personal information that is protected by Personal Information Protection Act, Promotion for Information Network Use and Information Protection Act, etc.
- Credit information protected by the Use and Protection of Credit Information Act, etc.
- Military secrets protected by Military Secret Protection Act, etc.
- Information that is the object of other rights such as trademark right, design right, design right or patent right, etc., or that is owned by third party's copyright.
- Other information that is use prohibited information according to other laws.
3. Public Institution's Liability Exemption
- Public institution does not guarantee the accuracy or continued service of public works.
- Public institution and its employees do not have any liability for any kind of damage or disadvantage that may arise by using public works.
4. Effect of Use Term Violation
- The use permission is automatically terminated when user violates any of the KOGL's Use Terms, and the user shall immediately stop using public works.
## Data Structure
### Data Instance
```python
>>> from datasets import load_dataset
>>>
>>> ds = load_dataset(""Bingsu/national_library_of_korea_book_info"", split=""train"")
>>> ds
Dataset({
features: ['isbn13', 'vol', 'title', 'author', 'publisher', 'price', 'img_url', 'description'],
num_rows: 7919278
})
```
```python
>>> ds.features
{'isbn13': Value(dtype='string', id=None),
'vol': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None),
'author': Value(dtype='string', id=None),
'publisher': Value(dtype='string', id=None),
'price': Value(dtype='string', id=None),
'img_url': Value(dtype='string', id=None),
'description': Value(dtype='string', id=None)}
```
or
```python
>>> import pandas as pd
>>>
>>> url = ""https://huggingface.co/datasets/Bingsu/national_library_of_korea_book_info/resolve/main/train.csv.gz""
>>> df = pd.read_csv(url, low_memory=False)
```
```python
>>> df.info()
RangeIndex: 7919278 entries, 0 to 7919277
Data columns (total 8 columns):
# Column Dtype
--- ------ -----
0 isbn13 object
1 vol object
2 title object
3 author object
4 publisher object
5 price object
6 img_url object
7 description object
dtypes: object(8)
memory usage: 483.4+ MB
```
### Null data
```python
>>> df.isnull().sum()
isbn13 3277
vol 5933882
title 19662
author 122998
publisher 1007553
price 3096535
img_url 3182882
description 4496194
dtype: int64
```
### Note
```python
>>> df[df[""description""].str.contains(""[해외주문원서]"", regex=False) == True].head()[""description""]
10773 [해외주문원서] 고객님의 요청으로 수입 주문하는 도서이므로, 주문취소 및 반품이 불...
95542 [해외주문원서] 고객님의 요청으로 수입 주문하는 도서이므로, 주문취소 및 반품이 불...
95543 [해외주문원서] 고객님의 요청으로 수입 주문하는 도서이므로, 주문취소 및 반품이 불...
96606 [해외주문원서] 고객님의 요청으로 수입 주문하는 도서이므로, 주문취소 및 반품이 불...
96678 [해외주문원서] 고객님의 요청으로 수입 주문하는 도서이므로, 주문취소 및 반품이 불...
Name: description, dtype: object
```"
KETI-AIR/kor_race,"{""pretty_name"": ""race"", ""language"": [""ko""], ""size_categories"": [""1K`, 여러 줄의 `\n` 등을 정리했습니다.
The Wikisource link and the location of the original material can be found in each record's `license` field."
haebo1/test,"{""pretty_name"": ""KoBEST"", ""annotations_creators"": [""expert-generated""], ""language_creators"": [""expert-generated""], ""language"": [""ko""], ""license"": [""cc-by-sa-4.0""], ""multilinguality"": [""monolingual""], ""size_categories"": [""10K
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
### Direct Use
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Data Collection and Processing
[More Information Needed]
#### Who are the source data producers?
[More Information Needed]
### Annotations [optional]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
#### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]"
jp1924/NaturalandArtificialOccurrenceNonverbalSoundDatasets,"{""dataset_info"": {""features"": [{""name"": ""RawDataInfo"", ""struct"": [{""name"": ""RawDataId"", ""dtype"": ""string""}, {""name"": ""Copyrighter"", ""dtype"": ""string""}, {""name"": ""SampleRate(Hz)"", ""dtype"": ""int32""}, {""name"": ""Channel"", ""dtype"": ""int32""}, {""name"": ""BitDepth(bit)"", ""dtype"": ""int32""}, {""name"": ""RecordingDevice"", ""dtype"": ""string""}, {""name"": ""BitRate(kbps)"", ""dtype"": ""int32""}, {""name"": ""CollectionType"", ""dtype"": ""string""}, {""name"": ""RecDateTime"", ""dtype"": ""string""}, {""name"": ""RecDataLength(sec)"", ""dtype"": ""int32""}, {""name"": ""Season"", ""dtype"": ""string""}, {""name"": ""Weather"", ""dtype"": ""string""}, {""name"": ""TimeZone"", ""dtype"": ""string""}, {""name"": ""PlaceType"", ""dtype"": ""string""}, {""name"": ""DistanceType"", ""dtype"": ""string""}, {""name"": ""FileExtension"", ""dtype"": ""string""}]}, {""name"": ""SourceDataInfo"", ""struct"": [{""name"": ""SourceDataId"", ""dtype"": ""string""}, {""name"": ""FileExtension"", ""dtype"": ""string""}, {""name"": ""NoOfClip"", ""dtype"": ""int32""}, {""name"": ""ClipDataLength(sec)"", ""dtype"": ""int32""}]}, {""name"": ""LabelDataInfo"", ""struct"": [{""name"": ""Path"", ""dtype"": ""string""}, {""name"": ""LabelID"", ""dtype"": ""string""}, {""name"": ""NumAnnotator"", ""dtype"": ""int32""}, {""name"": ""Division1"", ""dtype"": ""string""}, {""name"": ""Division2"", ""dtype"": ""string""}, {""name"": ""Class"", ""dtype"": ""string""}, {""name"": ""Desc"", ""dtype"": ""string""}, {""name"": ""Type"", ""dtype"": ""string""}, {""name"": ""NumSegmentation"", ""dtype"": ""int32""}, {""name"": ""Segmentations"", ""list"": {""sequence"": ""float32""}}]}, {""name"": ""audio"", ""dtype"": {""audio"": {""sampling_rate"": 44100}}}], ""splits"": [{""name"": ""train"", ""num_bytes"": 20358466791, ""num_examples"": 35848}, {""name"": ""validation"", ""num_bytes"": 
2273622062.875, ""num_examples"": 4481}], ""download_size"": 20620542669, ""dataset_size"": 22632088853.875}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""validation"", ""path"": ""data/validation-*""}]}], ""task_categories"": [""automatic-speech-recognition""], ""language"": [""ko""], ""tags"": [""STT"", ""Audio"", ""Noise""], ""size_categories"": [""10B= 0일 것임) (정수)
- ```score_ratio```: the ratio of the preferred comment's score to the less preferred comment's score (>= 1) (float)
## Dataset Design
### Domain Selection
The data is sourced from Reddit, a public forum organized into topic-specific communities called *subreddits*.
For example, the `askculinary` subreddit is where users ask cooking-related questions that are answered by other users.
SHP contains train, validation, and test splits for comments scraped from 18 different subreddits. We chose subreddits based on:
1. whether they were well-known (subscriber count >= 100K)
2. whether posts were expected to pose a question or give an instruction
3. whether responses were judged by how *helpful* they were
4. whether comments had to be rooted in some objectivity instead of being entirely about personal experiences (e.g., `askscience` vs. `AskAmericans`)
The train/validation/test splits were created by splitting each subreddit's post IDs in a 90%/5%/5% ratio, so that no post appears in more than one split.
Since different posts have different numbers of comments, the number of preferences in each split is not exactly 90%/5%/5%:
| subreddit | train | validation | test | total |
| ------------------ | -------: | ---------: | ---: | ----: |
| askacademia | 31450 | 2095 | 1708 | 35253 |
| askanthropology | 3910 | 203 | 268 | 4381 |
| askbaking | 44007 | 2096 | 1544 | 47647 |
| askcarguys | 3227 | 159 | 117 | 3503 |
| askculinary | 45710 | 2094 | 2563 | 50367 |
| askdocs | 6449 | 315 | 455 | 7219 |
| askengineers | 57096 | 3154 | 2638 | 62888 |
| askhistorians | 3264 | 113 | 164 | 3541 |
| askhr | 8295 | 641 | 395 | 9331 |
| askphilosophy | 10307 | 608 | 677 | 11592 |
| askphysics | 7364 | 409 | 587 | 8360 |
| askscience | 13316 | 899 | 977 | 15192 |
| asksciencefiction | 29382 | 1576 | 1987 | 32945 |
| asksocialscience | 2706 | 147 | 188 | 3041 |
| askvet | 3300 | 170 | 224 | 3694 |
| changemyview | 38173 | 1637 | 1836 | 41646 |
| explainlikeimfive | 19592 | 1014 | 1070 | 21676 |
| legaladvice | 21170 | 1106 | 1011 | 23287 |
| ALL | 348718 | 18436 | 18409 | 385563 |
### Data Selection
The score of a post/comment is 1 plus the number of upvotes (approvals) it received from users, minus the number of downvotes (disapprovals).
The value of a score is relative; in subreddits (posts) with more traffic, there will be more high-scoring posts (comments).
Within a post, comments posted earlier will tend to have higher scores simply from more exposure, so using timestamp information is essential when inferring preferences.
Given a post P and two comments (A, B), we only included the preference A > B in the dataset if:
1. A was written *no later* than B and A has a higher score than B.
2. The post is a self-post (i.e., a body of text and not a link to another page) made before 2023, was not edited, and is not NSFW (over 18).
3. Neither comment was made by a deleted user, a moderator, or the post creator, and the post was not made by a deleted user or a moderator.
4. The post has a score >= 10 and each comment has a score >= 2 (i.e., was upvoted at least once).
A post with `n` comments can contribute up to (`n` choose `2`) preferences to the data.
Since the number of comments per post is Pareto-distributed, we limited scraping to 50 comments per post to prevent a relatively small number of posts from dominating the data.
This means each post can have at most (`50` choose `2`) preferences in the dataset, though the actual number is far smaller, since all of the above criteria must be met.
Reddit makes it very difficult to retrieve more than the top 1000 posts per subreddit.
We started with the top 1000 posts and used Reddit's search function to retrieve the 25 posts most similar to each, yielding up to 7500 unique post IDs per subreddit.
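A minimal sketch of the comment-level selection criteria (1 and 4) described above; the field names are hypothetical, and the post-level checks (2 and 3) are assumed to have been applied upstream:

```python
from itertools import combinations

def admissible_preferences(comments):
    # comments: dicts with hypothetical 'id', 'score', and sortable 'created' fields.
    # Returns (preferred, dispreferred) comment-id pairs where the preferred
    # comment was posted no later, has the higher score, and both scores are >= 2.
    prefs = []
    ordered = sorted(comments, key=lambda c: c['created'])
    for a, b in combinations(ordered, 2):  # a is posted no later than b
        if a['score'] >= 2 and b['score'] >= 2 and a['score'] > b['score']:
            prefs.append((a['id'], b['id']))
    return prefs
```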
### Preprocessing
We tried to keep preprocessing to a minimum. Subreddit-specific abbreviations were expanded (e.g., ""CMV"" to ""Change my view that"").
In hyperlinks, only the reference text was kept and the URL was removed (if the URL was written out, it was kept).
## Building a Preference Model
### Finetuning
If you want to finetune a model to predict human preferences (e.g., for NLG evaluation or an RLHF reward model), here are some useful tips:
1. **Preprocess the data.** The total input length should fit within the model's token limit (usually 512 tokens).
Although models like FLAN-T5 use positional embeddings, we found that the loss would not converge when finetuning on inputs longer than 512 tokens.
To avoid this, truncate the post text (in the `history` field) as much as possible so that the whole input stays under 512 tokens (but do not truncate the comments).
If the input is still over 512 tokens, skip the example.
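The truncation rule above can be sketched as follows; the token lists stand in for a real tokenizer's output, and the 512 limit is the one discussed above:

```python
def truncate_input(history_tokens, comment_a_tokens, comment_b_tokens, limit=512):
    # Shorten only the post text so the whole input fits; never cut the comments.
    budget = limit - len(comment_a_tokens) - len(comment_b_tokens)
    if budget < 0:
        return None  # even the comments alone exceed the limit: skip this example
    return history_tokens[:budget] + comment_a_tokens + comment_b_tokens
```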
2. **Use a sufficiently large model.**
Finetuning a single FLAN-T5-xl model across all the training data should give test accuracy between 72% and 73% (across all domains, on examples whose entire input fits within the token limit), ranging from 65% to 80% on individual subreddits.
3. **Do in-domain prediction.** Out-of-domain performance will be poor if the subreddits are unrelated (e.g., if you finetune on `askculinary` preferences and test on `askcarguys` preferences).
4. **Train for fewer epochs.** The InstructGPT paper suggests training the reward model for only one epoch.
Since the same comment appears in multiple preferences, it is easy to overfit the data.
5. **Training on less data may help.**
Preferences with a large `score_ratio` (e.g., comment A has twice the score of comment B) provide a stronger signal for finetuning the model, so you may want to consider only preferences above a certain `score_ratio`.
Since the number of preferences per post is Pareto-distributed, to prevent the model from overfitting to particular posts it is a good idea to cap the number of preferences taken from any one post.
### Evaluating
Since it is easier to predict strong preferences than weak ones, rather than reporting a single accuracy value we recommend reporting a performance curve as a function of the `score_ratio`.
For example, here is the accuracy curve for a FLAN-T5-xl model trained on the askculinary data using the suggestions above.
The orange line comes from finetuning only on preferences with a score ratio of at least 2 and using no more than 5 preferences from each post to prevent overfitting:
![Graph](curve.png)
We see that finetuning on less but higher-quality data leads to higher accuracy on test data with a score ratio below 3.5, with no real downsides!
Examples whose input did not fit within the token limit were excluded from the experiments, since the model could not be expected to handle them.
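One way to produce such a curve (an illustrative sketch with assumed bin edges, not the authors' code) is to bucket prediction correctness by score ratio:

```python
from collections import defaultdict

def accuracy_by_score_ratio(examples, edges=(1.0, 1.5, 2.0, 3.0, 5.0, float('inf'))):
    # examples: (score_ratio, was_prediction_correct) pairs; report per-bin accuracy.
    hits, totals = defaultdict(int), defaultdict(int)
    for ratio, correct in examples:
        for lo, hi in zip(edges, edges[1:]):
            if lo <= ratio < hi:
                hits[(lo, hi)] += int(correct)
                totals[(lo, hi)] += 1
                break
    return {b: hits[b] / totals[b] for b in totals}
```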
### SteamSHP - Open-Source Preference Model
We finetuned two FLAN-T5 models on both the SHP dataset and the helpfulness data of Anthropic's HH-RLHF. They are:
- [SteamSHP-XL](https://huggingface.co/stanfordnlp/SteamSHP-flan-t5-xl), a 3B parameter model that achieves 72.8% on the test data.
- [SteamSHP-Large](https://huggingface.co/stanfordnlp/SteamSHP-flan-t5-large), a 780M parameter model that achieves 72.0% on the test data.
We encourage you to use SteamSHP for NLG evaluation, for building reward models for RLHF, or for any other purpose you deem fit!
## Biases and Limitations
### Biases
Although we filtered out posts with NSFW (over 18) content and chose well-moderated subreddits with policies against harassment and bigotry, some of the data may still contain discriminatory or harmful language.
The data does not reflect the views of the dataset creators.
Reddit users on these subreddits are also not representative of the broader population.
Although subreddit-specific demographic information is not available, Reddit users overall are disproportionately male and disproportionately from developed, Western, English-speaking countries ([Pew Research](https://www.pewresearch.org/internet/2013/07/03/6-of-online-adults-are-reddit-users/)).
Please keep this in mind before using any model trained on this data.
### Limitations
The preference labels in SHP are intended to reflect how *helpful* one response is relative to another, given an instruction/question.
SHP is not intended for use in harm minimization, as it was not designed to include the toxic content needed to learn a good toxicity detector.
If you are looking for data where the preference label denotes less harm, we recommend the harmfulness split of [Anthropic's HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf).
Another limitation is that the preferred response in SHP is not necessarily the more factual one.
While some comments provide citations to justify their response, most do not.
There are exceptions, such as the `askhistorians` subreddit, which is heavily moderated and where answers are expected to provide citations.
The collective preference labels in SHP are not necessarily what we would get if we asked users to vote independently on each comment before taking an unweighted sum.
This is because comment scores on Reddit are public and are known to influence user preferences; a high score makes a comment more likely to receive further positive votes [(Muchnik et al., 2013)](https://pubmed.ncbi.nlm.nih.gov/23929980/).
It is unclear whether this ""herding effect"" shifts users' preferences temporarily or permanently.
Therefore, although SHP reflects collective human preferences, models trained on SHP may not generalize to settings where individual preferences are aggregated differently (e.g., where users vote independently without seeing the current comment score, where users vote after conferring, etc.).
Thanks to Greg Stoddard for pointing this out.
## License
Last updated: 03/01/2023
This dataset was made by scraping Reddit in accordance with the [Reddit API Terms of Use](https://docs.google.com/a/reddit.com/forms/d/e/1FAIpQLSezNdDNK1-P8mspSbmtC2r86Ee9ZRbC66u929cG2GX0T9UMyw/viewform), without any direct communication or written consent from Reddit.
Per the Terms of Use, ""User Content"" is owned by the users themselves, not by Reddit, and Reddit grants ""a non-exclusive, non-transferable, non-sublicensable, and revocable license to copy and display the User Content.""
Datasets made by scraping Reddit are widely used in the research community. For example, Facebook AI Research used data scraped from Reddit to create the [ELI5](https://huggingface.co/datasets/eli5#source-data) dataset in 2019, which was made available without a license.
Anthropic AI has also [scraped Reddit for preferences](https://arxiv.org/pdf/2112.00861.pdf) using a different methodology, but this data was not made public.
The [PushShift Reddit dataset](https://arxiv.org/abs/2001.08435), which makes full dumps of Reddit available on a regular schedule, is also available without a license (to our knowledge).
We take no responsibility for, and do not expressly or implicitly endorse, any downstream use of this dataset.
We reserve the right to modify the SHP dataset and this license at any point in the future.
## Contact
If you have any questions about the data, please contact kawin@stanford.edu.
This dataset was created by Kawin Ethayarajh, Heidi (Chenyu) Zhang, Yizhong Wang, and Dan Jurafsky.
## Citation
SHP was created using the techniques proposed in the following paper. Please cite this work if you use SHP or the SteamSHP models:
```
@InProceedings{pmlr-v162-ethayarajh22a,
title = {Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information},
author = {Ethayarajh, Kawin and Choi, Yejin and Swayamdipta, Swabha},
booktitle = {Proceedings of the 39th International Conference on Machine Learning},
pages = {5988--6008},
year = {2022},
editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
volume = {162},
series = {Proceedings of Machine Learning Research},
month = {17--23 Jul},
publisher = {PMLR},
}
```
## References
Ethayarajh, K., Choi, Y. & Swayamdipta, S. (2022). Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research. 162:5988-6008 Available from https://proceedings.mlr.press/v162/ethayarajh22a.html."
shchoice/finance-legal-mrc,"{""language"": ""ko"", ""license"": ""cc-by-sa-4.0"", ""tags"": [""mrc"", ""korean"", ""finance"", ""legal""], ""dataset_info"": [{""config_name"": ""multiple_choice"", ""features"": [{""name"": ""doc_title"", ""dtype"": ""string""}, {""name"": ""context"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""options"", ""sequence"": ""string""}, {""name"": ""answer_text"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 52657661, ""num_examples"": 32092}, {""name"": ""test"", ""num_bytes"": 8296685, ""num_examples"": 4634}], ""download_size"": 14848868, ""dataset_size"": 60954346}, {""config_name"": ""span_extraction"", ""features"": [{""name"": ""doc_title"", ""dtype"": ""string""}, {""name"": ""context"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""answer_text"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 154553286, ""num_examples"": 93828}, {""name"": ""test"", ""num_bytes"": 23273377, ""num_examples"": 12692}], ""download_size"": 43600501, ""dataset_size"": 177826663}, {""config_name"": ""span_extraction_how"", ""features"": [{""name"": ""doc_title"", ""dtype"": ""string""}, {""name"": ""context"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""answer_text"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 111843943, ""num_examples"": 61746}, {""name"": ""test"", ""num_bytes"": 21025535, ""num_examples"": 9124}], ""download_size"": 31577500, ""dataset_size"": 132869478}, {""config_name"": ""tableqa"", ""features"": [{""name"": ""doc_title"", ""dtype"": ""string""}, {""name"": ""context"", ""dtype"": ""string""}, {""name"": ""table_title"", ""dtype"": ""string""}, {""name"": ""table_html"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""answer_text"", ""dtype"": ""string""}], ""splits"": [{""name"": 
""train"", ""num_bytes"": 228901199, ""num_examples"": 96000}, {""name"": ""test"", ""num_bytes"": 24161924, ""num_examples"": 12000}], ""download_size"": 15321234, ""dataset_size"": 253063123}, {""config_name"": ""text_entailment"", ""features"": [{""name"": ""doc_title"", ""dtype"": ""string""}, {""name"": ""context"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""answer_text"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 52307558, ""num_examples"": 32750}, {""name"": ""test"", ""num_bytes"": 5778697, ""num_examples"": 3740}], ""download_size"": 13330652, ""dataset_size"": 58086255}], ""configs"": [{""config_name"": ""multiple_choice"", ""data_files"": [{""split"": ""train"", ""path"": ""multiple_choice/train-*""}, {""split"": ""test"", ""path"": ""multiple_choice/test-*""}]}, {""config_name"": ""span_extraction"", ""data_files"": [{""split"": ""train"", ""path"": ""span_extraction/train-*""}, {""split"": ""test"", ""path"": ""span_extraction/test-*""}]}, {""config_name"": ""span_extraction_how"", ""data_files"": [{""split"": ""train"", ""path"": ""span_extraction_how/train-*""}, {""split"": ""test"", ""path"": ""span_extraction_how/test-*""}]}, {""config_name"": ""tableqa"", ""data_files"": [{""split"": ""train"", ""path"": ""tableqa/train-*""}, {""split"": ""test"", ""path"": ""tableqa/test-*""}]}, {""config_name"": ""text_entailment"", ""data_files"": [{""split"": ""train"", ""path"": ""text_entailment/train-*""}, {""split"": ""test"", ""path"": ""text_entailment/test-*""}]}]}","# 금융, 법률 문서 기계독해 데이터셋
A dataset of passage-question-answer triples built from professional documents in the finance and legal domains for training machine reading comprehension (MRC) models.
## Dataset Composition
### Subsets
- **span_extraction**: answer-span extraction (the answer lies in a specific span of the passage)
- **span_extraction_how**: procedure/method extraction (answers to questions starting with ""how"")
- **multiple_choice**: multiple choice (multiple-choice-style data)
- **tableqa**: table answer extraction (table-based question answering)
- **text_entailment**: yes/no short answers (boolean-choice data)
### Splits
- **train**: training dataset
- **test**: evaluation dataset
## Usage
```python
from datasets import load_dataset

# Load a specific split of a specific configuration
train_data = load_dataset(""shchoice/finance-legal-mrc"", ""span_extraction"", split=""train"")
valid_data = load_dataset(""shchoice/finance-legal-mrc"", ""multiple_choice"", split=""test"")

# Load all splits of a specific configuration
full_dataset = load_dataset(""shchoice/finance-legal-mrc"", ""span_extraction"")
```
## Data Fields
1. Core fields required for MRC
- doc_title: document title (optional)
- context: main passage text
- question: question text
- answer_text: answer to the question
2. Answer hints
- clue_text: text providing a clue to the answer
3. Fields for table QA
- table_title: title of the table
- table_html: HTML of the table
4. Fields for table QA + answer hints
- cell_text: text of the cell containing the answer
- cell_coordinates: coordinates of the cell containing the answer
5. Fields for multiple-choice QA
- options: the answer options (1 correct answer, 3 distractors)
6. Document classification
- class: document category
- code: document code
## License
This dataset is provided under the CC-BY-SA-4.0 license."
PrompTartLAB/PTT_en_ko,"{""task_categories"": [""translation""], ""language"": [""en"", ""ko""], ""size_categories"": [""1K
- train/valid/test dataset of session 4
- translation (English -> Korean)
- GPT-3.5-turbo was mostly used
- GPT-4: the first 66 examples of session_4_train (after these, switched to GPT-3.5)"
nayohan/korean-hate-speech,"{""dataset_info"": {""features"": [{""name"": ""comments"", ""dtype"": ""string""}, {""name"": ""contain_gender_bias"", ""dtype"": ""bool""}, {""name"": ""bias"", ""dtype"": ""string""}, {""name"": ""hate"", ""dtype"": ""string""}, {""name"": ""news_title"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1705416, ""num_examples"": 7896}, {""name"": ""valid"", ""num_bytes"": 101984, ""num_examples"": 471}, {""name"": ""test"", ""num_bytes"": 200963, ""num_examples"": 974}], ""download_size"": 1172909, ""dataset_size"": 2008363}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""valid"", ""path"": ""data/valid-*""}, {""split"": ""test"", ""path"": ""data/test-*""}]}], ""license"": ""cc-by-sa-4.0"", ""language"": [""ko""], ""tags"": [""safety""]}","reference: [https://github.com/kocohub/korean-hate-speech](https://github.com/kocohub/korean-hate-speech)
```
@inproceedings{moon-etal-2020-beep,
title = ""{BEEP}! {K}orean Corpus of Online News Comments for Toxic Speech Detection"",
author = ""Moon, Jihyung and
Cho, Won Ik and
Lee, Junbum"",
booktitle = ""Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media"",
month = jul,
year = ""2020"",
address = ""Online"",
publisher = ""Association for Computational Linguistics"",
url = ""https://www.aclweb.org/anthology/2020.socialnlp-1.4"",
pages = ""25--31"",
abstract = ""Toxic comments in online platforms are an unavoidable social issue under the cloak of anonymity. Hate speech detection has been actively done for languages such as English, German, or Italian, where manually labeled corpus has been released. In this work, we first present 9.4K manually labeled entertainment news comments for identifying Korean toxic speech, collected from a widely used online news platform in Korea. The comments are annotated regarding social bias and hate speech since both aspects are correlated. The inter-annotator agreement Krippendorff{'}s alpha score is 0.492 and 0.496, respectively. We provide benchmarks using CharCNN, BiLSTM, and BERT, where BERT achieves the highest score on all tasks. The models generally display better performance on bias identification, since the hate speech detection is a more subjective issue. Additionally, when BERT is trained with bias label for hate speech detection, the prediction score increases, implying that bias and hate are intertwined. We make our dataset publicly available and open competitions with the corpus and benchmarks."",
}
```"
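Given the features listed in the metadata (`comments`, `contain_gender_bias`, `bias`, `hate`, `news_title`), a simple toxicity filter can be sketched. The label values (`"hate"`, `"offensive"`, `"none"`) follow the BEEP! paper's annotation scheme and should be treated as assumptions here:

```python
# Minimal sketch: split examples by the `hate` label
# (values assumed to be "hate", "offensive", "none").
def is_toxic(example: dict) -> bool:
    return example["hate"] in ("hate", "offensive")

# Usage against the dataset itself would look like (not run here):
#   from datasets import load_dataset
#   ds = load_dataset("nayohan/korean-hate-speech", split="train")
#   toxic = ds.filter(is_toxic)

sample = {"comments": "...", "contain_gender_bias": False,
          "bias": "none", "hate": "offensive", "news_title": "..."}
print(is_toxic(sample))  # -> True
```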
Suchae/korean-judgment-easyread-transform,{},"---
dataset_info:
features:
- name: judgment_chunk
dtype: string
- name: legal_term
dtype: string
- name: transform
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 1577917385
num_examples: 501333
download_size: 646399552
dataset_size: 1577917385
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- summarization
language:
- ko
tags:
- legal
size_categories:
- 100K<n<1M
---
> [!NOTE]
> Dataset origin: https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0022-6133-9#
## Description
A set of corpora for 120 languages, automatically collected from Wikipedia and the web.
Collected using the W2C toolset: http://hdl.handle.net/11858/00-097C-0000-0022-60D6-1
## Citation
```
@misc{11858/00-097C-0000-0022-6133-9,
title = {{W2C} – Web to Corpus – Corpora},
author = {Majli{\v s}, Martin},
url = {http://hdl.handle.net/11858/00-097C-0000-0022-6133-9},
note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
copyright = {Attribution-{ShareAlike} 3.0 Unported ({CC} {BY}-{SA} 3.0)},
year = {2011} }
```"
mintaeng/min_test,"{""dataset_info"": {""features"": [{""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""input"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1834197, ""num_examples"": 2200}], ""download_size"": 1015650, ""dataset_size"": 1834197}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""task_categories"": [""text-generation"", ""question-answering""], ""language"": [""ko""]}","## Test data for fine-tuning the llama3_8b_ model.
Source data: databricks-dolly-15k-ko.jsonl
After converting the response format, an in-house question-answering set was added."
yjg30737/onepiece-characters,"{""license"": ""mit"", ""task_categories"": [""table-question-answering""], ""language"": [""en"", ""ja"", ""ko""], ""size_categories"": [""100K<n<1M""]}","
## Dataset Details
### Dataset Description
This dataset is forked from [rombodawg/Everything_Instruct_Multilingual](https://huggingface.co/datasets/rombodawg/Everything_Instruct_Multilingual), with the Chinese answers converted from Simplified Chinese (zh-cn) to Traditional Chinese (zh-tw) via [opencc-python](https://github.com/yichen0831/opencc-python). In addition, we upgraded the dataset with [DPO](https://arxiv.org/abs/2305.18290) fields: the `rejected` responses were generated by [lianghsun/Llama-3.2-Taiwan-3B-Instruct](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B-Instruct) `v2024.11.27`. This dataset will be used in the DPO stage of lianghsun/Llama-3.2-Taiwan-3B-Instruct.
- **Curated by:** [Huang Liang Hsun](https://www.linkedin.com/in/lianghsunhuang/?locale=en_US)
- **Language(s) (NLP):** multilingual
- **License:** cc-by-nc-sa-4.0
### Dataset Sources
- **Repository:** [lianghsun/Everything-Instruct-Multilingual](https://huggingface.co/datasets/lianghsun/Everything-Instruct-Multilingual/)
## Uses
### Direct Use
This dataset can be used in the SFT and DPO training stages.
### Out-of-Scope Use
This dataset is not suitable for use as an evaluation set or for any fact-checking purposes.
## Dataset Structure
```yaml
{
""instruction"": """",
""input"": """",
""rejected"": """"
}
```
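The structure above lists `instruction`, `input`, and `rejected` fields. A record could be mapped into a TRL-style preference triple roughly as follows; note that the name of the chosen-response field (`output` here) is an assumption, since it is not shown in the excerpt:

```python
# Sketch only: `output` is an assumed field name for the chosen response.
def to_dpo_pair(example: dict) -> dict:
    prompt = example["instruction"]
    if example.get("input"):
        prompt += "\n" + example["input"]
    return {
        "prompt": prompt,
        "chosen": example["output"],      # assumed field name
        "rejected": example["rejected"],
    }

pair = to_dpo_pair({"instruction": "Q", "input": "",
                    "output": "good answer", "rejected": "bad answer"})
print(pair["prompt"])  # -> Q
```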
## Dataset Creation
### Curation Rationale
Multilingual instruction datasets are rare, and multilingual preference datasets even more so. This dataset therefore takes [rombodawg/Everything_Instruct_Multilingual](https://huggingface.co/datasets/rombodawg/Everything_Instruct_Multilingual) as its foundation dataset and adds rejected responses, making it more complete, so users can train models for multilingual preference learning.
### Source Data
#### Data Collection and Processing
1. **Simplified to Traditional Chinese conversion:** Simplified Chinese in this dataset was converted to Traditional Chinese.
2. **Rejected-response generation:** Rejected responses were generated with a model to build the preference dataset.
#### Who are the source data producers?
- **Foundation dataset:** [rombodawg/Everything_Instruct_Multilingual](https://huggingface.co/datasets/rombodawg/Everything_Instruct_Multilingual)
- **Rejected dataset:** [lianghsun/Llama-3.2-Taiwan-3B-Instruct](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B-Instruct)
### Annotations
#### Annotation process
None.
#### Who are the annotators?
None.
#### Personal and Sensitive Information
We did not run PII detection on the source data, but we have received a Hugging Face system notice that the original dataset contains secret keys; users should run their own checks.
## Bias, Risks, and Limitations
Anyone using this dataset should note that the original data may contain statements from various stances and contexts; please use it with care.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
```bibtex
@misc{huang2024everything,
author = {Huang, Liang Hsun},
title = {Everything-Instruct-Multilingual-DPO},
year = {2024},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/datasets/lianghsun/Everything-Instruct-Multilingual-DPO}},
note = {A multilingual instruction-following dataset, usable for SFT or DPO training}
}
```
## More Information
I can only review the Traditional Chinese portion. In the original dataset, most of the Chinese content consists of requests to translate into Chinese (the output), but on inspection the quality of that Chinese text is not very high; the model that originally generated the outputs may even have had poor Chinese ability. I recommend dropping the rows with Chinese outputs before training.
*Note: by this point, I have also started to wonder whether the original dataset's multilingual instruction responses are low quality as well.*
Still, if you lack a multilingual instruction dataset, this is a good one to start with.
## Dataset Card Authors
[Huang Liang Hsun](https://www.linkedin.com/in/lianghsunhuang/?locale=en_US)
## Dataset Card Contact
[Huang Liang Hsun](https://www.linkedin.com/in/lianghsunhuang/?locale=en_US)"
zzunyang/legal_preference,{},"---
task_categories:
- conversational
language:
- ko
tags:
- legal
size_categories:
- 10K<n<100K
```python
>>> from datasets import load_dataset
>>> ds = load_dataset(""beomi/KoAlpaca-v1.1a"", split=""train"")
>>> ds
Dataset({
features: ['instruction', 'input', 'output'],
num_rows: 21272
})
```
```python
>>> ds[0]
{'instruction': '양파는 어떤 식물 부위인가요? 그리고 고구마는 뿌리인가요?',
'output': '양파는 잎이 아닌 식물의 줄기 부분입니다. 고구마는 식물의 뿌리 부분입니다. \n\n식물의 부위의 구분에 대해 궁금해하는 분이라면 분명 이 질문에 대한 답을 찾고 있을 것입니다. 양파는 잎이 아닌 줄기 부분입니다. 고구마는 다른 질문과 답변에서 언급된 것과 같이 뿌리 부분입니다. 따라서, 양파는 식물의 줄기 부분이 되고, 고구마는 식물의 뿌리 부분입니다.\n\n 덧붙이는 답변: 고구마 줄기도 볶아먹을 수 있나요? \n\n고구마 줄기도 식용으로 볶아먹을 수 있습니다. 하지만 줄기 뿐만 아니라, 잎, 씨, 뿌리까지 모든 부위가 식용으로 활용되기도 합니다. 다만, 한국에서는 일반적으로 뿌리 부분인 고구마를 주로 먹습니다.',
'url': 'https://kin.naver.com/qna/detail.naver?d1id=11&dirId=1116&docId=55320268'}"
vitus9988/ko_gpt4omini_note_15.4k,"{""language"": [""ko""], ""tags"": [""ChatGPT""]}","# Korean Memo Dataset
A dataset of Korean memos generated with GPT-4o-mini.
Memo-like data was generated from a main topic (main_topic) and a subtopic (sub_topic)."
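The card states that memos were generated from a main topic and a subtopic. A hypothetical sketch of how such a generation prompt might be composed follows; the actual prompts used with GPT-4o-mini are not published in this card:

```python
# Hypothetical prompt builder; the real prompt wording is an assumption.
def build_memo_prompt(main_topic: str, sub_topic: str) -> str:
    return ("다음 주제로 짧은 한국어 메모를 작성하세요.\n"
            f"대주제: {main_topic}\n소주제: {sub_topic}")

print(build_memo_prompt("여행", "짐 싸기 체크리스트"))
```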
datumo/KorNAT,"{""configs"": [{""config_name"": ""Social Values (Kor)"", ""data_files"": [{""split"": ""test"", ""path"": ""KorNAT/social-values-kor-test.csv""}]}, {""config_name"": ""Social Values (Eng)"", ""data_files"": [{""split"": ""test"", ""path"": ""KorNAT/social-values-eng-test.csv""}]}, {""config_name"": ""Common Knowledge (Kor)"", ""data_files"": [{""split"": ""test"", ""path"": ""KorNAT/common-knowledge-kor-test.csv""}]}, {""config_name"": ""Common Knowledge (Eng)"", ""data_files"": [{""split"": ""test"", ""path"": ""KorNAT/common-knowledge-eng-test.csv""}]}], ""license"": ""cc-by-nc-2.0"", ""task_categories"": [""multiple-choice""], ""language"": [""ko"", ""en""], ""tags"": [""national-alignment""], ""size_categories"": [""10K<n<100K""]}","
```python
>>> from datasets import load_dataset
>>> ds = load_dataset(""SangChan/KCC_Profit_DataSet_v2"", split=""train"")
>>> ds
Dataset({
features: ['instruction', 'input', 'output'],
num_rows: 21155
})
```
```python
>>> ds[0]
{'instruction': 'KCC 담당자의 이름을 알려줘',
'input': 'KCC 담당자',
'output': '담당자는 박상찬 책임입니다.'}
```"
Junnos/luckyvicky,"{""license"": ""mit"", ""task_categories"": [""text2text-generation""], ""language"": [""ko""], ""pretty_name"": ""\uc6d0\uc601\uc801 \uc0ac\uace0"", ""tags"": [""lifestyle""], ""size_categories"": [""n<1K""]}",### Wonyoung-style Thinking (원영적 사고) Dataset
nayohan/Magpie-Phi3-Pro-300K-Filtered-ko,"{""dataset_info"": {""features"": [{""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""response"", ""dtype"": ""string""}, {""name"": ""intent"", ""dtype"": ""string""}, {""name"": ""knowledge"", ""dtype"": ""string""}, {""name"": ""quality_explanation"", ""dtype"": ""string""}, {""name"": ""model"", ""dtype"": ""string""}, {""name"": ""gen_input_configs"", ""struct"": [{""name"": ""extract_input"", ""dtype"": ""string""}, {""name"": ""input_generator"", ""dtype"": ""string""}, {""name"": ""seed"", ""dtype"": ""null""}, {""name"": ""temperature"", ""dtype"": ""float64""}, {""name"": ""top_p"", ""dtype"": ""float64""}]}, {""name"": ""conversations"", ""list"": [{""name"": ""from"", ""dtype"": ""string""}, {""name"": ""value"", ""dtype"": ""string""}]}, {""name"": ""uuid"", ""dtype"": ""string""}, {""name"": ""task_category"", ""dtype"": ""string""}, {""name"": ""other_task_category"", ""sequence"": ""string""}, {""name"": ""task_category_generator"", ""dtype"": ""string""}, {""name"": ""difficulty"", ""dtype"": ""string""}, {""name"": ""difficulty_generator"", ""dtype"": ""string""}, {""name"": ""input_quality"", ""dtype"": ""string""}, {""name"": ""quality_generator"", ""dtype"": ""string""}, {""name"": ""llama_guard_2"", ""dtype"": ""string""}, {""name"": ""reward_model"", ""dtype"": ""string""}, {""name"": ""instruct_reward"", ""dtype"": ""float64""}, {""name"": ""min_neighbor_distance"", ""dtype"": ""float64""}, {""name"": ""repeat_count"", ""dtype"": ""int64""}, {""name"": ""min_similar_uuid"", ""dtype"": ""string""}, {""name"": ""instruction_length"", ""dtype"": ""int64""}, {""name"": ""response_length"", ""dtype"": ""int64""}, {""name"": ""language"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2086968303, ""num_examples"": 300000}], ""download_size"": 917420446, ""dataset_size"": 2086968303}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": 
""train"", ""path"": ""data/train-*""}]}], ""task_categories"": [""text-generation""], ""language"": [""ko""], ""tags"": [""instruction"", ""korean"", ""li""]}","Translated [Magpie-Align/Magpie-Phi3-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Phi3-Pro-300K-Filtered) using [nayohan/llama3-instrucTrans-enko-8b](https://huggingface.co/nayohan/llama3-instrucTrans-enko-8b).
This is a raw translation dataset. It needs to be filtered for repetitions generated by the model.
```
@misc{xu2024magpie,
title={Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing},
author={Zhangchen Xu and Fengqing Jiang and Luyao Niu and Yuntian Deng and Radha Poovendran and Yejin Choi and Bill Yuchen Lin},
year={2024},
eprint={2406.08464},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```"
nayohan/Evol-Instruct-Code-80k-v1-ko,"{""dataset_info"": {""features"": [{""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 126824292, ""num_examples"": 78264}], ""download_size"": 57428336, ""dataset_size"": 126824292}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""task_categories"": [""text-generation""], ""language"": [""ko""], ""tags"": [""instruction"", ""korean""], ""license"": ""cc-by-nc-sa-4.0""}","Translated [nickrosh/Evol-Instruct-Code-80k-v1](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) using [nayohan/llama3-instrucTrans-enko-8b](https://huggingface.co/nayohan/llama3-instrucTrans-enko-8b).
This is a raw translation dataset. It needs to be filtered for repetitions generated by the model."
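The card notes that the raw translation needs filtering for model-generated repetitions. One simple heuristic (an assumption on my part, not the author's published method) is to flag outputs in which some word n-gram repeats too many times:

```python
from collections import Counter

# Flag texts where any word n-gram occurs more than `max_repeats` times,
# a common symptom of degenerate translation-model output.
def has_repetition(text: str, n: int = 4, max_repeats: int = 3) -> bool:
    words = text.split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    return any(c > max_repeats for c in Counter(ngrams).values())

print(has_repetition("hello world " * 20))  # -> True
print(has_repetition("a normal, non-repetitive sentence"))  # -> False
```

Thresholds like `n=4` and `max_repeats=3` are arbitrary starting points and would need tuning against the actual translations.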
WueNLP/belebele-fleurs,"{""license"": ""cc-by-sa-4.0"", ""annotations_creators"": [""found""], ""language_creators"": [""expert-generated""], ""language"": [""af"", ""am"", ""ar"", ""az"", ""as"", ""bm"", ""bn"", ""bo"", ""bg"", ""ca"", ""cs"", ""ku"", ""da"", ""de"", ""el"", ""en"", ""es"", ""et"", ""eu"", ""fi"", ""fr"", ""ff"", ""om"", ""gu"", ""gn"", ""ht"", ""ha"", ""he"", ""hi"", ""hr"", ""hu"", ""hy"", ""ig"", ""id"", ""it"", ""is"", ""jv"", ""ja"", ""ka"", ""kn"", ""kk"", ""mn"", ""km"", ""rw"", ""ky"", ""ko"", ""lo"", ""ln"", ""lt"", ""lg"", ""lv"", ""ml"", ""mr"", ""mk"", ""mt"", ""mi"", ""my"", ""nl"", ""no"", ""ne"", ""ny"", ""or"", ""pa"", ""ps"", ""fa"", ""mg"", ""pl"", ""pt"", ""ro"", ""ru"", ""sn"", ""si"", ""sl"", ""sv"", ""sk"", ""sd"", ""sw"", ""ta"", ""te"", ""tg"", ""tl"", ""th"", ""ti"", ""tn"", ""ts"", ""tr"", ""uk"", ""ur"", ""uz"", ""vi"", ""wo"", ""xh"", ""yo"", ""zh"", ""ms"", ""zu"", ""multilingual""], ""multilinguality"": [""multilingual""], ""task_categories"": [""audio-classification"", ""automatic-speech-recognition"", ""audio-text-to-text"", ""text-to-speech"", ""question-answering"", ""document-question-answering""], ""pretty_name"": ""Belebele-Fleurs"", ""dataset_info"": [{""config_name"": ""afr_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 
16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 1321576984.0, ""num_examples"": 309}], ""download_size"": 697429945, ""dataset_size"": 1321576984.0}, {""config_name"": ""amh_Ethi"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": 
""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4735350571.0, ""num_examples"": 782}], ""download_size"": 2518582786, ""dataset_size"": 4735350571.0}, {""config_name"": ""arb_Arab"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", 
""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 1516344836.0, ""num_examples"": 387}], ""download_size"": 802576290, ""dataset_size"": 1516344836.0}, {""config_name"": ""asm_Beng"", 
""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, 
{""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5596186237.0, ""num_examples"": 824}], ""download_size"": 3017636971, ""dataset_size"": 5596186237.0}, {""config_name"": ""azj_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": 
""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4695408159.0, ""num_examples"": 759}], ""download_size"": 2513843262, ""dataset_size"": 4695408159.0}, {""config_name"": ""ben_Beng"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": 
""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5752882522.0, ""num_examples"": 855}], ""download_size"": 3136282526, ""dataset_size"": 5752882522.0}, {""config_name"": ""bul_Cyrl"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, 
{""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4689171166.0, ""num_examples"": 873}], ""download_size"": 2505604703, ""dataset_size"": 4689171166.0}, {""config_name"": ""cat_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": 
""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3478577853.0, ""num_examples"": 652}], ""download_size"": 1857877410, ""dataset_size"": 3478577853.0}, {""config_name"": ""ceb_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": 
""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5328160039.0, ""num_examples"": 783}], ""download_size"": 2836984523, ""dataset_size"": 5328160039.0}, {""config_name"": ""ces_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", 
""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4184004947.0, ""num_examples"": 802}], ""download_size"": 2222498009, ""dataset_size"": 4184004947.0}, {""config_name"": ""ckb_Arab"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": 
""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5313934366.0, ""num_examples"": 842}], ""download_size"": 2818314961, ""dataset_size"": 5313934366.0}, {""config_name"": ""dan_Latn"", ""features"": 
[{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": 
""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3579744686.0, ""num_examples"": 696}], ""download_size"": 1921138648, ""dataset_size"": 3579744686.0}, {""config_name"": ""deu_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, 
{""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4655172301.0, ""num_examples"": 804}], ""download_size"": 2514299817, ""dataset_size"": 4655172301.0}, {""config_name"": ""ell_Grek"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, 
{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4676081230.0, ""num_examples"": 837}], ""download_size"": 2495880530, ""dataset_size"": 4676081230.0}, {""config_name"": ""eng_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": 
""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3795562630.0, ""num_examples"": 844}], ""download_size"": 2048956533, ""dataset_size"": 3795562630.0}, {""config_name"": ""est_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": 
""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3738347953.0, ""num_examples"": 736}], ""download_size"": 1996763639, ""dataset_size"": 3738347953.0}, {""config_name"": ""fin_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": 
""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4808962381.0, ""num_examples"": 826}], ""download_size"": 2583594472, ""dataset_size"": 4808962381.0}, {""config_name"": ""fra_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", 
""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4771909151.0, ""num_examples"": 839}], ""download_size"": 2537876484, ""dataset_size"": 4771909151.0}, {""config_name"": ""fuv_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": 
""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 6574923107.0, ""num_examples"": 848}], ""download_size"": 3501465907, ""dataset_size"": 6574923107.0}, {""config_name"": ""gaz_Latn"", ""features"": 
[{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": 
""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 1208450421.0, ""num_examples"": 252}], ""download_size"": 633794246, ""dataset_size"": 1208450421.0}, {""config_name"": ""guj_Gujr"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, 
{""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4941743312.0, ""num_examples"": 880}], ""download_size"": 2674707067, ""dataset_size"": 4941743312.0}, {""config_name"": ""hau_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, 
{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 6720094918.0, ""num_examples"": 838}], ""download_size"": 3664051325, ""dataset_size"": 6720094918.0}, {""config_name"": ""heb_Hebr"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": 
""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4723937383.0, ""num_examples"": 878}], ""download_size"": 2537321008, ""dataset_size"": 4723937383.0}, {""config_name"": ""hin_Deva"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": 
""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 2242460095.0, ""num_examples"": 515}], ""download_size"": 1210901498, ""dataset_size"": 2242460095.0}, {""config_name"": ""hrv_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": 
""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5934802411.0, ""num_examples"": 896}], ""download_size"": 3121833679, ""dataset_size"": 5934802411.0}, {""config_name"": ""hun_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", 
""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5104748515.0, ""num_examples"": 879}], ""download_size"": 2745994747, ""dataset_size"": 5104748515.0}, {""config_name"": ""hye_Armn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": 
""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5418654048.0, ""num_examples"": 861}], ""download_size"": 2886102322, ""dataset_size"": 5418654048.0}, {""config_name"": ""ibo_Latn"", ""features"": 
[{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": 
""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 7449052399.0, ""num_examples"": 838}], ""download_size"": 3923089955, ""dataset_size"": 7449052399.0}, {""config_name"": ""ind_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, 
{""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4259773565.0, ""num_examples"": 783}], ""download_size"": 2307173322, ""dataset_size"": 4259773565.0}, {""config_name"": ""isl_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, 
{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 205392451.0, ""num_examples"": 81}], ""download_size"": 108289354, ""dataset_size"": 205392451.0}, {""config_name"": ""ita_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": 
""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5068341448.0, ""num_examples"": 851}], ""download_size"": 2722452061, ""dataset_size"": 5068341448.0}, {""config_name"": ""jav_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": 
""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5484070653.0, ""num_examples"": 835}], ""download_size"": 2959104967, ""dataset_size"": 5484070653.0}, {""config_name"": ""jpn_Jpan"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": 
""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 2946432247.0, ""num_examples"": 590}], ""download_size"": 1600204018, ""dataset_size"": 2946432247.0}, {""config_name"": ""kan_Knda"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", 
""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3554826830.0, ""num_examples"": 606}], ""download_size"": 1975103591, ""dataset_size"": 3554826830.0}, {""config_name"": ""kat_Geor"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": 
""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 1721617148.0, ""num_examples"": 372}], ""download_size"": 927696223, ""dataset_size"": 1721617148.0}, {""config_name"": ""kaz_Cyrl"", ""features"": 
[{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": 
""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 6414631274.0, ""num_examples"": 870}], ""download_size"": 3429143107, ""dataset_size"": 6414631274.0}, {""config_name"": ""kea_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, 
{""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5144742658.0, ""num_examples"": 770}], ""download_size"": 2797781391, ""dataset_size"": 5144742658.0}, {""config_name"": ""khk_Cyrl"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, 
{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5162621083.0, ""num_examples"": 869}], ""download_size"": 2720720587, ""dataset_size"": 5162621083.0}, {""config_name"": ""khm_Khmr"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": 
""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 2485030372.0, ""num_examples"": 439}], ""download_size"": 1331377064, ""dataset_size"": 2485030372.0}, {""config_name"": ""kir_Cyrl"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": 
""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4853152090.0, ""num_examples"": 811}], ""download_size"": 2578226837, ""dataset_size"": 4853152090.0}, {""config_name"": ""kor_Hang"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": 
""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 2578906312.0, ""num_examples"": 535}], ""download_size"": 1374610547, ""dataset_size"": 2578906312.0}, {""config_name"": ""lao_Laoo"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", 
""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 1743552264.0, ""num_examples"": 346}], ""download_size"": 947988554, ""dataset_size"": 1743552264.0}, {""config_name"": ""lin_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": 
""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 7700082152.0, ""num_examples"": 778}], ""download_size"": 4174517500, ""dataset_size"": 7700082152.0}, {""config_name"": ""lit_Latn"", ""features"": 
[{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": 
""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5076063385.0, ""num_examples"": 834}], ""download_size"": 2686738787, ""dataset_size"": 5076063385.0}, {""config_name"": ""lug_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, 
{""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5464636888.0, ""num_examples"": 703}], ""download_size"": 2897617677, ""dataset_size"": 5464636888.0}, {""config_name"": ""luo_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, 
{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3087291382.0, ""num_examples"": 512}], ""download_size"": 1584768164, ""dataset_size"": 3087291382.0}, {""config_name"": ""lvs_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": 
""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 2682008095.0, ""num_examples"": 555}], ""download_size"": 1444964552, ""dataset_size"": 2682008095.0}, {""config_name"": ""mal_Mlym"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": 
""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5386280878.0, ""num_examples"": 809}], ""download_size"": 2955248301, ""dataset_size"": 5386280878.0}, {""config_name"": ""mar_Deva"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": 
""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 6465055207.0, ""num_examples"": 869}], ""download_size"": 3520545265, ""dataset_size"": 6465055207.0}, {""config_name"": ""mkd_Cyrl"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", 
""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3390202546.0, ""num_examples"": 667}], ""download_size"": 1826472415, ""dataset_size"": 3390202546.0}, {""config_name"": ""mlt_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": 
""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5323890122.0, ""num_examples"": 816}], ""download_size"": 2853160463, ""dataset_size"": 5323890122.0}, {""config_name"": ""mri_Latn"", ""features"": 
[{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": 
""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 9910328387.0, ""num_examples"": 877}], ""download_size"": 5366184778, ""dataset_size"": 9910328387.0}, {""config_name"": ""mya_Mymr"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, 
{""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 6537958064.0, ""num_examples"": 864}], ""download_size"": 3473242775, ""dataset_size"": 6537958064.0}, {""config_name"": ""nld_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, 
{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 2936062189.0, ""num_examples"": 674}], ""download_size"": 1580031502, ""dataset_size"": 2936062189.0}, {""config_name"": ""nob_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": 
""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3223729748.0, ""num_examples"": 635}], ""download_size"": 1727069260, ""dataset_size"": 3223729748.0}, {""config_name"": ""npi_Deva"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": 
""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5523444178.0, ""num_examples"": 876}], ""download_size"": 2934225115, ""dataset_size"": 5523444178.0}, {""config_name"": ""nso_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": 
""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5214380593.0, ""num_examples"": 569}], ""download_size"": 2820054584, ""dataset_size"": 5214380593.0}, {""config_name"": ""nya_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", 
""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5171597207.0, ""num_examples"": 752}], ""download_size"": 2798443867, ""dataset_size"": 5171597207.0}, {""config_name"": ""ory_Orya"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": 
""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 952123794.0, ""num_examples"": 220}], ""download_size"": 530607239, ""dataset_size"": 952123794.0}, {""config_name"": ""pan_Guru"", ""features"": 
[{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": 
""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 1790548604.0, ""num_examples"": 396}], ""download_size"": 981900813, ""dataset_size"": 1790548604.0}, {""config_name"": ""pbt_Arab"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, 
{""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3363027226.0, ""num_examples"": 628}], ""download_size"": 1793620479, ""dataset_size"": 3363027226.0}, {""config_name"": ""pes_Arab"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, 
{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5254241288.0, ""num_examples"": 673}], ""download_size"": 2841830079, ""dataset_size"": 5254241288.0}, {""config_name"": ""pol_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": 
""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4081649841.0, ""num_examples"": 765}], ""download_size"": 2178140885, ""dataset_size"": 4081649841.0}, {""config_name"": ""por_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": 
""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5067443206.0, ""num_examples"": 791}], ""download_size"": 2728057508, ""dataset_size"": 5067443206.0}, {""config_name"": ""ron_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": 
""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4918450774.0, ""num_examples"": 815}], ""download_size"": 2651664539, ""dataset_size"": 4918450774.0}, {""config_name"": ""rus_Cyrl"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", 
""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4088257099.0, ""num_examples"": 819}], ""download_size"": 2205120246, ""dataset_size"": 4088257099.0}, {""config_name"": ""slk_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": 
""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 2314088471.0, ""num_examples"": 513}], ""download_size"": 1275825426, ""dataset_size"": 2314088471.0}, {""config_name"": ""slv_Latn"", ""features"": 
[{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": 
""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3512597020.0, ""num_examples"": 724}], ""download_size"": 1860950627, ""dataset_size"": 3512597020.0}, {""config_name"": ""sna_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, 
{""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4943509770.0, ""num_examples"": 735}], ""download_size"": 2636061545, ""dataset_size"": 4943509770.0}, {""config_name"": ""snd_Arab"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, 
{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5905092711.0, ""num_examples"": 878}], ""download_size"": 3199596548, ""dataset_size"": 5905092711.0}, {""config_name"": ""som_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": 
""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 7045590404.0, ""num_examples"": 874}], ""download_size"": 3795317446, ""dataset_size"": 7045590404.0}, {""config_name"": ""spa_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": 
""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3379035485.0, ""num_examples"": 659}], ""download_size"": 1802935790, ""dataset_size"": 3379035485.0}, {""config_name"": ""srp_Cyrl"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": 
""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4608181984.0, ""num_examples"": 766}], ""download_size"": 2492717750, ""dataset_size"": 4608181984.0}, {""config_name"": ""swe_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", 
""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3621750448.0, ""num_examples"": 681}], ""download_size"": 1878869280, ""dataset_size"": 3621750448.0}, {""config_name"": ""swh_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": 
""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5654478598.0, ""num_examples"": 780}], ""download_size"": 3051736604, ""dataset_size"": 5654478598.0}, {""config_name"": ""tam_Taml"", ""features"": 
[{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": 
""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3849928833.0, ""num_examples"": 714}], ""download_size"": 2060754219, ""dataset_size"": 3849928833.0}, {""config_name"": ""tel_Telu"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, 
{""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 2710509575.0, ""num_examples"": 567}], ""download_size"": 1453506468, ""dataset_size"": 2710509575.0}, {""config_name"": ""tgk_Cyrl"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, 
{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3466965147.0, ""num_examples"": 632}], ""download_size"": 1836266294, ""dataset_size"": 3466965147.0}, {""config_name"": ""tgl_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": 
""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3521747690.0, ""num_examples"": 505}], ""download_size"": 1944891399, ""dataset_size"": 3521747690.0}, {""config_name"": ""tha_Thai"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": 
""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4391092533.0, ""num_examples"": 745}], ""download_size"": 2373882345, ""dataset_size"": 4391092533.0}, {""config_name"": ""tur_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": 
""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 3781792503.0, ""num_examples"": 706}], ""download_size"": 2023965910, ""dataset_size"": 3781792503.0}, {""config_name"": ""ukr_Cyrl"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", 
""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4141799202.0, ""num_examples"": 773}], ""download_size"": 2217292261, ""dataset_size"": 4141799202.0}, {""config_name"": ""urd_Arab"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": 
""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 2162223158.0, ""num_examples"": 482}], ""download_size"": 1146920089, ""dataset_size"": 2162223158.0}, {""config_name"": ""uzn_Latn"", ""features"": 
[{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": 
""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4941238038.0, ""num_examples"": 812}], ""download_size"": 2646926766, ""dataset_size"": 4941238038.0}, {""config_name"": ""vie_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, 
{""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4871565136.0, ""num_examples"": 847}], ""download_size"": 2621996970, ""dataset_size"": 4871565136.0}, {""config_name"": ""wol_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, 
{""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 2675304049.0, ""num_examples"": 495}], ""download_size"": 1327127473, ""dataset_size"": 2675304049.0}, {""config_name"": ""xho_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": 
""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 7209281056.0, ""num_examples"": 900}], ""download_size"": 3841482084, ""dataset_size"": 7209281056.0}, {""config_name"": ""yor_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": 
""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4500184372.0, ""num_examples"": 652}], ""download_size"": 2483974070, ""dataset_size"": 4500184372.0}, {""config_name"": ""zho_Hans"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": 
""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 5326687452.0, ""num_examples"": 888}], ""download_size"": 2859612274, ""dataset_size"": 5326687452.0}, {""config_name"": ""zho_Hant"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", 
""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 2787256479.0, ""num_examples"": 527}], ""download_size"": 1516121058, ""dataset_size"": 2787256479.0}, {""config_name"": ""zsm_Latn"", ""features"": [{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": 
""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": ""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 4323922189.0, ""num_examples"": 749}], ""download_size"": 2323029241, ""dataset_size"": 4323922189.0}, {""config_name"": ""zul_Latn"", ""features"": 
[{""name"": ""link"", ""dtype"": ""string""}, {""name"": ""question_number"", ""dtype"": ""int64""}, {""name"": ""flores_passage"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""mc_answer1"", ""dtype"": ""string""}, {""name"": ""mc_answer2"", ""dtype"": ""string""}, {""name"": ""mc_answer3"", ""dtype"": ""string""}, {""name"": ""mc_answer4"", ""dtype"": ""string""}, {""name"": ""correct_answer_num"", ""dtype"": ""string""}, {""name"": ""dialect"", ""dtype"": ""string""}, {""name"": ""ds"", ""dtype"": ""timestamp[us]""}, {""name"": ""sentence_data"", ""list"": [{""name"": ""URL"", ""dtype"": ""string""}, {""name"": ""audio"", ""sequence"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""domain"", ""dtype"": ""string""}, {""name"": ""filename"", ""sequence"": ""string""}, {""name"": ""fleurs_id"", ""dtype"": ""int64""}, {""name"": ""full_paragraph"", ""dtype"": ""bool""}, {""name"": ""gender"", ""sequence"": ""string""}, {""name"": ""has_hyperlink"", ""dtype"": ""int64""}, {""name"": ""has_image"", ""dtype"": ""int64""}, {""name"": ""id"", ""dtype"": ""int64""}, {""name"": ""num_samples"", ""sequence"": ""int64""}, {""name"": ""raw_transcription"", ""dtype"": ""string""}, {""name"": ""seamlessm4t_asr"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_cer"", ""sequence"": ""float64""}, {""name"": ""seamlessm4t_asr_translation"", ""sequence"": ""string""}, {""name"": ""seamlessm4t_asr_wer"", ""sequence"": ""float64""}, {""name"": ""sentence"", ""dtype"": ""string""}, {""name"": ""sentence_idx"", ""dtype"": ""int64""}, {""name"": ""speaker_id"", ""sequence"": ""int64""}, {""name"": ""split"", ""sequence"": ""string""}, {""name"": ""topic"", ""dtype"": ""string""}, {""name"": ""transcription"", ""dtype"": ""string""}, {""name"": ""whisper_asr"", ""sequence"": ""string""}, {""name"": ""whisper_asr_cer"", ""sequence"": ""float64""}, {""name"": ""whisper_asr_translation"", ""sequence"": ""string""}, {""name"": 
""whisper_asr_wer"", ""sequence"": ""float64""}]}], ""splits"": [{""name"": ""test"", ""num_bytes"": 7229958526.0, ""num_examples"": 838}], ""download_size"": 3874798473, ""dataset_size"": 7229958526.0}], ""configs"": [{""config_name"": ""afr_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/afr_Latn/test-*""}]}, {""config_name"": ""amh_Ethi"", ""data_files"": [{""split"": ""test"", ""path"": ""data/amh_Ethi/test-*""}]}, {""config_name"": ""arb_Arab"", ""data_files"": [{""split"": ""test"", ""path"": ""data/arb_Arab/test-*""}]}, {""config_name"": ""asm_Beng"", ""data_files"": [{""split"": ""test"", ""path"": ""data/asm_Beng/test-*""}]}, {""config_name"": ""azj_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/azj_Latn/test-*""}]}, {""config_name"": ""ben_Beng"", ""data_files"": [{""split"": ""test"", ""path"": ""data/ben_Beng/test-*""}]}, {""config_name"": ""bul_Cyrl"", ""data_files"": [{""split"": ""test"", ""path"": ""data/bul_Cyrl/test-*""}]}, {""config_name"": ""cat_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/cat_Latn/test-*""}]}, {""config_name"": ""ceb_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/ceb_Latn/test-*""}]}, {""config_name"": ""ces_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/ces_Latn/test-*""}]}, {""config_name"": ""ckb_Arab"", ""data_files"": [{""split"": ""test"", ""path"": ""data/ckb_Arab/test-*""}]}, {""config_name"": ""dan_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/dan_Latn/test-*""}]}, {""config_name"": ""deu_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/deu_Latn/test-*""}]}, {""config_name"": ""ell_Grek"", ""data_files"": [{""split"": ""test"", ""path"": ""data/ell_Grek/test-*""}]}, {""config_name"": ""eng_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/eng_Latn/test-*""}]}, {""config_name"": ""est_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/est_Latn/test-*""}]}, {""config_name"": 
""fin_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/fin_Latn/test-*""}]}, {""config_name"": ""fra_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/fra_Latn/test-*""}]}, {""config_name"": ""fuv_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/fuv_Latn/test-*""}]}, {""config_name"": ""gaz_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/gaz_Latn/test-*""}]}, {""config_name"": ""guj_Gujr"", ""data_files"": [{""split"": ""test"", ""path"": ""data/guj_Gujr/test-*""}]}, {""config_name"": ""hau_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/hau_Latn/test-*""}]}, {""config_name"": ""heb_Hebr"", ""data_files"": [{""split"": ""test"", ""path"": ""data/heb_Hebr/test-*""}]}, {""config_name"": ""hin_Deva"", ""data_files"": [{""split"": ""test"", ""path"": ""data/hin_Deva/test-*""}]}, {""config_name"": ""hrv_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/hrv_Latn/test-*""}]}, {""config_name"": ""hun_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/hun_Latn/test-*""}]}, {""config_name"": ""hye_Armn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/hye_Armn/test-*""}]}, {""config_name"": ""ibo_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/ibo_Latn/test-*""}]}, {""config_name"": ""ind_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/ind_Latn/test-*""}]}, {""config_name"": ""isl_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/isl_Latn/test-*""}]}, {""config_name"": ""ita_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/ita_Latn/test-*""}]}, {""config_name"": ""jav_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/jav_Latn/test-*""}]}, {""config_name"": ""jpn_Jpan"", ""data_files"": [{""split"": ""test"", ""path"": ""data/jpn_Jpan/test-*""}]}, {""config_name"": ""kan_Knda"", ""data_files"": [{""split"": ""test"", ""path"": ""data/kan_Knda/test-*""}]}, {""config_name"": ""kat_Geor"", 
""data_files"": [{""split"": ""test"", ""path"": ""data/kat_Geor/test-*""}]}, {""config_name"": ""kaz_Cyrl"", ""data_files"": [{""split"": ""test"", ""path"": ""data/kaz_Cyrl/test-*""}]}, {""config_name"": ""kea_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/kea_Latn/test-*""}]}, {""config_name"": ""khk_Cyrl"", ""data_files"": [{""split"": ""test"", ""path"": ""data/khk_Cyrl/test-*""}]}, {""config_name"": ""khm_Khmr"", ""data_files"": [{""split"": ""test"", ""path"": ""data/khm_Khmr/test-*""}]}, {""config_name"": ""kir_Cyrl"", ""data_files"": [{""split"": ""test"", ""path"": ""data/kir_Cyrl/test-*""}]}, {""config_name"": ""kor_Hang"", ""data_files"": [{""split"": ""test"", ""path"": ""data/kor_Hang/test-*""}]}, {""config_name"": ""lao_Laoo"", ""data_files"": [{""split"": ""test"", ""path"": ""data/lao_Laoo/test-*""}]}, {""config_name"": ""lin_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/lin_Latn/test-*""}]}, {""config_name"": ""lit_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/lit_Latn/test-*""}]}, {""config_name"": ""lug_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/lug_Latn/test-*""}]}, {""config_name"": ""luo_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/luo_Latn/test-*""}]}, {""config_name"": ""lvs_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/lvs_Latn/test-*""}]}, {""config_name"": ""mal_Mlym"", ""data_files"": [{""split"": ""test"", ""path"": ""data/mal_Mlym/test-*""}]}, {""config_name"": ""mar_Deva"", ""data_files"": [{""split"": ""test"", ""path"": ""data/mar_Deva/test-*""}]}, {""config_name"": ""mkd_Cyrl"", ""data_files"": [{""split"": ""test"", ""path"": ""data/mkd_Cyrl/test-*""}]}, {""config_name"": ""mlt_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/mlt_Latn/test-*""}]}, {""config_name"": ""mri_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/mri_Latn/test-*""}]}, {""config_name"": ""mya_Mymr"", ""data_files"": 
[{""split"": ""test"", ""path"": ""data/mya_Mymr/test-*""}]}, {""config_name"": ""nld_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/nld_Latn/test-*""}]}, {""config_name"": ""nob_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/nob_Latn/test-*""}]}, {""config_name"": ""npi_Deva"", ""data_files"": [{""split"": ""test"", ""path"": ""data/npi_Deva/test-*""}]}, {""config_name"": ""nso_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/nso_Latn/test-*""}]}, {""config_name"": ""nya_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/nya_Latn/test-*""}]}, {""config_name"": ""ory_Orya"", ""data_files"": [{""split"": ""test"", ""path"": ""data/ory_Orya/test-*""}]}, {""config_name"": ""pan_Guru"", ""data_files"": [{""split"": ""test"", ""path"": ""data/pan_Guru/test-*""}]}, {""config_name"": ""pbt_Arab"", ""data_files"": [{""split"": ""test"", ""path"": ""data/pbt_Arab/test-*""}]}, {""config_name"": ""pes_Arab"", ""data_files"": [{""split"": ""test"", ""path"": ""data/pes_Arab/test-*""}]}, {""config_name"": ""pol_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/pol_Latn/test-*""}]}, {""config_name"": ""por_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/por_Latn/test-*""}]}, {""config_name"": ""ron_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/ron_Latn/test-*""}]}, {""config_name"": ""rus_Cyrl"", ""data_files"": [{""split"": ""test"", ""path"": ""data/rus_Cyrl/test-*""}]}, {""config_name"": ""slk_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/slk_Latn/test-*""}]}, {""config_name"": ""slv_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/slv_Latn/test-*""}]}, {""config_name"": ""sna_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/sna_Latn/test-*""}]}, {""config_name"": ""snd_Arab"", ""data_files"": [{""split"": ""test"", ""path"": ""data/snd_Arab/test-*""}]}, {""config_name"": ""som_Latn"", ""data_files"": [{""split"": 
""test"", ""path"": ""data/som_Latn/test-*""}]}, {""config_name"": ""spa_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/spa_Latn/test-*""}]}, {""config_name"": ""srp_Cyrl"", ""data_files"": [{""split"": ""test"", ""path"": ""data/srp_Cyrl/test-*""}]}, {""config_name"": ""swe_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/swe_Latn/test-*""}]}, {""config_name"": ""swh_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/swh_Latn/test-*""}]}, {""config_name"": ""tam_Taml"", ""data_files"": [{""split"": ""test"", ""path"": ""data/tam_Taml/test-*""}]}, {""config_name"": ""tel_Telu"", ""data_files"": [{""split"": ""test"", ""path"": ""data/tel_Telu/test-*""}]}, {""config_name"": ""tgk_Cyrl"", ""data_files"": [{""split"": ""test"", ""path"": ""data/tgk_Cyrl/test-*""}]}, {""config_name"": ""tgl_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/tgl_Latn/test-*""}]}, {""config_name"": ""tha_Thai"", ""data_files"": [{""split"": ""test"", ""path"": ""data/tha_Thai/test-*""}]}, {""config_name"": ""tur_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/tur_Latn/test-*""}]}, {""config_name"": ""ukr_Cyrl"", ""data_files"": [{""split"": ""test"", ""path"": ""data/ukr_Cyrl/test-*""}]}, {""config_name"": ""urd_Arab"", ""data_files"": [{""split"": ""test"", ""path"": ""data/urd_Arab/test-*""}]}, {""config_name"": ""uzn_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/uzn_Latn/test-*""}]}, {""config_name"": ""vie_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/vie_Latn/test-*""}]}, {""config_name"": ""wol_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/wol_Latn/test-*""}]}, {""config_name"": ""xho_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/xho_Latn/test-*""}]}, {""config_name"": ""yor_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/yor_Latn/test-*""}]}, {""config_name"": ""zho_Hans"", ""data_files"": [{""split"": ""test"", ""path"": 
""data/zho_Hans/test-*""}]}, {""config_name"": ""zho_Hant"", ""data_files"": [{""split"": ""test"", ""path"": ""data/zho_Hant/test-*""}]}, {""config_name"": ""zsm_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/zsm_Latn/test-*""}]}, {""config_name"": ""zul_Latn"", ""data_files"": [{""split"": ""test"", ""path"": ""data/zul_Latn/test-*""}]}]}","# Belebele-Fleurs
Belebele-Fleurs is a dataset for evaluating two core tasks:
- **Multilingual Spoken Language Understanding (Listening Comprehension):** For each spoken paragraph, the task is to answer a multiple-choice question. The question and four answer choices are provided in text form.
- **Multilingual Long-Form Automatic Speech Recognition (ASR) with Diverse Speakers:** By concatenating sentence-level utterances, long-form audio clips (ranging from 30 seconds to 1 minute 30 seconds) can be created. These clips feature a diverse set of speakers, making the dataset suitable for robust ASR evaluations.
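The long-form clips described above can be assembled from the sentence-level recordings. Below is a minimal sketch (the `to_long_form` helper is illustrative, not part of the dataset) that concatenates one recording per sentence, in order, into a single audio array:

```python
import numpy as np

# Hypothetical helper (not provided by the dataset itself): stitch the ordered
# sentence-level utterances of one row into a single long-form clip by taking
# one recording per sentence.
def to_long_form(example, utterance_idx=0):
    """Concatenate one recording per sentence, preserving sentence order."""
    arrays = [
        np.asarray(sent["audio"][utterance_idx]["array"], dtype=np.float32)
        for sent in example["sentence_data"]
    ]
    return {"long_audio": np.concatenate(arrays), "sampling_rate": 16000}
```

With a loaded split, something like `ds.map(to_long_form)` would add the concatenated clip as a new column; picking `utterance_idx` differently (or at random) varies the speakers heard in each clip.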
## Dataset creation
This dataset processes and merges all available multilingual data from the Fleurs, Flores, and Belebele datasets.
It aligns the Belebele test subset with the corresponding segments from the intersected Fleurs-Flores data.
The processing pipeline involves the following steps:
1. Remove all silent and noisy files from Fleurs.
2. Match Fleurs sentences to Flores.
3. Match concatenated Flores sentences to Belebele for Fleurs-Flores paragraphs that are fully available.
4. Upload the merged and aligned dataset to a Hugging Face Hub repository.
Full details and scripts to compile this dataset are available at: [https://github.com/fdschmidt93/fleurs-slu](https://github.com/fdschmidt93/fleurs-slu)
## Example
```python
from datasets import load_dataset
eng_Latn = load_dataset("wuenlp/fleurs-belebele", "eng_Latn", split="test")
#
# Dataset({
# features: ['link', 'question_number', 'flores_passage', 'question', 'mc_answer1',
#            'mc_answer2', 'mc_answer3', 'mc_answer4', 'correct_answer_num',
#            'dialect', 'ds', 'sentence_data'],
# num_rows: 844
# })
```
`sentence_data: list[dict]` contains the ordered sentence-level data of the corresponding paragraph for each Belebele row in that language. Per-utterance fields (e.g. `audio`, `gender`, `speaker_id`) are lists, since each sentence may have several recordings from different speakers. See 'Usage' below for an example function that unwraps the sentence data as you would like to use it.
```python
eng_Latn[0][""sentence_data""]
[{'URL': 'https://en.wikibooks.org/wiki/Accordion/Right_hand',
'audio': [{'path': '9408178198244706031.wav', 'array': array([ 0. , 0. , 0. , ..., -0.00086391, -0.00147504, -0.0025661 ]), 'sampling_rate': 16000},
{'path': '12239315312712394265.wav', 'array': array([ 1.78813934e-07, -1.78813934e-07, 2.38418579e-07, ..., 6.80863857e-04, 5.23209572e-04, 6.05285168e-04]), 'sampling_rate': 16000}],
'domain': 'wikibooks',
'filename': ['9408178198244706031.wav', '12239315312712394265.wav'],
'fleurs_id': 479,
'full_paragraph': True,
'gender': ['FEMALE', 'MALE'],
'has_hyperlink': 0,
'has_image': 0,
'id': 479,
'num_samples': [184320, 161280],
 'raw_transcription': 'Make sure your hand is as relaxed as possible while still hitting all the notes correctly - also try not to make much extraneous motion with your fingers.',
'seamlessm4t_asr': ['Make sure your hand is as relaxed as possible when still hitting all the notes correctly. Also, try not to make much extraneous motion with your fingers.',
'make sure your hand is as relaxed as possible while still hitting all the notes correctly also try not to make much extraneous motion with your fingers'],
'seamlessm4t_asr_cer': [0.045454545454545456, 0.025974025974025976],
'seamlessm4t_asr_translation': ['Make sure your hand is as relaxed as possible when still hitting all the notes correctly. Also, try not to make much extraneous motion with your fingers.',
'make sure your hand is as relaxed as possible while still hitting all the notes correctly also try not to make much extraneous motion with your fingers'],
'seamlessm4t_asr_wer': [0.14285714285714285, 0.10714285714285714],
'sentence': 'Make sure your hand is as relaxed as possible while still hitting all the notes correctly - also try not to make much extraneous motion with your fingers.',
'sentence_idx': 0,
'speaker_id': [11, 9],
'split': ['train', 'train'],
'topic': 'accordion/right hand',
'transcription': 'make sure your hand is as relaxed as possible while still hitting all the notes correctly also try not to make much extraneous motion with your fingers',
'whisper_asr': ['Make sure your hand is as relaxed as possible when still hitting all the notes correctly. Also, try not to make much extraneous motion with your fingers.',
'Make sure your hand is as relaxed as possible while still hitting all the notes correctly. Also, try not to make much extraneous motion with your fingers.'],
'whisper_asr_cer': [0.045454545454545456, 0.025974025974025976],
'whisper_asr_translation': ['Make sure your hand is as relaxed as possible when still hitting all the notes correctly. Also, try not to make much extraneous motion with your fingers.',
'Make sure your hand is as relaxed as possible while still hitting all the notes correctly. Also, try not to make much extraneous motion with your fingers.'],
'whisper_asr_wer': [0.14285714285714285, 0.10714285714285714]},
# ... and remaining sentences
]
```
## Usage
Below is an example of how to use the provided function to select utterances from the Belebele-Fleurs dataset according to different criteria (e.g., minimizing or maximizing CER, or random selection). You can adjust the selection strategy (`strategy`) as needed.
After mapping, you will have the following columns with passages processed according to the selected criterion:
- `whisper_asr_flores_passage`
- `whisper_asr_translation_flores_passage`
- `seamlessm4t_asr_flores_passage`
- `seamlessm4t_asr_translation_flores_passage`
These contain concatenated transcripts or translations based on the chosen selection strategy.
### Selection Strategy
You can choose how you want to select utterances:
- `strategy=""best""`: Selects utterances with the minimal Character Error Rate (CER).
- `strategy=""worst""`: Selects utterances with the maximal CER.
- `strategy=""random""`: Selects utterances at random.
**Note:** The selection logic takes into account which models are supported for a given language. If a language is unsupported by one of the models, the function automatically adjusts to only consider CERs from the supported models.
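Stripped of the mapping machinery, the three strategies reduce to an argmin, an argmax, or a random draw over a sentence's per-utterance CER list; a minimal sketch (CER values below are illustrative, not taken from the dataset):

```python
import random

cers = [0.045, 0.026]  # per-utterance CERs for one sentence (illustrative values)

best_idx = min(range(len(cers)), key=lambda i: cers[i])    # strategy 'best'
worst_idx = max(range(len(cers)), key=lambda i: cers[i])   # strategy 'worst'
random_idx = random.randrange(len(cers))                   # strategy 'random'

print(best_idx, worst_idx)  # 1 0
```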
```python
import random
from typing import Any, Callable
def select_audio_mapper(
language: str,
strategy: str = ""best"",
) -> Callable[[dict[str, list[Any]]], dict[str, list[Any]]]:
""""""
Create a mapping function for selecting audio data based on CER.
Args:
language (str): Language code for filtering unsupported models.
strategy (str, optional): Selection strategy ('best', 'worst', or 'random'). Defaults to 'best'.
Returns:
Callable[[dict[str, list[Any]]], dict[str, list[Any]]]: A function for mapping dataset examples.
Raises:
ValueError: If an invalid selection strategy is provided.
""""""
keys = {
""audio"",
""filename"",
""gender"",
""num_samples"",
""seamlessm4t_asr"",
""seamlessm4t_asr_cer"",
""seamlessm4t_asr_translation"",
""seamlessm4t_asr_wer"",
""speaker_id"",
""split"",
""whisper_asr"",
""whisper_asr_cer"",
""whisper_asr_translation"",
""whisper_asr_wer"",
}
# Define unsupported languages for each model
seamless_unsupported = {
""ast_Latn"",
""hau_Latn"",
""kam_Latn"",
""kea_Latn"",
""lin_Latn"",
""mri_Latn"",
""nso_Latn"",
""oci_Latn"",
""tgl_Latn"",
""umb_Latn"",
""wol_Latn"",
""xho_Latn"",
}
whisper_unsupported = {
""ast_Latn"",
""ceb_Latn"",
""ckb_Arab"",
""fuv_Latn"",
""gle_Latn"",
""ibo_Latn"",
""kam_Latn"",
""kea_Latn"",
""kir_Cyrl"",
""lug_Latn"",
""luo_Latn"",
""nso_Latn"",
""tgl_Latn"",
""umb_Latn"",
""wol_Latn"",
""xho_Latn"",
""zul_Latn"",
}
# Define selection strategy
if strategy == ""best"":
select_func = lambda scores: min(range(len(scores)), key=lambda i: scores[i])
elif strategy == ""worst"":
select_func = lambda scores: max(range(len(scores)), key=lambda i: scores[i])
elif strategy == ""random"":
select_func = lambda scores: random.randint(0, len(scores) - 1)
else:
raise ValueError(""Invalid 'strategy'. Must be one of 'best', 'worst', or 'random'."")
# Determine which models are supported for the given language
if language not in whisper_unsupported and language not in seamless_unsupported:
models = [""whisper_asr_cer"", ""seamlessm4t_asr_cer""]
elif language in whisper_unsupported:
models = [""seamlessm4t_asr_cer""]
elif language in seamless_unsupported:
models = [""whisper_asr_cer""]
else:
models = [""whisper_asr_cer"", ""seamlessm4t_asr_cer""]
asr_keys = [
""whisper_asr"",
""whisper_asr_translation"",
""seamlessm4t_asr"",
""seamlessm4t_asr_translation"",
]
def map_fn(examples: dict[str, list[Any]]) -> dict[str, list[Any]]:
""""""
Map function to process dataset examples by selecting CER-based audio data.
Args:
examples (dict[str, list[Any]]): Dataset examples.
Returns:
dict[str, list[Any]]: Processed dataset examples.
""""""
sentence_data_containers: list[list[list]] = examples[""sentence_data""]
paragraphs = {k: [] for k in asr_keys}
for sentence_data in sentence_data_containers:
collected_sentence_data = []
for sentence in sentence_data:
cer_lists = [sentence[model] for model in models]
averaged_cer = [
sum(aligned_cer) / len(aligned_cer)
for aligned_cer in zip(*cer_lists)
]
argmin_idx = select_func(averaged_cer)
sentence_dict = {key: sentence[key][argmin_idx] for key in keys}
sentence_dict[""id""] = sentence[""id""]
collected_sentence_data.append(sentence_dict)
collected_sentence_data = list(
sorted(collected_sentence_data, key=lambda x: x[""id""])
)
for key in asr_keys:
texts = "" "".join(
[line[key].strip() for line in collected_sentence_data]
).strip()
paragraphs[key].append(texts)
for key in asr_keys:
examples[f""{key}_flores_passage""] = paragraphs[key]
return examples
return map_fn
from datasets import load_dataset
eng_Latn = load_dataset(""wuenlp/belebele-fleurs"", ""eng_Latn"", split=""test"")
mapper = select_audio_mapper(""eng_Latn"")
dataset = eng_Latn.map(
mapper, batched=True, batch_size=30, remove_columns=[""sentence_data""]
)
```
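For intuition, the model-averaging step inside `map_fn` is a column-wise mean over the CER lists of the supported models, followed by the strategy's selection; a toy sketch with invented CER values:

```python
whisper_cer = [0.05, 0.03]    # CER per utterance under Whisper (invented)
seamless_cer = [0.04, 0.02]   # CER per utterance under SeamlessM4T (invented)

# column-wise mean across models, one value per utterance
averaged = [sum(pair) / len(pair) for pair in zip(whisper_cer, seamless_cer)]

# utterance 1 has the lower average CER, so strategy 'best' selects index 1
best = min(range(len(averaged)), key=lambda i: averaged[i])
print(best)  # 1
```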
## Dataset statistics
| Language | Counts |
|:-----------|---------:|
| `eng_Latn` | 844 |
| `afr_Latn` | 309 |
| `amh_Ethi` | 782 |
| `arb_Arab` | 387 |
| `asm_Beng` | 824 |
| `azj_Latn` | 759 |
| `bul_Cyrl` | 873 |
| `ben_Beng` | 855 |
| `cat_Latn` | 652 |
| `ceb_Latn` | 783 |
| `ckb_Arab` | 842 |
| `zho_Hans` | 888 |
| `ces_Latn` | 802 |
| `dan_Latn` | 696 |
| `deu_Latn` | 804 |
| `ell_Grek` | 837 |
| `eng_Latn` | 844 |
| `spa_Latn` | 659 |
| `est_Latn` | 736 |
| `pes_Arab` | 673 |
| `fin_Latn` | 826 |
| `tgl_Latn` | 505 |
| `fra_Latn` | 839 |
| `guj_Gujr` | 880 |
| `afr_Latn` | 309 |
| `hau_Latn` | 838 |
| `heb_Hebr` | 878 |
| `hin_Deva` | 515 |
| `hrv_Latn` | 896 |
| `hun_Latn` | 879 |
| `hye_Armn` | 861 |
| `ind_Latn` | 783 |
| `ibo_Latn` | 838 |
| `isl_Latn` | 81 |
| `ita_Latn` | 851 |
| `jpn_Jpan` | 590 |
| `jav_Latn` | 835 |
| `kat_Geor` | 372 |
| `kea_Latn` | 770 |
| `kaz_Cyrl` | 870 |
| `khm_Khmr` | 439 |
| `kan_Knda` | 606 |
| `kor_Hang` | 535 |
| `kir_Cyrl` | 811 |
| `lug_Latn` | 703 |
| `lin_Latn` | 778 |
| `lao_Laoo` | 346 |
| `lit_Latn` | 834 |
| `luo_Latn` | 512 |
| `lvs_Latn` | 555 |
| `mri_Latn` | 877 |
| `mkd_Cyrl` | 667 |
| `mal_Mlym` | 809 |
| `khk_Cyrl` | 869 |
| `mar_Deva` | 869 |
| `zsm_Latn` | 749 |
| `mlt_Latn` | 816 |
| `mya_Mymr` | 864 |
| `nob_Latn` | 635 |
| `npi_Deva` | 876 |
| `nld_Latn` | 674 |
| `nso_Latn` | 569 |
| `nya_Latn` | 752 |
| `ory_Orya` | 220 |
| `pan_Guru` | 396 |
| `pol_Latn` | 765 |
| `pbt_Arab` | 628 |
| `por_Latn` | 791 |
| `ron_Latn` | 815 |
| `rus_Cyrl` | 819 |
| `snd_Arab` | 878 |
| `slk_Latn` | 513 |
| `slv_Latn` | 724 |
| `sna_Latn` | 735 |
| `som_Latn` | 874 |
| `srp_Cyrl` | 766 |
| `swe_Latn` | 681 |
| `swh_Latn` | 780 |
| `tam_Taml` | 714 |
| `tel_Telu` | 567 |
| `tgk_Cyrl` | 632 |
| `tha_Thai` | 745 |
| `tur_Latn` | 706 |
| `ukr_Cyrl` | 773 |
| `urd_Arab` | 482 |
| `uzn_Latn` | 812 |
| `vie_Latn` | 847 |
| `wol_Latn` | 495 |
| `xho_Latn` | 900 |
| `yor_Latn` | 652 |
| `zho_Hant` | 527 |
| `zul_Latn` | 838 |
| `fuv_Latn` | 848 |
| `gaz_Latn` | 252 |
## ASR Results
Complete per-language results can be found in `./results.csv`. This CSV file will be updated continuously as new results become available.
### Description
The usage of each split is described below.
- **Training / Validation**: The models are trained and validated on clean English paragraphs from the training and validation splits constructed by the compilation script provided by Belebele. For more details, refer to the script here: https://github.com/facebookresearch/belebele/blob/main/assemble_training_set.py. The resulting dataset is available at: [https://huggingface.co/datasets/WueNLP/belebele-fleurs-train-val-text](https://huggingface.co/datasets/WueNLP/belebele-fleurs-train-val-text)
- **Testing**: We concatenate the sentence-level in-language ASR and speech-to-English translations of SeamlessM4Tv2-Large and WhisperV3-Large to evaluate zero-shot cross-lingual transfer with `NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse`, and translate-test on the speech-to-English translations with `LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse`.
model | Input | Utterance-ASR-Quality | seed | LR | Batch Size | eng_Latn | avg |
:---------------------------------------------------------|:----------------------------------------|:------------------------|-------:|-------:|-------------:|:-----------|:------|
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large English Translation | best | 43 | 0.0001 | 32 | 96.0% | 65.4% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large English Translation | best | 42 | 0.0001 | 32 | 95.6% | 63.5% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large English Translation | best | 44 | 0.0001 | 32 | 94.7% | 62.6% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large English Translation | best | 44 | 0.0002 | 32 | 94.3% | 61.9% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large English Translation | best | 43 | 0.0002 | 32 | 95.3% | 61.7% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large English Translation | best | 42 | 0.0002 | 32 | 95.3% | 60.6% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large ASR | best | 43 | 0.0001 | 32 | 95.3% | 59.9% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large ASR | best | 43 | 0.0002 | 32 | 93.8% | 59.4% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large ASR | best | 44 | 0.0001 | 32 | 94.4% | 59.4% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large ASR | best | 42 | 0.0001 | 32 | 95.0% | 58.3% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large English Translation | best | 43 | 0.0003 | 32 | 92.8% | 57.9% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large English Translation | best | 43 | 0.0001 | 32 | 95.3% | 57.5% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large ASR | best | 44 | 0.0002 | 32 | 93.2% | 56.5% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large ASR | best | 43 | 0.0001 | 32 | 95.4% | 56.4% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large English Translation | best | 42 | 0.0003 | 32 | 93.4% | 56.4% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large English Translation | best | 42 | 0.0001 | 32 | 94.8% | 56.2% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large English Translation | best | 44 | 0.0001 | 32 | 94.0% | 55.8% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large ASR | best | 43 | 0.0002 | 32 | 94.1% | 55.4% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large ASR | best | 44 | 0.0001 | 32 | 94.3% | 55.3% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large English Translation | best | 44 | 0.0002 | 32 | 94.5% | 55.3% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large English Translation | best | 43 | 0.0002 | 32 | 94.7% | 55.3% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large ASR | best | 42 | 0.0002 | 32 | 94.1% | 54.8% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large ASR | best | 42 | 0.0001 | 32 | 94.9% | 54.6% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large English Translation | best | 44 | 0.0003 | 32 | 91.6% | 54.6% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large English Translation | best | 42 | 0.0002 | 32 | 94.4% | 54.3% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large ASR | best | 44 | 0.0002 | 32 | 93.5% | 53.6% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large ASR | best | 43 | 0.0003 | 32 | 91.0% | 52.7% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large English Translation | best | 43 | 0.0003 | 32 | 93.1% | 52.6% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large ASR | best | 42 | 0.0002 | 32 | 94.1% | 52.0% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large English Translation | best | 42 | 0.0003 | 32 | 92.9% | 51.7% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large ASR | best | 42 | 0.0003 | 32 | 93.2% | 50.1% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large ASR | best | 43 | 0.0003 | 32 | 90.9% | 50.1% |
LLM2Vec-Meta-Llama-3.1-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large English Translation | best | 44 | 0.0003 | 32 | 91.6% | 49.8% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large ASR | best | 42 | 0.0003 | 32 | 94.2% | 48.0% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | WhisperV3-Large ASR | best | 44 | 0.0003 | 32 | 25.5% | 25.1% |
NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse | SeamlessM4Tv2-Large ASR | best | 44 | 0.0003 | 32 | 26.9% | 24.9% |
# Citation
If you use this dataset, please cite the original Belebele dataset. A citation for our dataset will be released as soon as possible.
```
@inproceedings{bandarkar-etal-2024-belebele,
title = ""The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants"",
author = ""Bandarkar, Lucas and
Liang, Davis and
Muller, Benjamin and
Artetxe, Mikel and
Shukla, Satya Narayan and
Husa, Donald and
Goyal, Naman and
Krishnan, Abhinandan and
Zettlemoyer, Luke and
Khabsa, Madian"",
editor = ""Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek"",
booktitle = ""Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)"",
month = aug,
year = ""2024"",
address = ""Bangkok, Thailand"",
publisher = ""Association for Computational Linguistics"",
url = ""https://aclanthology.org/2024.acl-long.44"",
doi = ""10.18653/v1/2024.acl-long.44"",
pages = ""749--775"",
abstract = ""We present Belebele, a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. Significantly expanding the language coverage of natural language understanding (NLU) benchmarks, this dataset enables the evaluation of text models in high-, medium-, and low-resource languages. Each question is based on a short passage from the FLORES-200 dataset and has four multiple-choice answers. The questions were carefully curated to discriminate between models with different levels of general language comprehension. The English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. We use this dataset to evaluate the capabilities of multilingual masked language models (MLMs) and large language models (LLMs). We present extensive results and findings, notably that despite significant cross-lingual transfer in English-centric LLMs, much smaller MLMs pretrained on balanced multilingual data still understand far more languages. Overall, Belebele opens up new avenues for evaluating and analyzing the multilingual capabilities of NLP systems."",
}
```"
lamhieu/sharegpt_dialogue_base,"{""dataset_info"": {""features"": [{""name"": ""messages"", ""list"": [{""name"": ""content"", ""dtype"": ""string""}, {""name"": ""role"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 847547561, ""num_examples"": 111912}], ""download_size"": 383263271, ""dataset_size"": 847547561}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""license"": ""mit"", ""task_categories"": [""text-generation"", ""text2text-generation""], ""language"": [""en"", ""vi"", ""zh"", ""es"", ""pt"", ""ja"", ""ko""], ""size_categories"": [""100K`, consecutive `\n` characters, and similar artifacts were cleaned up.
The Wikisource link and the location of the original source material can be found in the dataset's `license` field."
CarrotAI/kmmlu-conversation-sample,"{""license"": ""mit"", ""task_categories"": [""text-generation""], ""language"": [""ko""]}","A sample conversation dataset generated from the KMMLU data.
It is a multi-turn dataset created for training purposes."
teddylee777/rag-eval-mini,"{""language"": [""ko""], ""license"": ""mit"", ""dataset_info"": {""features"": [{""name"": ""contexts"", ""dtype"": ""string""}, {""name"": ""evolution_type"", ""dtype"": ""string""}, {""name"": ""metadata"", ""dtype"": ""string""}, {""name"": ""episode_done"", ""dtype"": ""bool""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""ground_truth"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 309562, ""num_examples"": 95}, {""name"": ""korean_v1"", ""num_bytes"": 309562, ""num_examples"": 95}], ""download_size"": 157278, ""dataset_size"": 619124}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""korean_v1"", ""path"": ""data/korean_v1-*""}]}]}",
Yettiesoft/ragtruth-qa-ko,"{""task_categories"": [""question-answering""], ""language"": [""ko""], ""pretty_name"": ""translated ragtruth-qa data in korean"", ""size_categories"": [""10K
A Korean translation of the [ragtruth-qa](https://huggingface.co/datasets/flowaicom/formatted-ragtruth-qa) dataset, produced with gpt-4o.
## Dataset Details
### Dataset Description
- **Curated by:** [More Information Needed]
- **Language(s) (NLP):** [Korean]
- **License:** [Undecided]
### Dataset Sources [optional]
- **Repository:** [https://huggingface.co/datasets/flowaicom/formatted-ragtruth-qa]
## Uses
### Direct Use
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Data Collection and Processing
[More Information Needed]
#### Who are the source data producers?
[More Information Needed]
### Annotations [optional]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
#### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]"
taresco/xP3x_african_subset,"{""language"": [""aeb"", ""af"", ""ak"", ""ars"", ""ary"", ""arz"", ""bm"", ""ee"", ""en"", ""ff"", ""fr"", ""fon"", ""gaz"", ""ha"", ""ig"", ""kam"", ""ki"", ""knc"", ""ko"", ""lg"", ""ln"", ""luo"", ""pcm"", ""plt"", ""po"", ""nso"", ""ny"", ""rn"", ""rw"", ""sn"", ""so"", ""st"", ""ss"", ""swh"", ""ti"", ""tn"", ""tum"", ""tw"", ""umb"", ""xh"", ""wo"", ""yo"", ""zu""]}","# Dataset Card for xP3x
This version of xP3x includes only African languages, plus Arabic, English, French, and Portuguese.
See [Cohere/xP3x](https://huggingface.co/datasets/CohereForAI/xP3x) for full information on the dataset and its license."
didi0di/KoChatGPT,"{""license"": ""unknown"", ""dataset_info"": {""features"": [{""name"": ""prompt"", ""dtype"": ""string""}, {""name"": ""completion"", ""dtype"": ""string""}, {""name"": ""tokens"", ""dtype"": ""int64""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 4902713, ""num_examples"": 12000}], ""download_size"": 2876495, ""dataset_size"": 4902713}, ""task_categories"": [""text-generation""], ""language"": [""ko""], ""tags"": [""instruction"", ""SFT""], ""size_categories"": [""10K>
<사람>: 그렇군요. 무엇에 대해 이야기하고 싶으세요?
<챗봇>: 궁금하신 주제에 대해 이야기하고 싶으신가요? 예를 들어 여행, 식품, 스포츠, 등등 다양한 토론에 참여할 수 있습니다.
<사람>: 그래서 스포츠에 대해 이야기해볼까요?
<챗봇>: 그렇습니다. 스포츠에 대해 말씀하시겠습니까? 스포츠는 다양한 종목과 관련된 주제로 비슷한 경기를 비교하거나 새로운 경기를 분석할 수 있습니다. 또한 스포츠 관련 소식과 업데이트 등을 제공할 수 있습니다.
<사람>: 괜찮습니다. 소식이나 업데이트는 어떻게 받을 수 있나요?
<챗봇>: 귀하는 스포츠 관련 소식과 업데이트를 접할 수 있는 다양한 방법을 사용할 수 있습니다. 예를 들면 스포츠 사이트, 스포츠 앱, 스포츠 유튜브 채널 등을 통해 스포츠 소식과 업데이트를 받을 수 있습니다. 또한 소셜 미디어에서도 스포츠 소식과 업데이트를 받을 수 있습니다.
<사람>: 감사합니다. 이제 스포츠 소식과 업데이트를 받아볼 준비가 되었습니다.
<챗봇>: 그렇게 되었군요. 스포츠 소식과 업데이트를 받기 위해 귀하가 해야 할 준비는 무엇인가요? 스포츠 사이트, 스포츠 앱, 스포츠 유튜브 채널, 소셜 미디어 등을 통해 스포츠 소식과 업데이트를 받기 위해 준비해야 합니다.
```
## Data Source
https://github.com/airobotlab/KoChatGPT/tree/main"
UniDataPro/llm-training-dataset,"{""license"": ""cc-by-nc-nd-4.0"", ""task_categories"": [""text-generation"", ""text2text-generation"", ""text-classification""], ""language"": [""uk"", ""tr"", ""th"", ""sk"", ""pt"", ""pl"", ""fa"", ""nl"", ""mr"", ""ml"", ""ko"", ""ja"", ""it"", ""id"", ""hu"", ""hi"", ""ga"", ""el"", ""de"", ""fr"", ""fi"", ""es"", ""en"", ""da"", ""cs"", ""ca"", ""az"", ""ar""], ""tags"": [""llm"", ""llm fine-tuning "", ""finetuning "", ""logs"", ""llm training"", ""nlp"", ""question answering""]}","# LLM Fine-Tuning Dataset - 4,000,000+ logs, 32 languages
The dataset contains over **4 million logs** written in **32 languages** and is tailored for LLM training. It includes **log and response pairs** from **3 models**, and is designed for language models and instruction fine-tuning to achieve improved performance in various NLP tasks - **[Get the data](https://unidata.pro/datasets/llm-text-generation/?utm_source=huggingface&utm_medium=cpc&utm_campaign=llm)**
## Models used for text generation:
- **GPT-3.5**
- **GPT-4**
- **Uncensored GPT Version** (not included in the sample)
### Languages in the dataset:
*Ukrainian, Turkish, Thai, Swedish, Slovak, Portuguese (Brazil), Portuguese, Polish, Persian, Dutch, Marathi, Malayalam, Korean, Japanese, Italian, Indonesian, Hungarian, Hindi, Irish, Greek, German, French, Finnish, Esperanto, English, Danish, Czech, Chinese, Catalan, Azerbaijani, Arabic*

The dataset features a comprehensive training corpus with **prompts and answers**, suitable for text generation, question answering, and text classification. It enhances pre-trained LLMs, making it valuable for specific tasks and needs across various generation tasks in the realm of language processing
# 💵 Buy the Dataset: This is a limited preview of the data. To access the full dataset, please contact us at [https://unidata.pro](https://unidata.pro/datasets/llm-text-generation/?utm_source=huggingface&utm_medium=cpc&utm_campaign=llm) to discuss your requirements and pricing options.
## Content
Dataset has the following columns:
- **language**: language the prompt is made in,
- **model**: type of the model (GPT-3.5, GPT-4 and Uncensored GPT Version),
- **time**: time when the answer was generated,
- **text**: user's prompt,
- **response**: response generated by the model
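As a toy illustration of the column layout above (records and values are invented, not taken from the dataset), rows can be filtered by `model` like so:

```python
rows = [
    {'language': 'en', 'model': 'GPT-4', 'time': '2024-01-01 00:00:00',
     'text': 'Hi', 'response': 'Hello!'},
    {'language': 'ko', 'model': 'GPT-3.5', 'time': '2024-01-01 00:01:00',
     'text': '안녕', 'response': '안녕하세요!'},
]

# keep only rows generated by a given model
gpt4_rows = [row for row in rows if row['model'] == 'GPT-4']
print(len(gpt4_rows))  # 1
```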
The text corpus supports instruction tuning and supervised fine-tuning for larger language models, enhancing text generation and human language understanding. With a focus on generating human-like content, it is useful for evaluating LLMs, improving generation capabilities, and performing well in classification tasks. This dataset also assists in mitigating biases, supporting longer texts, and optimizing LLM architectures for more effective language processing and language understanding.
# 🌐 [UniData](https://unidata.pro/datasets/llm-text-generation/?utm_source=huggingface&utm_medium=cpc&utm_campaign=llm) provides high-quality datasets, content moderation, data collection and annotation for your AI/ML projects"
inswave/AISquare_Koalpaca_Orca_merged,{},"---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- ko
---"
FrancophonIA/TED_talks,"{""task_categories"": [""translation""], ""language"": [""en"", ""es"", ""fr"", ""he"", ""it"", ""ja"", ""ko"", ""pt"", ""ru"", ""tr"", ""zh""], ""multilinguality"": [""multilingual""], ""configs"": [{""config_name"": ""EN"", ""data_files"": [{""split"": ""train"", ""path"": ""ted_talks_en.csv""}]}, {""config_name"": ""ES"", ""data_files"": [{""split"": ""train"", ""path"": ""ted_talks_es.csv""}]}, {""config_name"": ""FR"", ""data_files"": [{""split"": ""train"", ""path"": ""ted_talks_fr.csv""}]}, {""config_name"": ""HE"", ""data_files"": [{""split"": ""train"", ""path"": ""ted_talks_he.csv""}]}, {""config_name"": ""IT"", ""data_files"": [{""split"": ""train"", ""path"": ""ted_talks_it.csv""}]}, {""config_name"": ""JA"", ""data_files"": [{""split"": ""train"", ""path"": ""ted_talks_ja.csv""}]}, {""config_name"": ""KO"", ""data_files"": [{""split"": ""train"", ""path"": ""ted_talks_ko.csv""}]}, {""config_name"": ""PT-BR"", ""data_files"": [{""split"": ""train"", ""path"": ""ted_talks_pt-br.csv""}]}, {""config_name"": ""RU"", ""data_files"": [{""split"": ""train"", ""path"": ""ted_talks_ru.csv""}]}, {""config_name"": ""TR"", ""data_files"": [{""split"": ""train"", ""path"": ""ted_talks_tr.csv""}]}, {""config_name"": ""ZH-CN"", ""data_files"": [{""split"": ""train"", ""path"": ""ted_talks_zh-cn.csv""}]}, {""config_name"": ""ZH-TW"", ""data_files"": [{""split"": ""train"", ""path"": ""ted_talks_zh-tw.csv""}]}]}","> [!NOTE]
> Dataset origin: https://www.kaggle.com/datasets/miguelcorraljr/ted-ultimate-dataset
## Context
TED is devoted to spreading powerful ideas in just about any topic. These datasets contain over 4,000 TED talks including transcripts in many languages.
If you would like a dataset for a language that is not listed below or in a different file format (JSON, SQL, etc.), please check out my Python module – [TEDscraper](https://github.com/corralm/ted-scraper).
## Attributes
| Attribute | Description | Data Type |
|----------------|------------------------------------------------|------------|
| talk_id | Talk identification number provided by TED | int |
| title | Title of the talk | string |
| speaker_1 | First speaker in TED's speaker list | string |
| speakers | Speakers in the talk | dictionary |
| occupations | *Occupations of the speakers | dictionary |
| about_speakers | *Blurb about each speaker | dictionary |
| views | Count of views | int |
| recorded_date | Date the talk was recorded | string |
| published_date | Date the talk was published to TED.com         | string     |
| event | Event or medium in which the talk was given | string |
| native_lang | Language the talk was given in | string |
| available_lang | All available languages (lang_code) for a talk | list |
| comments | Count of comments | int |
| duration | Duration in seconds | int |
| topics | Related tags or topics for the talk | list |
| related_talks | Related talks (key='talk_id', value='title') | dictionary |
| url | URL of the talk | string |
| description | Description of the talk | string |
| transcript | Full transcript of the talk | string |
*The dictionary key maps to the speaker in ‘speakers’.
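Since the list- and dictionary-typed attributes are stored as strings in the CSV files, they need to be parsed back after loading; a minimal sketch over an invented one-row frame (the real files are the `ted_talks_*.csv` listed in the configs):

```python
import ast

import pandas as pd

# Invented row mirroring a few columns of the schema above.
df = pd.DataFrame([{'talk_id': 1, 'title': 'Example talk',
                    'topics': "['science', 'culture']"}])

# list/dictionary columns arrive as strings; literal_eval restores them
df['topics'] = df['topics'].apply(ast.literal_eval)
print(df.loc[0, 'topics'])  # ['science', 'culture']
```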
## Meta
Author: Miguel Corral Jr.
Email: corraljrmiguel@gmail.com
LinkedIn: https://www.linkedin.com/in/iMiguel/
GitHub: https://github.com/corralm
## License
Distributed under the Creative Commons license – Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)."
hajun1020/korean_profanity_masking,{},"---
license: apache-2.0
task_categories:
- fill-mask
language:
- ko
size_categories:
- 1M 🚨 Disclaimer: All models and datasets are intended for research purposes only.
## Dataset Description
- **Repository:** [Code](https://github.com/passing2961/KorEmpatheticDialogues)
- **Paper:** [Paper](https://koreascience.kr/article/CFKO202306643316560.pdf)
- **Point of Contact:** [Young-Jun Lee](mailto:yj2961@kaist.ac.kr)
## Dataset Summary
KorEmpatheticDialogues is a publicly available Korean empathetic dialogue dataset translated from the original [EmpatheticDialogues](https://github.com/facebookresearch/EmpatheticDialogues) dataset using the [DeepL](https://www.deepl.com/translator) translator API.
## Languages
Korean
## Dataset Structure
field | type | description
--- | --- | ---
`dialogue_id` | int | the identifier for the dialogue
`dialogue` | list of dict | the dialogue where each dict entry includes {utter_idx, utter, user_id}
`situation` | str | the emotional situation sentence
`emotion` | str | the emotion category (e.g., guilty, caring, etc.)
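Given the fields above, a record's turns can be recovered in order by sorting the dialogue list on `utter_idx`; a toy sketch with invented values:

```python
record = {
    'dialogue_id': 0,
    'dialogue': [
        {'utter_idx': 1, 'utter': '무슨 일이 있었나요?', 'user_id': 'listener'},
        {'utter_idx': 0, 'utter': '어제 일이 계속 마음에 걸려요.', 'user_id': 'speaker'},
    ],
    'situation': '...',
    'emotion': 'guilty',
}

# sort the list of dicts by utter_idx to restore turn order
turns = [t['utter'] for t in sorted(record['dialogue'], key=lambda t: t['utter_idx'])]
print(turns[0])  # 어제 일이 계속 마음에 걸려요.
```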
## Acknowledgements
This work was supported by a grant from the KAIST-KT joint research project through AI Tech Lab, Institute of Convergence Technology, funded by KT [Project No. G01230605, Development of Task-oriented Persona-based Dialogue Generation Combining Multi-modal Interaction and Knowledge Modeling].
### Citation
Please cite our work if you find the resources in this repository useful:
```
@inproceedings{lee2023language,
title={Language Model Evaluation Based on Korean-English Empathetic Dialogue Datasets and Personality},
author={Lee, Young-Jun and Hyeon, JongHwan and Lee, DoKyong and Sung, Joo-Won and Choi, Ho-Jin},
booktitle={Annual Conference on Human and Language Technology},
pages={312--318},
year={2023},
organization={Human and Language Technology}
}
```"
ChuGyouk/Ko-MTS-Dialog,"{""license"": ""cc-by-4.0"", ""task_categories"": [""text-generation""], ""language"": [""ko""], ""tags"": [""medical""]}","**The validation set and two test sets will also be updated soon.**
# MTS-Dialog
This is the repo for *Ko-MTS-Dialog*, a Korean-translated version of the MTS-Dialog dataset. MTS-Dialog is a collection of 1.7k short doctor-patient conversations and their corresponding summaries.
## Data Translation Process
I used DeepL for automatic translation and manually reviewed the results.
## Warning
There are some concerns and warnings for this dataset.
1. I recommend **NOT using this dataset for a Korean medical dialogue summarization task; use JUST THE DIALOGUES for training something like a chatbot**. I did not review the section_text in detail. More specifically, I've seen cases where ""noncontributory"" is translated as ""비기여"" and ""negative"" is translated as ""음수"" instead of ""부정"" or ""음성"".
2. There are cases where the translation becomes awkward due to some **fundamental differences between English and Korean**. These have not been modified during review (for consistency). **For example**, for the doctor's question ""And no shortness of breath or wheezing that you've noticed?(숨가쁨이나 쌕쌕거림은 없었나요?)"", the response in English is 'No', but in Korean it is natural to respond with ""네"".
3. The content may not be helpful due to **cultural differences between the US and Korea**. A simple **example** is units of measurement: temperature (°F vs °C) and weight (lb vs kg). **Also**, cannabis/marijuana is legal in some US states, though it is illegal in Korea.
4. **Some medical abbreviations are not translated well**. Some have been translated well (ex: COPD - 만성폐쇄성폐질환), but others have been translated incorrectly (ex: CVA - 심근경색) or remain untranslated (ex: MCV).
## Example
```
""ID"": 990,
""section_text"": ""This is a 1-year-old male who comes in with a cough and congestion for the past two to three weeks.
Started off as a congestion but then he started coughing about a week ago.
Cough has gotten worsen. Mother was also worried.
He had Pop Can just three days ago and she never found the top of that and was wondering if he had swallowed that,
but his breathing has not gotten worse since that happened. He is not running any fevers."",
""섹션_텍스트"": ""지난 2~3주 동안 기침과 코막힘 증상으로 내원한 1세 남자 아이입니다.
처음에는 코막힘으로 시작했지만 약 일주일 전부터 기침을 시작했습니다.
기침이 더 심해졌습니다. 어머니도 걱정이 많으셨어요.
3일 전에 팝캔을 먹었는데 뚜껑을 찾지 못해 삼킨 건 아닌지 걱정했는데,
그 이후로 호흡이 더 심해지지는 않았어요. 열도 나지 않아요."",
""대화"": ""게스트_가족: 일주일째 기침 중입니다.\n
의사: 코막힘은 어떤가요? 일주일 이상인가요?\n
게스트_가족: 제가 그렇게 말했나요? 2~3주라고 했어요. 너무 걱정돼요. 이제 겨우 한 살이에요. \n
의사: 걱정하지 마세요, 처음부터 다 말씀해 주실 수 있나요? \n
게스트_가족: 네, 처음에는 코막힘으로 시작했는데 일주일 전부터 기침을 하기 시작하더니 점점 심해지고 있습니다.\n
의사: 좋아요, 또 어떤가요?\n
게스트_가족: 사실 3일 전에 팝캔을 먹었는데 뚜껑을 찾지 못해서 혹시 삼킨 건 아닌지 궁금합니다.\n
의사: 혹시 호흡에 변화를 보셨나요?\n
게스트_가족: 그 이후로 호흡이 더 나빠지지는 않았습니다.\n
의사: 열은 없나요? \n
게스트_가족: 아니요, 열은 없습니다.\n
의사: 네."",
""dialogue"": ""Guest_family: He is coughing for one week now.\n
Doctor: How about any congestion? Is it one week or more?\n
Guest_family: No did I say that? I meant two to three weeks. I am so worried. He is just one year. \n
Doctor: Don't worry let me see, can you tell me everything from the beginning?\n
Guest_family: Sure. It started off as a congestion, but then he started coughing about a week ago and it is getting worse.\n
Doctor: Okay, what else?\n
Guest_family: Actually, he had Pop Can just three days ago and I never found the top of that and was wondering if he had swallowed that.\n
Doctor: It is possible, have you seen any change in his breathing?\n
Guest_family: His breathing has not gotten worse since that happened.\n
Doctor: Any fever?\n
Guest_family: No. No fever.\n
Doctor: Okay.""
```
## Reference
```
@inproceedings{mts-dialog,
title = {An Empirical Study of Clinical Note Generation from Doctor-Patient Encounters},
author = ""Ben Abacha, Asma and
Yim, Wen-wai and
Fan, Yadan and
Lin, Thomas"",
booktitle = ""Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics"",
month = may,
year = ""2023"",
address = ""Dubrovnik, Croatia"",
publisher = ""Association for Computational Linguistics"",
url = ""https://aclanthology.org/2023.eacl-main.168"",
pages = ""2291--2302""
}
```"
Taegyuu/KoAlpaca-v1.1a,"{""dataset_info"": {""features"": [{""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 23371027, ""num_examples"": 21155}], ""download_size"": 12856014, ""dataset_size"": 23371027}, ""task_categories"": [""text-generation""], ""language"": [""ko""], ""tags"": [""KoAlpaca""], ""pretty_name"": ""KoAlpaca-v1.1a""}","# Dataset Card for ""KoAlpaca-v1.1a""
## Project Repo
- Github Repo: [Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca)
## How to use
```python
>>> from datasets import load_dataset
>>> ds = load_dataset(""beomi/KoAlpaca-v1.1a"", split=""train"")
>>> ds
Dataset({
features: ['instruction', 'output', 'url'],
num_rows: 21155
})
```
```python
>>> ds[0]
{'instruction': '양파는 어떤 식물 부위인가요? 그리고 고구마는 뿌리인가요?',
'output': '양파는 잎이 아닌 식물의 줄기 부분입니다. 고구마는 식물의 뿌리 부분입니다. \n\n식물의 부위의 구분에 대해 궁금해하는 분이라면 분명 이 질문에 대한 답을 찾고 있을 것입니다. 양파는 잎이 아닌 줄기 부분입니다. 고구마는 다른 질문과 답변에서 언급된 것과 같이 뿌리 부분입니다. 따라서, 양파는 식물의 줄기 부분이 되고, 고구마는 식물의 뿌리 부분입니다.\n\n 덧붙이는 답변: 고구마 줄기도 볶아먹을 수 있나요? \n\n고구마 줄기도 식용으로 볶아먹을 수 있습니다. 하지만 줄기 뿐만 아니라, 잎, 씨, 뿌리까지 모든 부위가 식용으로 활용되기도 합니다. 다만, 한국에서는 일반적으로 뿌리 부분인 고구마를 주로 먹습니다.',
'url': 'https://kin.naver.com/qna/detail.naver?d1id=11&dirId=1116&docId=55320268'}"
CanariaView/GlobalCopperDemandForecastingDataset,"{""task_categories"": [""time-series-forecasting""], ""language"": [""en"", ""ko""], ""tags"": [""mining"", ""LSTM"", ""TimeSeries"", ""CanariaView""]}","# CanariaView Global Copper Demand Forecasting Dataset
## Description
This dataset comprises economic and industrial indicators vital for constructing a copper demand forecasting model.
Coverage period: monthly data from January 1995 to March 2023, a total of 339 months.
Column Descriptions and Sources:
- `HSI_value (US Housing Starts Index)`: Y-Chart
- `CCI_value (Consumer Confidence Index)`: OECD
- `IPI_value (Industrial Production Total Index)`: FRED
- `GDPC_value (Real Gross Domestic Product)`: FRED
- `Copper price`: MacroTrends
Preprocessing Methodology and Data Collection Details:
- Comprehensive analysis of data structure followed by essential preprocessing.
- Appropriate handling of missing values.
- Daily and quarterly data uniformly expanded to a monthly timescale for consistency.
- Daily data (e.g., Copper price) and quarterly data (e.g., GDPC_value)
- Dependent variable data used in the model was available from 1995, guiding the collection of the independent variables (this dataset) from that year.
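The expansion to a monthly timescale can be sketched with pandas (the exact procedure is not stated in the card, so the choices below are assumptions: forward-filling quarterly values and averaging daily values):

```python
import pandas as pd

# Quarterly series (e.g., GDPC_value): upsample to month starts and
# forward-fill, so each quarter's value repeats for its months.
quarterly = pd.Series([100.0, 104.0],
                      index=pd.to_datetime(['1995-01-01', '1995-04-01']))
monthly_from_q = quarterly.resample('MS').ffill()

# Daily series (e.g., Copper price): collapse to one value per month
# (here, the monthly mean).
daily = pd.Series(range(59),
                  index=pd.date_range('1995-01-01', periods=59, freq='D'))
monthly_from_d = daily.resample('MS').mean()

print(monthly_from_q)
print(monthly_from_d)
```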
## 한국어 설명
본 데이터셋은 구리 수요 예측 모델 구축을 위한 경제지표 및 산업지표로 구성되었습니다.
기간: 1995년 1월~2023년 3월(월별), 총 339개월.
컬럼 설명 및 출처
- `HSI_value (미국 주택착공지수)`: Y-Chart
- `CCI_value (미국 소비자신뢰지수)`: OECD
- `IPI_value (미국 산업생산자지수)`: FRED
- `GDPC_value (미국 실질 GDP)`: FRED
- `Copper price (구리 가격)`: MacroTrends
데이터 전처리 및 수집 방법:
- 데이터 구조 분석 및 전처리 과정 수행.
- 결측치 처리.
- 일별 및 분기별 자료는 월별 데이터로의 확장을 통해 일관된 시계열 데이터로 통합.
- 일별 자료 (구리 가격), 분기별 자료 (GDPC_value)
- 수요 모델에 사용된 종속변수 데이터가 1995년부터 확보되어 독립변수인 본 데이터셋도 1995년도부터 수집함."
4n3mone/mmmlu_kor,"{""language"": [""ko""], ""license"": ""mit"", ""task_categories"": [""question-answering""], ""dataset_info"": {""features"": [{""name"": ""Unnamed: 0"", ""dtype"": ""int64""}, {""name"": ""Question"", ""dtype"": ""string""}, {""name"": ""A"", ""dtype"": ""string""}, {""name"": ""B"", ""dtype"": ""string""}, {""name"": ""C"", ""dtype"": ""string""}, {""name"": ""D"", ""dtype"": ""string""}, {""name"": ""Answer"", ""dtype"": ""string""}, {""name"": ""Subject"", ""dtype"": ""string""}]}}","# MMMLU_KOREAN
This dataset is the Korean subset of the [openai/MMMLU](https://huggingface.co/datasets/openai/MMMLU) dataset.
---
# Multilingual Massive Multitask Language Understanding (MMMLU)
The MMLU is a widely recognized benchmark of general knowledge attained by AI models. It covers a broad range of topics from 57 different categories, covering elementary-level knowledge up to advanced professional subjects like law, physics, history, and computer science.
We translated the MMLU’s test set into 14 languages using professional human translators. Relying on human translators for this evaluation increases confidence in the accuracy of the translations, especially for low-resource languages like Yoruba. We are publishing the professional human translations and the code we use to run the evaluations.
This effort reflects our commitment to improving the multilingual capabilities of AI models, ensuring they perform accurately across languages, particularly for underrepresented communities. By prioritizing high-quality translations, we aim to make AI technology more inclusive and effective for users worldwide.
## Locales
MMMLU contains the MMLU test set translated into the following locales:
* AR_XY (Arabic)
* BN_BD (Bengali)
* DE_DE (German)
* ES_LA (Spanish)
* FR_FR (French)
* HI_IN (Hindi)
* ID_ID (Indonesian)
* IT_IT (Italian)
* JA_JP (Japanese)
* KO_KR (Korean)
* PT_BR (Brazilian Portuguese)
* SW_KE (Swahili)
* YO_NG (Yoruba)
* ZH_CN (Simplified Chinese)
## Sources
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., & Steinhardt, J. (2021). [*Measuring Massive Multitask Language Understanding*](https://arxiv.org/abs/2009.03300).
[OpenAI Simple Evals GitHub Repository](https://github.com/openai/simple-evals)"
youjunhyeok/SkunkworksAI-reasoning-0.01-ko,"{""language"": [""ko""], ""task_categories"": [""text-generation""], ""dataset_info"": {""features"": [{""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""reasoning"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}, {""name"": ""reasoning_chains"", ""list"": [{""name"": ""step"", ""dtype"": ""int64""}, {""name"": ""thought"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 126390632, ""num_examples"": 29857}], ""download_size"": 60009967, ""dataset_size"": 126390632}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""tags"": [""instruction"", ""korean"", ""reasoning""], ""license"": ""apache-2.0""}","Translated the [SkunkworksAI/reasoning-0.01](https://huggingface.co/datasets/SkunkworksAI/reasoning-0.01) dataset using the [nayohan/llama3-instrucTrans-enko-8b](https://huggingface.co/nayohan/llama3-instrucTrans-enko-8b) model.
Thanks to [SkunkworksAI](https://huggingface.co/SkunkworksAI) and [nayohan](https://huggingface.co/nayohan).
---
# Original
# reasoning-0.01 subset
A synthetic dataset of reasoning chains for a wide variety of tasks.
We leverage data like this across multiple reasoning experiments/projects.
Stay tuned for reasoning models and more data.
Thanks to Hive Digital Technologies (https://x.com/HIVEDigitalTech) for their compute support in this project and beyond."
Heng666/TED2020-TW-Corpus,{},
klei22/korean-english-jamon-parallel-corpora,"{""license"": ""cc-by-sa-3.0"", ""task_categories"": [""translation""], ""language"": [""ko"", ""en""]}","This dataset is modified from korean-english-parallel-corpora, adding jamo-style phonetic content."
KETI-AIR/kor_ropes,"{""pretty_name"": ""ROPES"", ""language"": [""ko""], ""license"": [""cc-by-4.0""], ""size_categories"": [""10K
| Image | MMStar | K-MMStar |
|:---:|:---:|:---:|
| ![]() | question: Which option describe the object relationship in the image correctly? Options: A: The suitcase is on the book., B: The suitcase is beneath the cat., C: The suitcase is beneath the bed., D: The suitcase is beneath the book. | question: 이미지에서 물체들의 관계를 올바르게 설명하는 옵션은 무엇인가요? Options: A: 가방이 책 위에 있다., B: 가방이 고양이 아래에 있다., C: 가방이 침대 아래에 있다., D: 가방이 책 아래에 있다. |
## Inference Prompt
```
{question}
```
## Results
Below are the evaluation results of various vision-language models, including [VARCO-VISION-14B](https://huggingface.co/NCSOFT/VARCO-VISION-14B) on K-MMStar.
| | VARCO-VISION-14B | Pangea-7B | Pixtral-12B | Molmo-7B-D | Qwen2-VL-7B-Instruct | LLaVA-One-Vision-7B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| K-MMStar | **57.33** | 35.00 | 23.93 | 47.40 | 50.67 | 54.00 |
## References
[1] Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, and Feng Zhao. Are we on the right way for evaluating large vision-language models? In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=evP9mxNNxJ.
## Citation
If you use K-MMStar in your research, please cite the following:
```bibtex
@misc{ju2024varcovisionexpandingfrontierskorean,
title={VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models},
author={Jeongho Ju and Daeyoung Kim and SunYoung Park and Youngjune Kim},
year={2024},
eprint={2411.19103},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2411.19103},
}
```"
ziozzang/deepl-trans-ES-KO,"{""task_categories"": [""translation""], ""language"": [""ko"", ""es""]}","This dataset consists of Wikipedia articles translated with DeepL, auto-aggregated.
# String/Corpus pairs
From ES/Spanish to KO/Korean.
# Quality Filtering
- Stripped all HTML tags.
- Removed reference and annotation marks.
- Filtered by string length.
---
The strings/corpus were aggregated from Wikipedia (pt) and translated using DeepL.
All data collected by Jioh L. Jung.
license: mit
---"
jayliqinzhang/Test_mumospee,"{""license"": ""cc-by-nc-4.0"", ""language"": [""de"", ""en"", ""zh"", ""ja"", ""ko"", ""fr""], ""pretty_name"": ""tiny_demo"", ""size_categories"": [""n<1K""]}","# Mumospee tiny demo
This is a tiny Mumospee demo."
UICHEOL-HWANG/InterView_Datasets,"{""language"": [""ko""], ""dataset_info"": {""features"": [{""name"": ""experience"", ""dtype"": ""string""}, {""name"": ""ageRange"", ""dtype"": ""string""}, {""name"": ""occupation"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 88047841, ""num_examples"": 68078}, {""name"": ""valid"", ""num_bytes"": 9067188, ""num_examples"": 8028}], ""download_size"": 48282576, ""dataset_size"": 97115029}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""valid"", ""path"": ""data/valid-*""}]}]}","# AiHub Datasets
origin : https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=71592
Import the data:
```python
from datasets import load_dataset
# train 데이터셋 로드
dataset = load_dataset(""UICHEOL-HWANG/InterView_Datasets"", split=""train"")
print(dataset)
# valid 데이터셋 로드
dataset = load_dataset(""UICHEOL-HWANG/InterView_Datasets"", split=""valid"")
print(dataset)
```
```bash
Dataset({
features: ['experience', 'ageRange', 'occupation', 'question', 'answer'],
num_rows: 8028
})
```"
sosoai/dataset_ko_Ultrafeedback_binarized_test,{},
csujeong/KoAlpaca-v1.1a,"{""dataset_info"": {""features"": [{""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}, {""name"": ""url"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 23371027, ""num_examples"": 21155}], ""download_size"": 12856014, ""dataset_size"": 23371027}, ""task_categories"": [""text-generation""], ""language"": [""ko""], ""tags"": [""KoAlpaca""], ""pretty_name"": ""KoAlpaca-v1.1a""}","# Dataset Card for ""KoAlpaca-v1.1a""
## Project Repo
- Github Repo: [Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca)
## How to use
```python
>>> from datasets import load_dataset
>>> ds = load_dataset(""beomi/KoAlpaca-v1.1a"", split=""train"")
>>> ds
Dataset({
features: ['instruction', 'output', 'url'],
num_rows: 21155
})
```
```python
>>> ds[0]
{'instruction': '양파는 어떤 식물 부위인가요? 그리고 고구마는 뿌리인가요?',
'output': '양파는 잎이 아닌 식물의 줄기 부분입니다. 고구마는 식물의 뿌리 부분입니다. \n\n식물의 부위의 구분에 대해 궁금해하는 분이라면 분명 이 질문에 대한 답을 찾고 있을 것입니다. 양파는 잎이 아닌 줄기 부분입니다. 고구마는 다른 질문과 답변에서 언급된 것과 같이 뿌리 부분입니다. 따라서, 양파는 식물의 줄기 부분이 되고, 고구마는 식물의 뿌리 부분입니다.\n\n 덧붙이는 답변: 고구마 줄기도 볶아먹을 수 있나요? \n\n고구마 줄기도 식용으로 볶아먹을 수 있습니다. 하지만 줄기 뿐만 아니라, 잎, 씨, 뿌리까지 모든 부위가 식용으로 활용되기도 합니다. 다만, 한국에서는 일반적으로 뿌리 부분인 고구마를 주로 먹습니다.',
'url': 'https://kin.naver.com/qna/detail.naver?d1id=11&dirId=1116&docId=55320268'}
```"
opencompass/mmmlu_lite,"{""task_categories"": [""question-answering""], ""configs"": [{""config_name"": ""AR_XY"", ""data_files"": [{""split"": ""test"", ""path"": ""AR-XY/test.jsonl""}]}, {""config_name"": ""BN_BD"", ""data_files"": [{""split"": ""test"", ""path"": ""BN-BD/test.jsonl""}]}, {""config_name"": ""DE_DE"", ""data_files"": [{""split"": ""test"", ""path"": ""DE-DE/test.jsonl""}]}, {""config_name"": ""ES_LA"", ""data_files"": [{""split"": ""test"", ""path"": ""ES-LA/test.jsonl""}]}, {""config_name"": ""FR_FR"", ""data_files"": [{""split"": ""test"", ""path"": ""FR-FR/test.jsonl""}]}, {""config_name"": ""HI_IN"", ""data_files"": [{""split"": ""test"", ""path"": ""HI-IN/test.jsonl""}]}, {""config_name"": ""ID_ID"", ""data_files"": [{""split"": ""test"", ""path"": ""ID-ID/test.jsonl""}]}, {""config_name"": ""IT_IT"", ""data_files"": [{""split"": ""test"", ""path"": ""IT-IT/test.jsonl""}]}, {""config_name"": ""JA_JP"", ""data_files"": [{""split"": ""test"", ""path"": ""JA-JP/test.jsonl""}]}, {""config_name"": ""KO_KR"", ""data_files"": [{""split"": ""test"", ""path"": ""KO-KR/test.jsonl""}]}, {""config_name"": ""PT_BR"", ""data_files"": [{""split"": ""test"", ""path"": ""PT-BR/test.jsonl""}]}, {""config_name"": ""SW_KE"", ""data_files"": [{""split"": ""test"", ""path"": ""SW-KE/test.jsonl""}]}, {""config_name"": ""YO_NG"", ""data_files"": [{""split"": ""test"", ""path"": ""YO-NG/test.jsonl""}]}, {""config_name"": ""ZH_CN"", ""data_files"": [{""split"": ""test"", ""path"": ""ZH-CN/test.jsonl""}]}], ""language"": [""ar"", ""bn"", ""de"", ""es"", ""fr"", ""hi"", ""id"", ""it"", ""ja"", ""ko"", ""pt"", ""sw"", ""yo"", ""zh""], ""license"": ""mit""}","# MMMLU-Lite
## Introduction
A lite version of the MMMLU dataset, a community version of MMMLU maintained by [OpenCompass](https://github.com/open-compass/opencompass). Because the original dataset is large (about 200k questions), we created a lite version to make it easier to use. We sample 25 examples from each language-subject pair in the original dataset with a fixed seed to ensure reproducibility, yielding 19,950 examples in the lite version, about 10% of the original dataset.
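The sampling described above can be sketched as follows (a toy reconstruction; the actual seed and grouping logic live in the OpenCompass codebase):

```python
import random
from collections import defaultdict

def sample_lite(examples, per_group=25, seed=0):
    """Sample up to per_group examples from each (language, subject) group,
    using a fixed seed so the selection is reproducible."""
    groups = defaultdict(list)
    for ex in examples:
        groups[(ex['language'], ex['subject'])].append(ex)
    rng = random.Random(seed)
    lite = []
    for key in sorted(groups):          # stable group order
        pool = groups[key]
        lite.extend(rng.sample(pool, min(per_group, len(pool))))
    return lite

# Toy data: 100 questions per subject in two locales.
data = [{'language': lang, 'subject': subj, 'id': i}
        for lang in ('KO_KR', 'JA_JP')
        for subj in ('law', 'physics')
        for i in range(100)]
lite = sample_lite(data)
print(len(lite))  # 100 (25 examples x 4 groups)
```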
## Dataset Description
Multilingual Massive Multitask Language Understanding (MMMLU)
The MMLU is a widely recognized benchmark of general knowledge attained by AI models. It covers a broad range of topics from 57 different categories, covering elementary-level knowledge up to advanced professional subjects like law, physics, history, and computer science.
We translated the MMLU’s test set into 14 languages using professional human translators. Relying on human translators for this evaluation increases confidence in the accuracy of the translations, especially for low-resource languages like Yoruba. We are publishing the professional human translations and the code we use to run the evaluations.
This effort reflects our commitment to improving the multilingual capabilities of AI models, ensuring they perform accurately across languages, particularly for underrepresented communities. By prioritizing high-quality translations, we aim to make AI technology more inclusive and effective for users worldwide.
MMMLU contains the MMLU test set translated into the following locales:
- AR_XY (Arabic)
- BN_BD (Bengali)
- DE_DE (German)
- ES_LA (Spanish)
- FR_FR (French)
- HI_IN (Hindi)
- ID_ID (Indonesian)
- IT_IT (Italian)
- JA_JP (Japanese)
- KO_KR (Korean)
- PT_BR (Brazilian Portuguese)
- SW_KE (Swahili)
- YO_NG (Yoruba)
- ZH_CN (Simplified Chinese)
## How to use
```python
from datasets import load_dataset
ds = load_dataset(""opencompass/mmmlu_lite"", ""AR_XY"")
```"
jungsoon/ComputerLiteracy,"{""language"": [""ko""], ""license"": ""apache-2.0"", ""task_categories"": [""multiple-choice""], ""tags"": [""Computer Proficiency Test Level 2"", ""computer proficiency Test""], ""dataset_info"": {""features"": [{""name"": ""subject"", ""dtype"": ""string""}, {""name"": ""subsection"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""choices"", ""dtype"": ""string""}, {""name"": ""user_input"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 389316, ""num_examples"": 738}, {""name"": ""test"", ""num_bytes"": 93611, ""num_examples"": 185}], ""download_size"": 213533, ""dataset_size"": 482927}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""test"", ""path"": ""data/test-*""}]}]}",
nayohan/coedit-ko,"{""dataset_info"": {""features"": [{""name"": ""src"", ""dtype"": ""string""}, {""name"": ""_id"", ""dtype"": ""string""}, {""name"": ""task"", ""dtype"": ""string""}, {""name"": ""tgt"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 22903490, ""num_examples"": 69071}], ""download_size"": 11307364, ""dataset_size"": 22903490}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""task_categories"": [""text-generation""], ""language"": [""ko""], ""tags"": [""instruction"", ""korean""]}","Translated [grammarly/coedit](https://huggingface.co/datasets/grammarly/coedit) using [nayohan/llama3-instrucTrans-enko-8b](https://huggingface.co/nayohan/llama3-instrucTrans-enko-8b).
This is a raw translated dataset that contains repetitive sentences generated by the model, so it needs to be filtered.
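One cheap way to filter the repetitive outputs mentioned above is to flag any `tgt` text in which the same sentence occurs more than a couple of times (a heuristic sketch; the sentence split and threshold are illustrative):

```python
import re
from collections import Counter

def is_repetitive(text: str, max_repeats: int = 2) -> bool:
    """Flag texts where any single sentence appears more than max_repeats times."""
    sentences = [s.strip() for s in re.split(r'[.!?。]\s*', text) if s.strip()]
    return any(n > max_repeats for n in Counter(sentences).values())

ok = '문장을 교정했습니다. 결과를 확인하세요.'
bad = '반복입니다. 반복입니다. 반복입니다. 반복입니다.'
print(is_repetitive(ok), is_repetitive(bad))  # False True
```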
```
@article{raheja2023coedit,
title={CoEdIT: Text Editing by Task-Specific Instruction Tuning},
author={Vipul Raheja and Dhruv Kumar and Ryan Koo and Dongyeop Kang},
year={2023},
eprint={2305.09857},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```"
davidkim205/Ko-Bench,{},
Saxo/alpaca_function_calling_dataset,"{""license"": ""apache-2.0"", ""language"": [""ko"", ""en""], ""tags"": [""alpaca format"", ""function calling"", ""dataset""]}","
AI 와 빅데이터 분석 전문 기업인 Linkbricks(www.linkbricks.com)의 데이터사이언티스트인 지윤성(Saxo) 박사가 만든 llm RAG를 위한 function calling 학습용 데이터셋으로 llam3 instruct format인 mzbac/function-calling-llama-3-format-v1.1 을 Alpaca Format으로 변경.
Converted from the Llama-3 instruct-format dataset mzbac/function-calling-llama-3-format-v1.1 to the Alpaca format; a function-calling training dataset for LLM RAG, created by Dr. Ji Yun Sung (Saxo), a data scientist at Linkbricks (www.linkbricks.com), a company specializing in AI and big data analytics.
JosephLee/science_textbook_elementary_kor_seed,{},"---
task_categories:
- question-answering
language:
- ko
pretty_name: test dataset
---"
HyaDoo/ko-voicephishing-binary-classification,"{""language"": [""ko""], ""license"": ""apache-2.0""}",
csujeong/Non_life_insurance,"{""language"": [""ko""]}",손해보험 데이터
richard-park/horangi-336K-Filtered-split,"{""dataset_info"": {""features"": [{""name"": ""conversation_id"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""response"", ""dtype"": ""string""}, {""name"": ""conversations"", ""list"": [{""name"": ""from"", ""dtype"": ""string""}, {""name"": ""value"", ""dtype"": ""string""}]}, {""name"": ""gen_input_configs"", ""struct"": [{""name"": ""pre_query_template"", ""dtype"": ""string""}]}, {""name"": ""intent"", ""dtype"": ""string""}, {""name"": ""knowledge"", ""dtype"": ""string""}, {""name"": ""difficulty"", ""dtype"": ""string""}, {""name"": ""difficulty_generator"", ""dtype"": ""string""}, {""name"": ""input_quality"", ""dtype"": ""string""}, {""name"": ""quality_explanation"", ""dtype"": ""string""}, {""name"": ""quality_generator"", ""dtype"": ""string""}, {""name"": ""task_category"", ""dtype"": ""string""}, {""name"": ""other_task_category"", ""sequence"": ""string""}, {""name"": ""task_category_generator"", ""dtype"": ""string""}, {""name"": ""llama_guard_2"", ""dtype"": ""string""}, {""name"": ""instruct_reward"", ""dtype"": ""float64""}, {""name"": ""reward_model"", ""dtype"": ""string""}, {""name"": ""language"", ""dtype"": ""string""}, {""name"": ""min_neighbor_distance"", ""dtype"": ""float64""}, {""name"": ""repeat_count"", ""dtype"": ""int64""}, {""name"": ""min_similar_conversation_id"", ""dtype"": ""string""}, {""name"": ""response_length"", ""dtype"": ""int64""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 835642107.5149903, ""num_examples"": 299725}, {""name"": ""test"", ""num_bytes"": 100263112.75487225, ""num_examples"": 35962}], ""download_size"": 385510612, ""dataset_size"": 935905220.2698625}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""test"", ""path"": ""data/test-*""}]}], ""language"": [""ko""]}","# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
A dataset for improving scores on the Horangi leaderboard.
- Formats supported by axolotl: alpaca, gpteacher (conversation format)
### File list and number of examples per file
1. **STEM_alpaca_data** - 53,960
2. **Applied Science_alpaca_data** - 75,948
3. **kullm-v2-modified** - 152,630
4. **KOR-OpenOrca-Platypus-v2** - 44,394
5. **klue_re_processed** - 32,470
6. **Ko_MMLU_ver0.3** - 221,051
7. **sentineg_dataset** - 3,649
8. **nia_2022_15-2_commonsense_TL** - 400,000
9. **HUMSS_alpaca_data** - 5,715
10. **converted_ko_lima_vicuna_dataset** - 1,030
11. **korSTS_dataset** - 5,691
12. **merge_kobest_dataset_15K** - 15,737
13. **klue_ner_processed** - 21,008
14. **Other_alpaca_data** - 44,977
**Total number of examples (all files combined):** 1,078,260
**Train/test split:** 300,000 / 36,000
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
### Direct Use
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Data Collection and Processing
[More Information Needed]
#### Who are the source data producers?
[More Information Needed]
### Annotations [optional]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
#### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]"
jr-d-analyst24/naver_review_sum,{},"---
license: apache-2.0
language:
- ko
tags:
- blog
- review
size_categories:
- n<1K
---"
jonghwanhyeon/korean-emotion-lexicon,"{""license"": ""cc0-1.0"", ""language"": [""ko""], ""tags"": [""korean"", ""emotion""], ""pretty_name"": ""Korean Emotion Lexicon"", ""size_categories"": [""n<1K""]}","# Korean Emotion Lexicon
This repository contains a comprehensive dataset of Korean emotion lexicons developed through psychological research conducted by [In-jo Park](mailto:park73@jbnu.ac.kr) and [Kyung-Hwan Min](mailto:minhwan@plaza.snu.ac.kr) from Seoul National University. The dataset includes several key measures for each emotion lexicon:
- `lexicon`: The lexicon that represents a specific emotion in the Korean language.
- `representative`: The degree to which the lexicon is a representative example of the emotion.
- `prototypicality`: A rating of how appropriate the lexicon is as an emotion lexicon.
- `familiarity`: A rating of how familiar the lexicon is.
- `valence`: The positivity or negativity of the emotion.
- `arousal`: The activation or intensity level of the emotion.
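As a usage sketch, the `valence` and `arousal` fields make it straightforward to select, say, negative high-arousal terms (the rows and thresholds below are illustrative placeholders, not the published ratings):

```python
# Toy rows shaped like the fields described above; the numbers are
# illustrative placeholders, not ratings from the published lexicon.
rows = [
    {'lexicon': '기쁘다', 'valence': 6.5, 'arousal': 5.0},
    {'lexicon': '분노하다', 'valence': 1.2, 'arousal': 6.3},
    {'lexicon': '지루하다', 'valence': 2.8, 'arousal': 1.9},
]

def negative_high_arousal(rows, valence_max=3.0, arousal_min=4.0):
    """Select terms that are both negative (low valence) and activated."""
    return [r['lexicon'] for r in rows
            if r['valence'] <= valence_max and r['arousal'] >= arousal_min]

print(negative_high_arousal(rows))  # ['분노하다']
```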
## Acknowledgments
This dataset was created based on the research published in the *Korean Journal of Social and Personality Psychology*, specifically the study titled *[""Making a List of Korean Emotion Terms and Exploring Dimensions Underlying Them""](https://accesson.kr/ksppa/v.19/1/109/25622)*. Special thanks to the authors **[In-jo Park](mailto:park73@jbnu.ac.kr)** and **[Kyung-Hwan Min](mailto:minhwan@plaza.snu.ac.kr)** for their significant contributions to emotion research in the Korean context.
---
# 한국어 감정 단어
본 저장소는 서울대학교 소속의 박인조 및 민경환의 연구원의 연구를 통해 개발된 한국어 감정 어휘 데이터셋을 포함하고 있습니다. 해당 데이터셋은 각 감정 어휘에 대하여 다음과 같은 주요 측정치를 포함하고 있습니다:
- `어휘` (`lexicon`): 특정 감정을 나타내는 한국어 어휘.
- `대표성` (`representative`): 해당 어휘가 감정을 대표하는 정도.
- `원형성` (`prototypicality`): 해당 어휘가 감정 단어로 얼마나 적당한지에 대한 평점.
- `친숙성` (`familiarity`): 해당 어휘가 얼마나 친숙한지에 대한 평점.
- `쾌/불쾌` (`valence`): 감정의 긍정적 혹은 부정적 정도.
- `활성화` (`arousal`): 감정의 활성화 수준.
## 감사의 말
본 데이터셋은 *한국심리학회지: 사회 및 성격*에 게재된 *[""한국어 감정단어의 목록 작성과 차원 탐색""](https://accesson.kr/ksppa/v.19/1/109/25622)* 연구를 기반으로 작성되었습니다. 한국어 감정 연구에 기여한 **[박인조](mailto:park73@jbnu.ac.kr)** 및 **[민경환](mailto:minhwan@plaza.snu.ac.kr)** 연구원에게 감사드립니다."
hyun5ooo/hansoldeco,{},"---
task_categories:
- table-question-answering
language:
- ko
size_categories:
- 1M
- The [ragtruth-qa](https://huggingface.co/datasets/flowaicom/formatted-ragtruth-qa) dataset translated into Korean using gpt-4o.
- The [ragtruth-qa-ko](https://huggingface.co/datasets/Yettiesoft/ragtruth-qa-ko) dataset converted to the Alpaca format.
## Dataset Details
### Dataset Description
- **Curated by:** [More Information Needed]
- **Language(s) (NLP):** [Korean]
- **License:** [Undecided]
### Dataset Sources [optional]
- **Repository:** [https://huggingface.co/datasets/flowaicom/formatted-ragtruth-qa]
## Uses
### Direct Use
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Data Collection and Processing
[More Information Needed]
#### Who are the source data producers?
[More Information Needed]
### Annotations [optional]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
#### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]"
ChuGyouk/GenMedGPT-5k-ko,"{""license"": ""mit"", ""task_categories"": [""text-generation""], ""language"": [""ko""], ""tags"": [""medical""]}","# GenMedGPT-5k-ko
Original Data: [ChatDoctor](https://github.com/Kent0n-Li/ChatDoctor/blob/main/README.md).
Translated into Korean by DeepL Pro."
youjunhyeok/Magpie-Llama-3.1-Pro-500K-Filtered-ko,"{""language"": [""ko""], ""size_categories"": [""100KClick Here
High-quality instruction data is critical for aligning large language models (LLMs). Although some models, such as Llama-3-Instruct, have open weights, their alignment data remain private, which hinders the democratization of AI. High human labor costs and a limited, predefined scope for prompting prevent existing open-source data creation methods from scaling effectively, potentially limiting the diversity and quality of public alignment datasets. Is it possible to synthesize high-quality instruction data at scale by extracting it directly from an aligned LLM? We present a self-synthesis method for generating large-scale alignment data named Magpie. Our key observation is that aligned LLMs like Llama-3-Instruct can generate a user query when we input only the left-side templates up to the position reserved for user messages, thanks to their auto-regressive nature. We use this method to prompt Llama-3-Instruct and generate 4 million instructions along with their corresponding responses. We perform a comprehensive analysis of the extracted data and select 300K high-quality instances. To compare Magpie data with other public instruction datasets, we fine-tune Llama-3-8B-Base with each dataset and evaluate the performance of the fine-tuned models. Our results indicate that in some tasks, models fine-tuned with Magpie perform comparably to the official Llama-3-8B-Instruct, despite the latter being enhanced with 10 million data points through supervised fine-tuning (SFT) and subsequent feedback learning. We also show that using Magpie solely for SFT can surpass the performance of previous public datasets utilized for both SFT and preference optimization, such as direct preference optimization with UltraFeedback. This advantage is evident on alignment benchmarks such as AlpacaEval, ArenaHard, and WildBench.
## Dataset Details
This dataset is generated by [Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) using [Magpie](https://huggingface.co/Magpie-Align). Please refer to our [paper](https://arxiv.org/abs/2406.08464) and [codebase](https://github.com/magpie-align/magpie) for implementation details.
This is the filtered data. Please see below for the filter design. Please do not use **Magpie-Pro-300K-Filtered** and **Magpie-Pro-MT-300K** to fine-tune the model simultaneously as they are largely the same for the first turn!
You can find the model fine-tuned using this dataset [here](https://huggingface.co/Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-v0.1).
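The pre-query idea described in the abstract can be sketched in a few lines of Python. The template string below is the standard Llama-3 user-turn prefix; the `completion` value is a hypothetical model output used only for illustration, not real generated data:

```python
# Sketch of the Magpie pre-query idea (illustrative, not the authors' code):
# feeding only the chat template prefix, cut off right where the user message
# would start, makes an aligned auto-regressive model generate a user query.

# Llama-3 user-turn prefix (ends exactly where the user message would begin).
PRE_QUERY_PROMPT = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"

def extract_instruction(completion: str, eot: str = "<|eot_id|>") -> str:
    """The model's continuation up to its end-of-turn token is the
    synthesized instruction."""
    return completion.split(eot, 1)[0].strip()

# Hypothetical model continuation, for illustration only:
completion = "What are three tips for improving sleep quality?<|eot_id|>"
instruction = extract_instruction(completion)
```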
## Filter Setups
- **Input Quality**: >= average
- **Instruction Reward**: >=-10
- Remove repetitions and incomplete instructions (e.g., those ending with a colon)
- Choose 300K data with the longest responses
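A minimal sketch of how these setups could be applied, assuming the field names from this dataset's metadata (`input_quality`, `instruct_reward`, `instruction`, `response`); the quality ordering and code are illustrative, not the authors' implementation:

```python
# Illustrative filter following the setups above (not the authors' exact code).
# Assumed ordering of the input_quality labels:
QUALITY_ORDER = {"very poor": 0, "poor": 1, "average": 2, "good": 3, "excellent": 4}

def keep(row: dict) -> bool:
    if QUALITY_ORDER.get(row["input_quality"], -1) < QUALITY_ORDER["average"]:
        return False                      # Input Quality >= average
    if row["instruct_reward"] < -10:
        return False                      # Instruction Reward >= -10
    if row["instruction"].rstrip().endswith(":"):
        return False                      # drop incomplete instructions
    return True

def top_by_response_length(rows: list[dict], k: int) -> list[dict]:
    # Keep the k rows with the longest responses.
    return sorted(rows, key=lambda r: len(r["response"]), reverse=True)[:k]
```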
## Dataset Navigation 🧭
|Model Name | Dataset | Type | Description |
|-------------|:-------|:-------|:-------|
| [Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) | [Magpie-Pro-1M](https://huggingface.co/datasets/Magpie-Align/Llama-3-Magpie-Pro-1M-v0.1) | SFT | 1M Raw conversations built with Meta Llama 3 70B.
| [Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) | [Magpie-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered) | SFT | Apply a filter and select 300K high quality conversations.
| [Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) | [Magpie-Pro-MT-300K](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-MT-300K-v0.1) | SFT | Select 300K difficult questions and extend to multi-turn conversations.
| [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) | [Magpie-Air-3M](https://huggingface.co/datasets/Magpie-Align/Llama-3-Magpie-Air-3M-v0.1) | SFT | 3M Raw conversations built with Meta Llama 3 8B.
| [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) | [Magpie-Air-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Air-300K-Filtered) | SFT | Apply a filter and select 300K high quality data.
| [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) | [Magpie-Air-MT-300K](https://huggingface.co/datasets/Magpie-Align/Magpie-Air-MT-300K-v0.1) | SFT | Select 300K difficult questions and extend to multi-turn conversations."
Ssua/real_data,{},"---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 22277
num_examples: 200
download_size: 24576
dataset_size: 22277
task_categories:
- text-generation
language:
- ko
tags:
- mgame
pretty_name: mgame-dataset
license: unknown
---"
won75/text_to_sql_ko,"{""dataset_info"": {""features"": [{""name"": ""TEXT"", ""dtype"": ""string""}, {""name"": ""MySQL"", ""dtype"": ""string""}, {""name"": ""Schema"", ""dtype"": ""string""}, {""name"": ""Difficulty"", ""dtype"": ""int64""}, {""name"": ""JOIN Count"", ""dtype"": ""int64""}, {""name"": ""DB Count"", ""dtype"": ""int64""}, {""name"": ""Type"", ""dtype"": ""string""}, {""name"": ""Main Syntax"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3162282, ""num_examples"": 3299}], ""download_size"": 710775, ""dataset_size"": 3162282}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""license"": ""cc-by-nc-4.0"", ""language"": [""ko""]}","# Korean Text to MySQL Dataset
## Dataset Summary
Korean Text to MySQL is a dataset comprising approximately 3,300 samples generated using OpenAI's gpt-4o model. This dataset is designed to train models that convert natural language questions in Korean into MySQL queries. The data generation process was inspired by the Self-Instruct method and followed the steps outlined below.
### Data Generation Process
**1. Creation of SEED Dataset**
- Approximately 100 SEED samples were created and refined through a thorough review process to finalize the initial dataset.
**2. Data Augmentation**
- **Prompt Construction**: For each prompt, three randomly selected samples from the SEED dataset and one sample generated by the model in the previous augmentation step were used as examples. Additionally, one of 15 industry categories was randomly selected and included to encourage the model to produce diverse and accurate data.
- **Duplicate Check**: To ensure data diversity, duplicates were identified and removed by measuring Rouge-L F-scores between natural language questions.
**3. Final Review**
- The final dataset was reviewed and refined using the gpt-4o model to enhance quality.
A total of approximately 3,300 samples were generated at a cost of less than $70.
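The Rouge-L duplicate check in step 2 can be sketched as a token-level longest-common-subsequence F-score between two natural language questions. This is a generic reconstruction, and the duplicate threshold below is an assumed value; the card does not state one:

```python
# Illustrative Rouge-L F-score via longest common subsequence (token level);
# not the exact implementation used for the duplicate check.
def lcs_len(a: list[str], b: list[str]) -> int:
    # Classic dynamic-programming LCS length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l_f(cand: str, ref: str) -> float:
    c, r = cand.split(), ref.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    p, rec = lcs / len(c), lcs / len(r)
    return 2 * p * rec / (p + rec)

def is_duplicate(q1: str, q2: str, threshold: float = 0.7) -> bool:
    # Threshold is an assumed value; the card does not specify one.
    return rouge_l_f(q1, q2) >= threshold
```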
## Dataset Structure
### Data Instances
```json
{
""TEXT"": ""가장 최근에 주문한 고객의 이름과 구매 날짜는?"",
""MySQL"": ""SELECT c.customer_name, o.order_datetime
FROM common_db.customers c
JOIN common_db.orders o ON c.customer_id = o.customer_id
ORDER BY o.order_datetime DESC LIMIT 1;"",
""Schema"": ""DB: common_db
TABLE DDL: CREATE TABLE `customers` ( customer_id BIGINT NOT NULL,
customer_name VARCHAR(100), email VARCHAR(100), phone VARCHAR(20),
customer_grade VARCHAR(20), created_at DATETIME, is_active TINYINT(1),
last_purchase_date DATETIME, PRIMARY KEY (customer_id) )
DB: common_db
TABLE DDL: CREATE TABLE `orders` ( order_id BIGINT NOT NULL,
customer_id BIGINT, order_datetime DATETIME, total_amount DECIMAL(10, 2),
order_status VARCHAR(50), payment_method VARCHAR(50), channel VARCHAR(50),
created_at DATETIME, PRIMARY KEY (order_id), CONSTRAINT `orders_ibfk_1`
FOREIGN KEY(customer_id) REFERENCES `customers` (customer_id) )"",
""Difficulty"": 2,
""JOIN Count"": 1,
""DB Count"": 1,
""Type"": ""SELECT"",
""Main Syntax"": [""JOIN"", ""ORDER BY""]
}
```
**Note**: I have manually added line breaks for better readability.
### Data Fields
- **TEXT**: A natural language question in Korean.
- **MySQL**: The corresponding MySQL query generated from the natural language question.
- **Schema**: The names of the databases and the DDL (Data Definition Language) information for the tables used in the MySQL query.
- **Difficulty**: The difficulty level of generating the given MySQL query. It is primarily based on the number of Main Syntax used. If the JOIN Count exceeds 1, the excess count is added to the difficulty. The difficulty level ranges from 1 to 5.
- **JOIN Count**: The number of JOIN clauses used in the MySQL query.
- **DB Count**: The number of databases referenced in the MySQL query.
- **Type**: The type of the MySQL query. Currently, only 'SELECT' queries are included.
- **Main Syntax**: A list of key MySQL syntax used in the MySQL query.
```python
key_syntax = [""COUNT"", ""AVG"", ""MIN"", ""MAX"", ""JOIN"", ""GROUP BY"", ""ORDER BY"", ""SUBQUERY"", ""WHERE"", ""LIKE"", ""CASE/WHEN"", ""DISTINCT"", ""UNION"", ""WITH""]
```
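Following the description of the Difficulty field above, the heuristic can be sketched as the number of Main Syntax items plus any JOINs beyond the first, clamped to the 1 to 5 range. This is an illustrative reconstruction, not the generation code:

```python
# Illustrative reconstruction of the Difficulty heuristic described above:
# count of Main Syntax items, plus any JOINs beyond the first, clamped to 1..5.
def difficulty(main_syntax: list[str], join_count: int) -> int:
    score = len(main_syntax) + max(0, join_count - 1)
    return max(1, min(5, score))
```

Applied to the sample instance above (`Main Syntax` of `["JOIN", "ORDER BY"]`, `JOIN Count` of 1) this yields a Difficulty of 2, matching the record.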
### Data Splits
| | train |
|---------|-------|
| dataset | 3299 |
## Others
### Precautions when Using
- This dataset was generated using the gpt-4o model, and there may be inaccuracies or incorrect data included. Please review and verify the data before use.
### In Case of Discovering an Issue
- If you discover any inaccuracies or issues in this dataset, please share them in the community tab. Thank you for your contribution!"
youjunhyeok/Magpie-Qwen2-Pro-300K-Filtered-ko,"{""language"": [""ko""], ""task_categories"": [""text-generation""], ""dataset_info"": {""features"": [{""name"": ""uuid"", ""dtype"": ""string""}, {""name"": ""model"", ""dtype"": ""string""}, {""name"": ""gen_input_configs"", ""struct"": [{""name"": ""temperature"", ""dtype"": ""float64""}, {""name"": ""top_p"", ""dtype"": ""float64""}, {""name"": ""input_generator"", ""dtype"": ""string""}, {""name"": ""seed"", ""dtype"": ""null""}, {""name"": ""extract_input"", ""dtype"": ""string""}]}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""response"", ""dtype"": ""string""}, {""name"": ""conversations"", ""list"": [{""name"": ""from"", ""dtype"": ""string""}, {""name"": ""value"", ""dtype"": ""string""}]}, {""name"": ""task_category"", ""dtype"": ""string""}, {""name"": ""other_task_category"", ""sequence"": ""string""}, {""name"": ""task_category_generator"", ""dtype"": ""string""}, {""name"": ""difficulty"", ""dtype"": ""string""}, {""name"": ""intent"", ""dtype"": ""string""}, {""name"": ""knowledge"", ""dtype"": ""string""}, {""name"": ""difficulty_generator"", ""dtype"": ""string""}, {""name"": ""input_quality"", ""dtype"": ""string""}, {""name"": ""quality_explanation"", ""dtype"": ""string""}, {""name"": ""quality_generator"", ""dtype"": ""string""}, {""name"": ""llama_guard_2"", ""dtype"": ""string""}, {""name"": ""reward_model"", ""dtype"": ""string""}, {""name"": ""instruct_reward"", ""dtype"": ""float64""}, {""name"": ""min_neighbor_distance"", ""dtype"": ""float64""}, {""name"": ""repeat_count"", ""dtype"": ""int64""}, {""name"": ""min_similar_uuid"", ""dtype"": ""string""}, {""name"": ""instruction_length"", ""dtype"": ""int64""}, {""name"": ""response_length"", ""dtype"": ""int64""}, {""name"": ""language"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1499958065, ""num_examples"": 254271}], ""download_size"": 725079837, ""dataset_size"": 1499958065}, 
""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""tags"": [""instruction"", ""korean"", ""magpie""], ""size_categories"": [""100K
If you would like to benchmark a model beforehand, https://github.com/qwopqwop200/ko-arena-hard-auto may be helpful.
This dataset is [arena-hard-auto-v0.1](https://huggingface.co/datasets/lmarena-ai/arena-hard-auto-v0.1) translated into Korean with `GPT-4o` and `o1` and then manually reviewed.
There may still be mistranslations, overly liberal translations, or unnatural phrasing. If you find any, please open an issue or submit a PR with a fix.
```python
# Prompt template
""""""<|User Prompt|>\n{question_1}\n\n<|The Start of Assistant A's Answer|>\n{answer_1}\n<|The End of Assistant A's Answer|>\n\n<|The Start of Assistant B's Answer|>\n{answer_2}\n<|The End of Assistant B's Answer|>""""""
# Translated prompt template
""""""<|사용자 프롬프트|>\n{question_1}\n\n<|어시스턴트 A의 답변 시작|>\n{answer_1}\n<|어시스턴트 A의 답변 끝|>\n\n<|어시스턴트 B의 답변 시작|>\n{answer_2}\n<|어시스턴트 B의 답변 끝|>""""""
# Prompt
""""""Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user prompt displayed below. You will be given assistant A's answer and assistant B's answer. Your job is to evaluate which assistant's answer is better.\n\nBegin your evaluation by generating your own answer to the prompt. You must provide your answers before judging any answers.\n\nWhen evaluating the assistants' answers, compare both assistants' answers with your answer. You must identify and correct any mistakes or inaccurate information.\n\nThen consider if the assistant's answers are helpful, relevant, and concise. Helpful means the answer correctly responds to the prompt or follows the instructions. Note when user prompt has any ambiguity or more than one interpretation, it is more helpful and appropriate to ask for clarifications or more information from the user than providing an answer based on assumptions. Relevant means all parts of the response closely connect or are appropriate to what is being asked. Concise means the response is clear and not verbose or excessive.\n\nThen consider the creativity and novelty of the assistant's answers when needed. Finally, identify any missing important information in the assistants' answers that would be beneficial to include when responding to the user prompt.\n\nAfter providing your explanation, you must output only one of the following choices as your final verdict with a label:\n\n1. Assistant A is significantly better: [[A>>B]]\n2. Assistant A is slightly better: [[A>B]]\n3. Tie, relatively the same: [[A=B]]\n4. Assistant B is slightly better: [[B>A]]\n5. Assistant B is significantly better: [[B>>A]]\n\nExample output: \""My final verdict is tie: [[A=B]]\"".""""""
# Translated prompt
""""""아래에 표시된 사용자 프롬프트에 대해 두 AI 어시스턴트가 제공한 응답의 품질을 평가하는 공정한 심판으로서 행동해 주십시오. 당신에게 어시스턴트 A의 답변과 어시스턴트 B의 답변이 주어집니다. 당신의 일은 어느 어시스턴트의 답변이 더 나은지 평가하는 것입니다.\n\n평가를 시작하기 전에, 먼저 프롬프트에 대한 당신의 답변을 생성하십시오. 어떤 답변을 평가하기 전에 반드시 당신의 답변을 제공해야 합니다.\n\n어시스턴트들의 답변을 평가할 때, 당신의 답변과 두 어시스턴트의 답변을 비교하십시오. 어떠한 실수나 부정확한 정보가 있는지 식별하고 수정해야 합니다.\n\n그런 다음 어시스턴트들의 답변이 도움이 되는지, 관련성이 있는지, 그리고 간결한지 고려하십시오. 도움이 된다는 것은 답변이 프롬프트에 대하여 정확하게 응답하거나 지시을 따르는 것을 의미합니다. 사용자 프롬프트에 모호성이 있거나 여러 해석이 가능한 경우, 가정에 기반한 답변을 제공하는 것보다 사용자에게 명확히 하거나 더 많은 정보를 요청하는 것이 더 도움이 되고 적절하다는 점에 유의하십시오. 관련성 있다는 것은 응답의 모든 부분이 묻는 내용과 밀접하게 연결되거나 적절하다는 것을 의미합니다. 간결하다는 것은 응답이 명확하고 장황하거나 과도하지 않다는 것을 의미합니다.\n\n그런 다음 필요할 때 어시스턴트들의 답변이 얼마나 창의적이고 참신한지 고려하십시오. 마지막으로, 사용자 프롬프트에 응답할 때 포함하면 유용할 중요한 정보가 어시스턴트들의 답변에 누락되어 있는지 확인하십시오.\n\n이유을 제공한 후, 최종 판단으로 다음 선택지 중 하나만 레이블과 함께 출력해야 합니다:\n\n어시스턴트 A가 훨씬 더 좋음: [[A>>B]]\n어시스턴트 A가 약간 더 좋음: [[A>B]]\n무승부, 거의 동일함: [[A=B]]\n어시스턴트 B가 약간 더 좋음: [[B>A]]\n어시스턴트 B가 훨씬 더 좋음: [[B>>A]]\n\n예시 출력: ""제 최종 판단은 무승부입니다: [[A=B]]"".""""""
```
The format of the following questions was changed because the original format was difficult to preserve.
Indices: `1, 28, 29`
The format of the following questions was changed to steer answers toward Korean.
Indices: `30, 379, 190`
# Comparison with m-ArenaHard
[m-ArenaHard](https://huggingface.co/datasets/CohereForAI/m-ArenaHard) is a dataset from CohereForAI that translated [arena-hard-auto-v0.1](https://huggingface.co/datasets/lmarena-ai/arena-hard-auto-v0.1) with `Google Translate API v3` to benchmark the multilingual performance of LLMs, including Korean.
However, due to the limitations of `Google Translate API v3`, it suffers from mistranslations, over-translation of code, and changes to the prompt format.
In contrast, for the ko-arena-hard-auto-v0.1 dataset the translations produced with `GPT-4o` and `o1` were manually checked and corrected to minimize these problems.
To highlight this, a few examples are compared below.
### 13
```
# Original:
Proof that Q(sqrt(-11)) is a principal ideal domain
# m-ArenaHard
Q(sqrt(-11))이 주 아이디얼 도메인임을 증명
# ko-arena-hard-auto-v0.1
Q(sqrt(-11))가 주 아이디얼 정역임을 증명하시오.
```
### 18
```
# Original:
How can I generate a seaborn barplot that includes the values of the bar heights and confidence intervals?
# m-ArenaHard
막대 높이와 신뢰 구간 값을 포함하는 시본 막대 그래프를 어떻게 생성할 수 있나요?
# ko-arena-hard-auto-v0.1
막대 높이와 신뢰 구간의 값을 포함하는 seaborn 막대 그래프를 생성하려면 어떻게 해야되?
```
### 25
```
# Original:
If I have a TypeScript class:
class Foo {
ReactProperties: {
a: string;
}
}
How do I extract the type of the ReactProperties member object from the type Class?
# m-ArenaHard
TypeScript 클래스가 있는 경우: class Foo { ReactProperties: { a: string; } } Class 유형에서 ReactProperties 멤버 객체의 유형을 추출하려면 어떻게 해야 하나요?
# ko-arena-hard-auto-v0.1
TypeScript 클래스가 있는 경우:
class Foo {
ReactProperties: {
a: string;
}
}
Class 타입에서 ReactProperties 멤버 객체의 타입을 어떻게 추출하니?
```
### 27
```
# Original:
Introduce Ethan, including his experience-level with software development methodologies like waterfall and agile development. Describe the major differences between traditional waterfall and agile software developments. In his opinion, what are the most notable advantages and disadvantages of each methodology?
# m-ArenaHard
Ethan을 소개하고, 폭포수 및 애자일 개발과 같은 소프트웨어 개발 방법론에 대한 그의 경험 수준을 포함합니다. 전통적인 폭포수 및 애자일 소프트웨어 개발의 주요 차이점을 설명합니다. 그의 의견으로는 각 방법론의 가장 주목할 만한 장단점은 무엇입니까?
# ko-arena-hard-auto-v0.1
waterfall 및 agile 개발과 같은 소프트웨어 개발 방법론에 대한 경험 수준을 포함하여 Ethan을 소개하세요. 전통적인 waterfall과 agile 소프트웨어 개발의 주요 차이점을 설명하세요. 그의 의견으로는 각 방법론의 가장 눈에 띄는 장점과 단점은 무엇입니까?
```
### 32
```
# Original:
Provide 15 attack vectors in Manufacturing sector and methods to mitigate the identied risks
# m-ArenaHard
제조 부문의 15가지 공격 벡터와 식별된 위험을 완화하는 방법을 제공합니다.
# ko-arena-hard-auto-v0.1
제조업 섹터의 공격 벡터 15개와 확인된 위험을 완화하기 위한 방법을 제공하십시오
```
### 40
```
# Original:
You are a data scientist, output a Python script in OOP for a contextual multi armed bandit sampling from 3 models
# m-ArenaHard
당신은 데이터 과학자이고 3개 모델에서 상황에 맞는 다중 무장 도적 샘플링을 위한 OOP로 Python 스크립트를 출력합니다.
# ko-arena-hard-auto-v0.1
당신은 데이터 과학자이며, 3개의 모델에서 샘플링하는 contextual multi armed bandit을 위한 파이썬 스크립트를 OOP 방식으로 출력해주세요.
```
### 46
```
# Original:
Give me a recipe for making 5L of strawberry and blackberry melomel. Use metric measurements.
# m-ArenaHard
딸기와 블랙베리 멜로멜 5L를 만드는 레시피를 알려주세요. 미터법 측정을 사용하세요.
# ko-arena-hard-auto-v0.1
5L의 딸기와 블랙베리 멜로멜을 만드는 레시피를 줘. 미터법을 사용해.
```
### 150
```
# Original:
Can you give me a swimming workout with a main set of 15x100 at 1:30 and in total around 4500m ? For an swimmer at an advanced level
# m-ArenaHard
1:30에 15x100의 메인 세트와 총 4500m 정도의 수영 운동을 해줄 수 있나요? 고급 레벨의 수영 선수를 위한
# ko-arena-hard-auto-v0.1
수영 상급자를 위해 1분 30초 간격으로 100m를 15회 하는 메인 세트를 포함하는 총 약 4500m의 수영 프로그램을 제공해 주실 수 있나요?
```
### 364
```
# Original:
write python code to web scrape https://naivas.online using beautiful soup
# m-ArenaHard
https://naivas.online에서 아름다운 수프를 사용하여 웹 스크래핑에 파이썬 코드를 작성하세요
# ko-arena-hard-auto-v0.1
beautiful soup을 사용해 https://naivas.online 웹을 스크래핑하는 파이썬 코드를 작성해
```
### 461
```
# Original:
help me remove column A based on this code data vertical3;
set vertical2;
format Treatment $Drug. Effectiveness $Effective. Sex $Sex. ;
# m-ArenaHard
이 코드 데이터 vertical3을 기반으로 열 A를 제거하도록 도와주세요; vertical2를 설정하세요; 치료 $약물. 효과 $효과. 성별 $성별. 형식을 지정하세요.
# ko-arena-hard-auto-v0.1
이 코드를 기반으로 열 A를 제거하도록 도와주세요 data vertical3;
set vertical2;
format Treatment $Drug. Effectiveness $Effective. Sex $Sex. ;
```"
Nexdata/Korean_Conversational_Speech_Data_by_Mobile_Phone,"{""YAML tags"": [{""copy-paste the tags obtained with the tagging app"": ""https://github.com/huggingface/datasets-tagging""}], ""task_categories"": [""conversational""], ""language"": [""ko""]}","# Dataset Card for Nexdata/Korean_Conversational_Speech_Data_by_Mobile_Phone
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/speechrecog/1103?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
About 700 Korean speakers participated in the recording, communicating face to face in a natural way. They held free discussions on a number of given topics spanning a wide range of fields; the speech is natural and fluent, matching real dialogue scenes. The text was transcribed manually, with high accuracy.
For more details, please refer to the link: https://www.nexdata.ai/datasets/speechrecog/1103?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
### Languages
Korean
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License
### Citation Information
[More Information Needed]
### Contributions"
prometheus-eval/MMQA,"{""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""test"", ""path"": ""data/train-*""}]}], ""language"": [""bn"", ""ko"", ""eu"", ""ca"", ""es"", ""vi"", ""ar""]}","## Links for Reference
- **Repository:** https://github.com/guijinSON/MM-Eval
- **Paper:** https://arxiv.org/abs/2410.17578
- **Point of Contact:** spthsrbwls123@yonsei.ac.kr / dkyoon@kaist.ac.kr
# **M**ultilingual **M**ulticultural-**Q**uestion **A**nswering (MMQA)
MMQA is a multilingual and multicultural long-form question-answering dataset, which originated as a subset of the [MM-Eval](https://huggingface.co/datasets/prometheus-eval/MM-Eval) benchmark.
MMQA features long-form question-answer pairs that inquire about culture-related contexts in seven languages: Bengali, Korean, Catalan, Basque, Spanish, Vietnamese, and Arabic. The dataset is designed to evaluate the ability of models to generate detailed, culturally informed answers across diverse languages and contexts.
### Languages Covered:
Bengali, Korean, Catalan, Basque, Spanish, Vietnamese, Arabic
### Citation:
If you find this dataset helpful, please consider citing our paper!
```
@article{son2024mm,
title={MM-Eval: A Multilingual Meta-Evaluation Benchmark for LLM-as-a-Judge and Reward Models},
author={Son, Guijin and Yoon, Dongkeun and Suk, Juyoung and Aula-Blasco, Javier and Aslan, Mano and Kim, Vu Trong and Islam, Shayekh Bin and Prats-Cristi{\`a}, Jaume and Tormo-Ba{\~n}uelos, Luc{\'\i}a and Kim, Seungone},
journal={arXiv preprint arXiv:2410.17578},
year={2024}
}
```"
PerRing/CulturaX_ko_10k,"{""language"": [""ko""], ""dataset_info"": {""features"": [{""name"": ""text"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 51498183.8771707, ""num_examples"": 10000}], ""download_size"": 29717993, ""dataset_size"": 51498183.8771707}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}]}",
ChuGyouk/MedExpQA-Kor,"{""license"": ""cc-by-4.0"", ""task_categories"": [""text-generation""], ""language"": [""ko""], ""tags"": [""medical""]}","# MedExpQA
Original Data: [HiTZ/MedExpQA](https://huggingface.co/datasets/HiTZ/MedExpQA)
The train and validation splits of the En subset were translated into Korean by ""solar-1-mini-translate-enko""."
nayohan/Maths-College-ko,"{""dataset_info"": {""features"": [{""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 292028782, ""num_examples"": 48499}], ""download_size"": 124837493, ""dataset_size"": 292028782}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""license"": ""apache-2.0"", ""task_categories"": [""question-answering""], ""language"": [""ko""], ""tags"": [""math"", ""ko""]}",Translated 5% [ajibawa-2023/Maths-College](https://huggingface.co/datasets/ajibawa-2023/Maths-College) using [nayohan/llama3-instrucTrans-enko-8b](https://huggingface.co/nayohan/llama3-instrucTrans-enko-8b).
phnyxlab/klue-nli-simcse,"{""dataset_info"": {""features"": [{""name"": ""premise"", ""dtype"": ""string""}, {""name"": ""entailment"", ""dtype"": ""string""}, {""name"": ""contradiction"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2022859.0657676577, ""num_examples"": 8142}, {""name"": ""validation"", ""num_bytes"": 224844.9342323422, ""num_examples"": 905}], ""download_size"": 1572558, ""dataset_size"": 2247704}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}, {""split"": ""validation"", ""path"": ""data/validation-*""}]}], ""language"": [""ko""], ""pretty_name"": ""k"", ""size_categories"": [""1K
**This dataset was prepared by converting the KLUE NLI dataset** for use in contrastive training (SimCSE). The code used to prepare the data is given below:
```py
import pandas as pd
from datasets import load_dataset, concatenate_datasets, Dataset
from torch.utils.data import random_split
class PrepTriplets:
@staticmethod
def make_dataset():
train_dataset = load_dataset(""klue"", ""nli"", split=""train"")
val_dataset = load_dataset(""klue"", ""nli"", split=""validation"")
merged_dataset = concatenate_datasets([train_dataset, val_dataset])
triplets_dataset = PrepTriplets._get_triplets(merged_dataset)
# Split back into train and validation
train_size = int(0.9 * len(triplets_dataset))
val_size = len(triplets_dataset) - train_size
train_subset, val_subset = random_split(
triplets_dataset, [train_size, val_size]
)
# Convert Subset objects back to Dataset
train_dataset = triplets_dataset.select(train_subset.indices)
val_dataset = triplets_dataset.select(val_subset.indices)
return train_dataset, val_dataset
@staticmethod
def _get_triplets(dataset):
df = pd.DataFrame(dataset)
entailments = df[df[""label""] == 0]
contradictions = df[df[""label""] == 2]
triplets = []
for premise in df[""premise""].unique():
entailment_hypothesis = entailments[entailments[""premise""] == premise][
""hypothesis""
].tolist()
contradiction_hypothesis = contradictions[
contradictions[""premise""] == premise
][""hypothesis""].tolist()
if entailment_hypothesis and contradiction_hypothesis:
triplets.append(
{
""premise"": premise,
""entailment"": entailment_hypothesis[0],
""contradiction"": contradiction_hypothesis[0],
}
)
triplets_dataset = Dataset.from_pandas(pd.DataFrame(triplets))
return triplets_dataset
# Example usage:
# PrepTriplets.make_dataset()
```
**How to download**
```py
from datasets import load_dataset
data = load_dataset(""phnyxlab/klue-nli-simcse"")
```
**If you use this dataset for research, please cite this paper:**
```
@misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jung-Woo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```"
qowlsdud/CounselGPT,{},"---
license: openrail
language:
- ko
---"
seongs/dell-qa-en-to-ko-translated-by-ke-t5-base,"{""license"": ""apache-2.0"", ""language"": [""ko""], ""task_categories"": [""question-answering"", ""translation""], ""size_categories"": [""10K
license: mit
---"
seongyeon1/nursing-home-qa,"{""language"": [""ko""], ""dataset_info"": {""features"": [{""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""input"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 43110, ""num_examples"": 82}], ""download_size"": 23380, ""dataset_size"": 43110}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}]}",
joonhok-exo-ai/korean_law_case_codes,"{""license"": ""openrail"", ""language"": [""ko""], ""tags"": [""legal""], ""size_categories"": [""n<1K""]}","# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [김준호](mailto:joonhok@smartfitnow.com)
### Dataset Summary
This dataset contains all case codes defined in the [Established Rules on the Assignment of Case Codes by Case Type (사건별 부호문자의 부여에 관한 예규, 재일 2003-1, 재판예규 제1769호)](https://glaw.scourt.go.kr/wsjo/gchick/sjo330.do?contId=3245922&q=%EC%82%AC%EA%B1%B4%EB%B3%84+%EB%B6%80%ED%98%B8%EB%AC%B8%EC%9E%90&nq=&w=total&pg=NaN#1696829627652).
## Additional Information
### Dataset Curators
김준호 ([LinkedIn](https://www.linkedin.com/in/joonho-kim/)): I created this dataset myself because I needed it while building an AI-powered legal service.
### Contributions
If you find any errors in the data, please contact [joonhok@smartfitnow.com](mailto:joonhok@smartfitnow.com) and I will verify and apply corrections."
nayohan/HelpSteer2-ko,"{""dataset_info"": {""features"": [{""name"": ""prompt"", ""dtype"": ""string""}, {""name"": ""response"", ""dtype"": ""string""}, {""name"": ""helpfulness"", ""dtype"": ""int64""}, {""name"": ""correctness"", ""dtype"": ""int64""}, {""name"": ""coherence"", ""dtype"": ""int64""}, {""name"": ""complexity"", ""dtype"": ""int64""}, {""name"": ""verbosity"", ""dtype"": ""int64""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 56968252, ""num_examples"": 20324}], ""download_size"": 20291307, ""dataset_size"": 56968252}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""task_categories"": [""text-generation""], ""language"": [""ko""], ""tags"": [""dpo""]}","Translated [nvidia/HelpSteer2](https://huggingface.co/datasets/nvidia/HelpSteer2) using [nayohan/llama3-instrucTrans-enko-8b](https://huggingface.co/nayohan/llama3-instrucTrans-enko-8b).
This dataset is a raw translated dataset and contains repetitive sentences generated by the model, so it needs to be filtered.
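As a starting point for that filtering, degenerate repetition can be flagged by counting duplicate sentences. This is a sketch under an assumed repeat threshold; the card does not prescribe a method:

```python
from collections import Counter
import re

def has_repeated_sentences(text: str, min_repeats: int = 3) -> bool:
    # Split on sentence-ending punctuation (including the Korean full stop)
    # and flag texts where any sentence occurs min_repeats or more times,
    # a crude signal of translation-model degeneration.
    sentences = [s.strip() for s in re.split(r"[.!?。]\s*", text) if s.strip()]
    counts = Counter(sentences)
    return any(c >= min_repeats for c in counts.values())
```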
```
@misc{wang2024helpsteer2,
title={HelpSteer2: Open-source dataset for training top-performing reward models},
author={Zhilin Wang and Yi Dong and Olivier Delalleau and Jiaqi Zeng and Gerald Shen and Daniel Egert and Jimmy J. Zhang and Makesh Narsimhan Sreedhar and Oleksii Kuchaiev},
year={2024},
eprint={2406.08673},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```"
youjunhyeok/Magpie-Qwen2-Pro-200K-English-ko,"{""language"": [""ko""], ""task_categories"": [""text-generation""], ""dataset_info"": {""features"": [{""name"": ""uuid"", ""dtype"": ""string""}, {""name"": ""model"", ""dtype"": ""string""}, {""name"": ""gen_input_configs"", ""struct"": [{""name"": ""temperature"", ""dtype"": ""float64""}, {""name"": ""top_p"", ""dtype"": ""float64""}, {""name"": ""input_generator"", ""dtype"": ""string""}, {""name"": ""seed"", ""dtype"": ""null""}, {""name"": ""extract_input"", ""dtype"": ""string""}]}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""response"", ""dtype"": ""string""}, {""name"": ""conversations"", ""list"": [{""name"": ""from"", ""dtype"": ""string""}, {""name"": ""value"", ""dtype"": ""string""}]}, {""name"": ""task_category"", ""dtype"": ""string""}, {""name"": ""other_task_category"", ""sequence"": ""string""}, {""name"": ""task_category_generator"", ""dtype"": ""string""}, {""name"": ""difficulty"", ""dtype"": ""string""}, {""name"": ""intent"", ""dtype"": ""string""}, {""name"": ""knowledge"", ""dtype"": ""string""}, {""name"": ""difficulty_generator"", ""dtype"": ""string""}, {""name"": ""input_quality"", ""dtype"": ""string""}, {""name"": ""quality_explanation"", ""dtype"": ""string""}, {""name"": ""quality_generator"", ""dtype"": ""string""}, {""name"": ""llama_guard_2"", ""dtype"": ""string""}, {""name"": ""reward_model"", ""dtype"": ""string""}, {""name"": ""instruct_reward"", ""dtype"": ""float64""}, {""name"": ""min_neighbor_distance"", ""dtype"": ""float64""}, {""name"": ""repeat_count"", ""dtype"": ""int64""}, {""name"": ""min_similar_uuid"", ""dtype"": ""string""}, {""name"": ""instruction_length"", ""dtype"": ""int64""}, {""name"": ""response_length"", ""dtype"": ""int64""}, {""name"": ""language"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1287322246, ""num_examples"": 197639}], ""download_size"": 623665981, ""dataset_size"": 1287322246}, 
""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""tags"": [""instruction"", ""korean"", ""magpie""], ""size_categories"": [""100K 내의 함수 서명 형태로 출력할 json 스키마도 제공받습니다. json 스키마에 어떤 값을 넣을지 가정하지 마세요. \n\n[{\""type\"": \""function\"", \""function\"": {\""name\"": \""ExpertQAExtractor\"", \""description\"": \""문서에서 개념이나 정보가 실제 상황에 어떻게 적용될 수 있는지를 묻는 질문의 목록을 추출합니다. 이는 지식을 적용할 수 있는 능력을 평가합니다.\"", \""parameters\"": {\""type\"": \""object\"", \""properties\"": {\""application_questions\"": {\""type\"": \""array\"", \""items\"": {\""type\"": \""string\""}}}, \""required\"": [\""application_questions\""]}}}]\n\n각 추출 함수 호출에 대해 함수 이름과 인수를 포함한 json 객체를 반환하고, 다음 스키마와 함께 태그를 사용하세요:\n\n{‘arguments’: , ‘name’: }\n""
},
{
""role"": ""user"",
""content"": ""다음 구문에서 쿼리를 추출하는 데 도움을 주실 수 있나요? \""O(n)\""이라고 발표된 경우, 이는 알고리즘을 실행하는 데 걸리는 시간이 _노드 수에 비례_함을 의미합니다. 이는 특정 밀리초의 수치를 의미하지는 않으며, 이는 사용하고 있는 컴퓨터 하드웨어의 유형, 프로그래밍 언어 및 기타 여러 가지에 크게 의존합니다. 그러나 O(n) 알고리즘에 대해 _우리가_ 말할 수 있는 것은 노드 수를 두 배로 늘리면 실행 시간이 대략 두 배가 된다는 것입니다. 노드 수를 네 배로 늘리면 실행 시간을 네 배로 늘어납니다. 이것은 당신이 기대하는 것과 같습니다.\n128 CHAPTER 5. 구조\n간단한 정렬되지 않은 이름 목록에서 \""몰리\""를 찾는 것은 O(n) 작업입니다. 목록에 천 개의 노드가 있다면 평균적으로 500개를 스캔한 후에 몰리를 찾게 될 것입니다. (운이 좋으면 초기에 몰리를 찾을 수도 있지만, 운이 나쁘면 끝에 가서야 몰리를 찾게 될 것입니다. 일반적으로 이는 리스트 크기의 절반에 해당합니다.) 그러나 백만 개의 노드가 있다면, 평균적으로 500,000회를 탐색해야 몰리를 찾을 수 있습니다. 열 배가 되는 노드 수는 몰리를 찾는 데 걸리는 시간을 열 배 늘립니다. 그리고 천 배의 노드 수는 천 배 더 오래 걸립니다. 정말 안타깝네요.\n몰리를 BST에서 찾는 것은 O(lg n) 과정입니다. \""lg\""는 로그(밑 2)를 의미한다고 기억하세요. 이는 노드 수를 두 배로 늘리면 실행 시간이 _미미하게_ 증가함을 의미합니다. 나무에 천 개의 노드가 있다고 가정해 봅시다. 몰리를 찾기 위해 500개를 살펴볼 필요는 없습니다; 단지 _10개_만 살펴보면 됩니다(1000의 lg는 약 10이기 때문입니다). 이제 노드 수를 백만으로 늘리면 몰리를 찾기 위해 500,000개를 살펴볼 필요는 없으며, _20개_만 살펴보면 됩니다. 만약 여러분의 나무에 60억 개의 노드가 있다면 (지구의 인구와 비슷합니다) 30개만 살펴보면 몰리를 찾을 수 있습니다. 정말 믿기 어렵습니다.\n**BST에 노드 추가하기**\nBST에서 사물을 찾는 것은 번개처럼 빠릅니다. 사실, 사물을 추가하는 것도 빠릅니다. 제니퍼라는 새로운 고객을 얻었고, 나중에 그녀의 계좌 정보를 검색할 수 있도록 BST에 추가해야 한다고 가정해봅시다. 우리는 제니퍼를 _찾고_ 있는 것과 같은 프로세스를 따릅니다. 하지만 그녀가 있어야 할 자리를 찾자마자 그녀를 추가합니다. 이 경우 제니퍼는 미치보다 먼저(왼쪽으로 가고), 제시카보다 먼저(다시 왼쪽으로 가고), 벤보다 나중(오른쪽으로 가고)입니다. 벤에게는 오른쪽 자식이 없으므로 그 지점에서 제시카를 추가합니다. (그림 5.26 참조.)\n이 추가 프로세스 또한 O(lg n) 알고리즘입니다. 우리는 나무의 높이와 같은 적은 수의 노드만 살펴보면 되기 때문입니다. \n새로운 항목은 추가될 때 항상 _단말_이 됩니다. 사실,\n5.2. 나무 129\n미치\n제시카\n벤 짐\n랜디\n오웬\n몰리\n잔더\n미치\n제시카\n벤\n제니퍼\n짐\n랜디\n오웬\n몰리\n잔더\n그림 5.26: 제니퍼를 추가한 후의 BST.\n이는 우리가 나무를 살펴보고 이전의 일부를 재구성할 수 있도록 합니다. 예를 들어, 우리는 미치가 원래 삽입된 첫 번째 노드였고, 랜디가 오웬, 잔더 또는 몰리보다 먼저 삽입되었음을 알고 있습니다. 연습으로 자신과 친구 몇 명의 이름을 이 트리에 추가하여 매력을 느껴보세요. 작업이 끝나면 물론 트리는 BST 속성을 준수해야 합니다. \n**BST에서 노드 제거하기**\n노드 제거는 추가하는 것보다 조금 더 까다롭습니다. 어떻게 트리의 구조를 망치지 않고 항목을 삭제할 수 있을까요? 몰리를 삭제하는 것은 쉽게 볼 수 있습니다. 그녀는 단지 단말이므로 제거하고 끝입니다. 하지만 제시카를 삭제하는 것은요? 또는 미치를 삭제하는 것은요?\n당신의 첫 번째 충동은 노드를 제거하고 자식 중 하나를 제자리에 올리는 것을 생각할 수 있습니다. 
예를 들어, 제시카를 삭제하면 벤을 제시카가 있던 곳으로 올려보내고 제니퍼를 벤에게 올리면 좋겠다고 생각할 수 있습니다. 그러나 이것은 작동하지 않습니다. 결과는 그림 5.27과 같을 것이며 제니퍼가 잘못된 위치에 있을 것입니다. 우리가 트리에서 제니퍼를 찾을 다음 번에는 벤의 _오른쪽_을 검색하게 되어 그녀를 전적으로 놓치게 됩니다. 제니퍼는 효과적으로 사라진 것입니다.\n미치\n제시카\n벤\n제니퍼\n짐\n랜디\n오웬\n몰리\n잔더\n#### !\n미치\n벤\n제니퍼 짐\n랜디\n오웬\n몰리\n잔더\n그림 5.27: 제시카를 잘못 제거한 후의 **잘못된** (비)BST.\n노드 제거의 한 가지 올바른 방법(다른 방법들도 있습니다)은 노드를 _오른쪽 서브트리의 가장 왼쪽 자손_으로 교체하는 것입니다. (또는, 동등하게, 왼쪽 서브트리의 가장 오른쪽 자손으로 교체하는 것). 이를 정의할 때 주의해야 합니다. 노드의 오른쪽 서브트리의 가장 왼쪽 자손을 얻으려면 (1) 노드의 _오른쪽_ 자식으로 가고, (2) 가능한 한 왼쪽으로 계속 이동합니다. 왼쪽 자식이 없는 노드에 도달할 때까지입니다. 그 노드(왼쪽 자식이 없는 노드)는 원래 노드의 오른쪽 서브트리의 가장 왼쪽 자손입니다.\n예시: 그림 5.17로 돌아가 봅시다 (페이지 117). G의 오른쪽 서브트리의 가장 왼쪽 자손은 무엇인가요? 답: A입니다. 우리는 G에서 H쪽으로 오른쪽으로 가고, 가능한 한 왼쪽으로 가는데, 결국 A에 도착하며 A에는 왼쪽 자식이 없습니다 (또는 오른쪽 자식도 없습니다). 이러한 추가 예제를 스스로 풀어보세요: K의 오른쪽 서브트리의 가장 왼쪽 자손은 무엇인가요? D의 것은요? H의 것은요?^5\n이제 그림 5.26로 돌아가서 제시카를 _올바른_ 방법으로 제거하겠습니다. 우리는 단순히 그녀의 오른쪽 서브트리의 가장 왼쪽 자손 - 즉, 짐을 찾아서 그 자리에 배치합니다. 그림 5.28이 결과를 보여줍니다. 우리는 그를 제시카의 오른쪽 자식으로 단순히 올렸기 때문에 그가 왼쪽 자손이 없었으므로 _그가 그녀의 오른쪽 서브트리에서 가장 왼쪽 노드였습니다._ (그가 왼쪽 자손이 있었다면 그를 올리는 것도 벤을 올리는 것과 마찬가지로 잘못되었습니다. 대신 우리는 짐에서 왼쪽으로 가며 더 이상 왼쪽으로 갈 수 없을 때까지 이동하고 _그_ 노드를 승진시켰을 것입니다.)\n미치\n제시카\n벤\n제니퍼\n짐\n랜디\n오웬\n몰리\n잔더\n미치\n짐\n벤\n제니퍼\n랜디\n오웬\n몰리\n잔더\n그림 5.28: 제시카를 올바르게 제거한 후의 BST.\n예시로, 우리는 과감하게 루트 노드인 미치를 제거해보겠습니다. 결과는 그림 5.29와 같습니다. 몰리에게는 정말 행복한 일이에요: 그녀는 단말에서 최상위로 승진했습니다. 왜 몰리인가요? 그녀가 미치의 오른쪽 서브트리에서 가장 왼쪽 자손이었기 때문입니다.\n왜 이것이 작동하는지 보려면, _몰리가 미치의 알파벳 순서에서 바로 뒤에 있었음을 고려하세요._ 그가 왕이고 그녀가 농민이라는 사실은 오해의 소지가 있었습니다. 두 사람은 실제로 매우 가까운 사이였습니다: 사실, 중위 순회에서 _연속적_이었습니다. 미치를 몰리로 교체하면 누군가의 알파벳 순서를 변경하지 않으며, 중요할 BST 속성을 보존합니다.\n132 CHAPTER 5. 구조\n미치\n짐\n벤\n제니퍼\n랜디\n오웬\n몰리\n잔더\n몰리\n짐\n벤\n제니퍼\n랜디\n오웬\n잔더\n그림 5.29: 미치를 제거한 후의 BST.\n**균형 유지**\n마지막으로, 이 놀랍게 빠른 조회는 트리가 \""풀 스타일\""로 되어 있는지에 크게 의존한다는 점을 기억하세요. 그렇지 않으면 h = lg(l)의 근사값이 상실됩니다. 우스꽝스러운 극단적 예시로, 그림 5.30을 고려해 보세요. 이는 우리가 계속 사용해 온 노드들을 포함하고 있습니다. 이것은 합법적인 이진 검색 트리입니다! (확인하세요!) 그러나 이 괴물에서 노드를 찾는 것은 노드를 평범한 목록에서 찾는 것보다 더 빨라 보이지 않습니다. 
우리는 다시 O(n) 성능으로 되돌아옵니다.\n실제로 이를 처리하는 세 가지 방법이 있습니다. 한 가지 접근법은 걱정하지 않는 것입니다. 결국, 노드를 무작위로 삽입하고 제거하는 한, 그림 5.30처럼 불균형한 트리가 생성될 확률은 천문학적으로 적습니다. 마치 카드 덱을 공중에 던지고 그것이 모두 정돈된 스택으로 떨어질 확률처럼 희귀합니다. 엔트로피 법칙은 우리가 짧은 가지와 긴 가지의 조합을 얻을 것이라고告诉しています. 큰 나무에서는 불균형도가 최소화될 것입니다.\n두 번째 접근법은 주기적으로 트리를 재조정하는 것입니다. 만약 우리의 웹사이트가 수시로 유지보수를 위해 오프라인된다면, 우리는 유익한 순서로 노드를 삽입하여 나무를 처음부터 다시 구축할 수 있습니다. 어떤 순서로 삽입해야 할까요? 음, 첫 번째로 삽입된 노드가 루트가 되도록 하여 우리는 가운데 노드를 먼저 우리의 나무에 삽입하고 싶습니다. 그래서 몰리가 새로운 루트가 됩니다. 이는 그녀의 왼쪽 서브트리에 절반의 노드가 남고 오른쪽에도 절반이 남게 합니다. 이 프로세스를 논리적으로(및 재귀적으로) 따르다 보면, 우리는 각 절반의 가운데 노드를 삽입하려고 할 것이라는 것을 인식하게 됩니다. 이는 제니퍼와 랜디(어떤 순서든 상관없이)를 삽입하게 됩니다. 나는 그것을 자의 눈금과 같다고 생각합니다: 먼저 반 인치를 삽입한 후, 1/4 및 3/4인치를 삽입한 후, 1/8, 3/8, 5/8 및 7/8인치를 삽입합니다. _기타 등등._ 이는 정기적으로 완벽하게 균형 잡힌 나무를 회복시켜 큰 불균형이 한산해지는 것을 더욱 어렵게 만듭니다.\n134 CHAPTER 5. 구조\n세 번째로, AVL 트리 및 레드-블랙 트리와 같은 특별한 데이터 구조가 있습니다. 이는 이진 검색 트리로, 불균형을 방지하기 위한 추가 규칙을 추가합니다. 기본적으로 아이디어는 노드가 삽입되거나(삭제되었을 때 특정 지표를 확인하여 변경이 불균형이 초래하지 않았는지 확인하는 것입니다. 만약 불균형이었다면, 나무는 불균형을 최소화하기 위해 조정됩니다. 이는 트리가 변경될 때마다 약간의 비용이 발생하지만 장기적으로는 느린 조회를 초래할 수 있는 불균형한 트리를 방지합니다.\n**5.3 마무리 말씀**\n와우, 구조에 대한 정보가 정말 많았습니다. 다음 장에서 전혀 다른 주제로 계속하기 전에 이 요약된 생각을 남기겠습니다. BST를 이진 검색 트리의 집합으로, BT를 이진 트리의 집합으로 합니다. RT를 뿌리 트리의 집합으로, T를 트리(자유형 혹은 뿌리형)의 집합으로 합니다. 마지막으로, CG를 연결 그래프의 집합으로, G를 모든 그래프의 집합으로 합니다. 그러면 우리는:\nBST ⊆ BT ⊆ RT ⊆ T ⊆ CG ⊆ G.\n아름다운 것이죠.""
},
{
""role"": ""assistant"",
""content"": ""\\n{\""arguments\"": {\""queries\"": [\""빅오 복잡성 개념이 실제 알고리즘의 효율성을 개선하는 데 어떻게 적용될 수 있을까요?\"",\""특정 값을 검색하기 위해 이진 검색 트리가 단순 정렬되지 않은 목록보다 더 실용적인 선택이 되는 상황은 무엇인가요?\"",\""이진 검색 트리에서 노드를 효율적으로 추가하고 제거하는 능력이 유용할 수 있는 실제 시나리오를 생각해볼 수 있나요?\""], \""name\"": \""ExpertQAExtractor\""}}\\n""
}
]
```
## Example 2 (role=tool에 주의하세요)
```json
[
{
""role"": ""system"",
""content"": ""당신은 함수 호출 AI 모델입니다. XML 태그 내에서 함수 시그니처가 제공됩니다. 사용자 쿼리를 지원하기 위해 하나 이상의 함수를 호출할 수 있습니다. 함수에 어떤 값을 삽입할지에 대한 가정을 하지 마십시오. 사용 가능한 도구는 다음과 같습니다: [{'type': 'function', 'function': {'name': 'get_stock_price', 'description': '현재 주가를 가져옵니다.', 'parameters': {'type': 'object', 'properties': {'stock_symbol': {'type': 'string', 'description': '주식의 심볼, 예: AAPL'}}, 'required': ['stock_symbol']}}}, {'type': 'function', 'function': {'name': 'create_new_post', 'description': '새 게시물을 생성합니다.', 'parameters': {'type': 'object', 'properties': {'title': {'type': 'string', 'description': '게시물의 제목'}, 'content': {'type': 'string', 'description': '게시물의 내용/본문'}, 'tags': {'type': 'array', 'items': {'type': 'string'}, 'description': '게시물과 관련된 태그'}}, 'required': ['title', 'content']}}}] 각 도구 호출에 대해 다음 pydantic 모델 json 스키마를 사용하십시오: {'title': 'FunctionCall', 'type': 'object', 'properties': {'arguments': {'title': 'Arguments', 'type': 'object'}, 'name': {'title': 'Name', 'type': 'string'}}, 'required': ['arguments', 'name']} 각 함수 호출에 대해 다음과 같이 함수 이름 및 인수의 JSON 객체를 포함한 XML 태그를 반환하십시오:\n\n{tool_call}\n""
},
{
""role"": ""user"",
""content"": ""안녕하세요, 애플의 현재 주가를 알려줄 수 있나요?""
},
{
""role"": ""assistant"",
""content"": ""\n{'name': 'get_stock_price', 'arguments': {'stock_symbol': 'AAPL'}}\n""
},
{
""role"": ""tool"",
""content"": ""\n{'stock_price': 150.75}\n""
},
{
""role"": ""assistant"",
""content"": ""애플(AAPL)의 현재 주가는 150.75달러입니다.""
},
{
""role"": ""user"",
""content"": ""그거 대단하네요. 마이크로소프트의 주가도 알려주실 수 있나요?""
},
{
""role"": ""assistant"",
""content"": ""\n{'name': 'get_stock_price', 'arguments': {'stock_symbol': 'MSFT'}}\n""
},
{
""role"": ""tool"",
""content"": ""\n{'주식_가격': 210.22}\n""
},
{
""role"": ""assistant"",
""content"": ""마이크로소프트(MSFT)의 현재 주식 가격은 210.22달러입니다.""
},
{
""role"": ""user"",
""content"": ""정보 주셔서 감사합니다.""
},
{
""role"": ""assistant"",
""content"": ""천만에요! 다른 질문이 있으면 언제든지 물어보세요.""
}
]
```
## Example 3 (여러번의 Tool 호출)
```json
[
{
""role"": ""system"",
""content"": ""당신은 함수 호출 AI 모델입니다. XML 태그 내에서 함수 시그니처가 제공됩니다. 사용자 쿼리를 도와주기 위해 하나 이상의 함수를 호출할 수 있습니다. 함수에 어떤 값을 입력해야 할지에 대해 추측하지 마십시오.\n\n[{'type': 'function', 'function': {'name': 'optimize_inventory_management', 'description': '현재 재고를 분석하고 미래 수요를 예측하여 재고 과잉 또는 재고 부족을 방지함으로써 재고 수준을 최적화합니다.', 'parameters': {'type': 'object', 'properties': {'inventory_database': {'type': 'string', 'description': '현재 재고 수준 및 제품 세부 정보를 포함한 재고 데이터의 JSON 파일입니다.'}}, 'required': ['inventory_database']}}}, {'type': 'function', 'function': {'name': 'select_optimal_suppliers', 'description': '비용, 품질 및 납기와 같은 기준에 따라 잠재적인 공급업체 목록을 분석하여 최상의 공급업체를 선택합니다.', 'parameters': {'type': 'object', 'properties': {'suppliers_list': {'type': 'string', 'description': '공급업체의 프로필 및 성과 지표를 포함한 잠재적 공급업체 데이터의 JSON 파일입니다.'}}, 'required': ['suppliers_list']}}}, {'type': 'function', 'function': {'name': 'enhance_product_tracking', 'description': '실시간 위치 업데이트 및 상태 보고서를 제공하여 공급망 전반에서 제품 추적을 향상시킵니다.', 'parameters': {'type': 'object', 'properties': {'tracking_system': {'type': 'string', 'description': '배송 ID 및 위치 추적 세부 정보를 포함한 현재 제품 추적 시스템 데이터의 JSON 파일입니다.'}}, 'required': ['tracking_system']}}}]\n\n각 함수 호출에 대해 다음 스키마와 함께 태그 내의 함수 이름과 인수로 JSON 객체를 반환하십시오:\n\n{'arguments': , 'name': }\n\n""
},
{
""role"": ""user"",
""content"": ""우리 소비자 전자기기 회사는 제품 라인을 확장하고 있으며 판매량이 증가하고 있습니다. 이러한 성장을 효과적으로 관리하기 위해 공급망 운영을 최적화하는 데 필요한 전문 기능이 필요합니다. 특히 세 가지 중요한 영역에 대한 지원이 필요합니다:\n\n1. **재고 관리**: 우리는 스마트폰, 노트북, 헤드폰 및 액세서스와 같은 품목의 현재 재고 수준을 분석해야 합니다. 우리는 양, 창고 위치, 판매 예측 및 재주문 점을 포함한 종합적인 재고 데이터를 보유하고 있습니다.\n\n재고 데이터 샘플:\n```json\n{\n \""products\"": [\n {\""item_id\"": \""SM123\"", \""name\"": \""스마트폰 모델 X\"", \""stock_level\"": 2500, \""warehouse_location\"": \""W1\"", \""sales_forecast\"": 3000, \""reorder_point\"": 1500},\n {\""item_id\"": \""LT456\"", \""name\"": \""노트북 파워-Y\"", \""stock_level\"": 1800, \""warehouse_location\"": \""W2\"", \""sales_forecast\"": 2200, \""reorder_point\"": 1000},\n ...\n {\""item_id\"": \""HP789\"", \""name\"": \""무선 헤드폰 Z\"", \""stock_level\"": 3200, \""warehouse_location\"": \""W1\"", \""sales_forecast\"": 3500, \""reorder_point\"": 2000}\n ]\n}\n```\n\n2. **공급업체 최적화**: 우리는 마이크로칩, 디스플레이 및 배터리와 같은 부품에 대한 가격표, 품질 지표 및 리드 타임을 포함한 공급업체 데이터를 수집했습니다.\n\n공급업체 목록 샘플:\n```json\n{\n \""suppliers\"": [\n {\""supplier_id\"": \""SUP1\"", \""name\"": \""MicroChips Ltd\"", \""component\"": \""microchip\"", \""cost\"": 1.50, \""quality_rating\"": 95, \""lead_time\"": 5},\n {\""supplier_id\"": \""SUP2\"", \""name\"": \""DisplayWorks\"", \""component\"": \""display\"", \""cost\"": 47.00, \""quality_rating\"": 90, \""lead_time\"": 10},\n ...\n {\""supplier_id\"": \""SUP3\"", \""name\"": \""BatteryPlus\"", \""component\"": \""battery\"", \""cost\"": 18.00, \""quality_rating\"": 85, \""lead_time\"": 7}\n ]\n}\n```\n\n3. 
**제품 추적 개선**: 우리는 현재 위치, 예상 도착 시간 및 상태 보고서를 포함한 전송 중인 품목에 대한 SKU 수준 정보를 추적하고 있습니다.\n\n추적 시스템 샘플:\n```json\n{\n \""shipments\"": [\n {\""shipment_id\"": \""SH001\"", \""sku\"": \""SM123\"", \""current_location\"": \""물류 센터 5\"", \""expected_arrival\"": \""2023-05-15\"", \""status\"": \""in_transit\""},\n {\""shipment_id\"": \""SH002\"", \""sku\"": \""LT456\"", \""current_location\"": \""항구 당국 부두 9\"", \""expected_arrival\"": \""2023-05-18\"", \""status\"": \""in_transit\""},\n ...\n {\""shipment_id\"": \""SH003\"", \""sku\"": \""HP789\"", \""current_location\"": \""물류 센터 2\"", \""expected_arrival\"": \""2023-05-20\"", \""status\"": \""delayed\""}\n ]\n}\n```\n\n재고 관리를 최적화하고 가장 적합한 공급업체를 선택하며 제품 추적을 향상시키기 위해 함수를 호출해 주세요. 이렇게 하면 균형 잡힌 재고 수준을 유지하고 조달을 개선하며 고객에게 정확한 배송 업데이트를 제공할 수 있습니다.""
},
{
""role"": ""assistant"",
""content"": ""\n{'arguments': {'inventory_database': '{\""products\"":[{\""item_id\"":\""SM123\"",\""name\"":\""스마트폰 모델 X\"",\""stock_level\"":2500,\""warehouse_location\"":\""W1\"",\""sales_forecast\"":3000,\""reorder_point\"":1500},{\""item_id\"":\""LT456\"",\""name\"":\""노트북 파워-Y\"",\""stock_level\"":1800,\""warehouse_location\"":\""W2\"",\""sales_forecast\"":2200,\""reorder_point\"":1000},{\""item_id\"":\""HP789\"",\""name\"":\""무선 헤드폰 Z\"",\""stock_level\"":3200,\""warehouse_location\"":\""W1\"",\""sales_forecast\"":3500,\""reorder_point\"":2000}]}'}}, 'name': 'optimize_inventory_management'}\n
\n{'arguments': {'suppliers_list': '{\""suppliers\"":[{\""supplier_id\"":\""SUP1\"",\""name\"":\""MicroChips Ltd\"",\""component\"":\""microchip\"",\""cost\"":1.50,\""quality_rating\"":95,\""lead_time\"":5},{\""supplier_id\"":\""SUP2\"",\""name\"":\""DisplayWorks\"",\""component\"":\""display\"",\""cost\"":47.00,\""quality_rating\"":90,\""lead_time\"":10},{\""supplier_id\"":\""SUP3\"",\""name\"":\""BatteryPlus\"",\""component\"":\""battery\"",\""cost\"":18.00,\""quality_rating\"":85,\""lead_time\"":7}]}'}}, 'name': 'select_optimal_suppliers'}\n\n
\n{'arguments': {'tracking_system': '{\""shipments\"":[{\""shipment_id\"":\""SH001\"",\""sku\"":\""SM123\"",\""current_location\"":\""물류 센터 5\"",\""expected_arrival\"":\""2023-05-15\"",\""status\"":\""in_transit\""},{\""shipment_id\"":\""SH002\"",\""sku\"":\""LT456\"",\""current_location\"":\""항구 당국 부두 9\"",\""expected_arrival\"":\""2023-05-18\"",\""status\"":\""in_transit\""},{\""shipment_id\"":\""SH003\"",\""sku\"":\""HP789\"",\""current_location\"":\""물류 센터 2\"",\""expected_arrival\"":\""2023-05-20\"",\""status\"":\""delayed\""}]}'}}, 'name': 'enhance_product_tracking'}\n\n""
}
]
```"
SugoiLoki/TranslationTraining,{},"---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: kor_sent
dtype: string
- name: eng_sent
dtype: string
- name: source
dtype: string
- name: similarity
dtype: float64
- name: from
dtype: string
- name: __index_level_0__
dtype: float64
splits:
- name: train
num_bytes: 784539402
num_examples: 3332436
download_size: 374217193
dataset_size: 784539402
task_categories:
- translation
language:
- en
- ko
pretty_name: loki
size_categories:
- 1M [!NOTE]
> Dataset origin: https://www.kaggle.com/datasets/zarajamshaid/language-identification-datasst
## Context
WiLI-2018, the Wikipedia language identification benchmark dataset, contains 235,000 paragraphs in 235 languages.
Each language in this dataset has 1,000 rows/paragraphs.
After data selection and preprocessing, I used 22 selected languages from the original dataset."
valencianatasha/AnythingLLM,{},"---
language:
- zh
- ja
- ko
license: openrail
task_categories:
- text-generation
pretty_name: tiny_demo
size_categories:
- n<1K
---"
SangMoone/KoreanDataSet,{},"---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 114301314
num_examples: 900000
download_size: 42379819
dataset_size: 114301314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- ko
pretty_name: aihub_도서자료 기계독해
tags:
- Aihub
- 한국지능정보사회진흥원
---"
TwinDoc/math-qa-ko,"{""dataset_info"": {""features"": [{""name"": ""mediatype"", ""dtype"": ""string""}, {""name"": ""medianame"", ""dtype"": ""string""}, {""name"": ""category"", ""dtype"": ""string""}, {""name"": ""title"", ""dtype"": ""string""}, {""name"": ""context"", ""dtype"": ""string""}, {""name"": ""question"", ""dtype"": ""string""}, {""name"": ""calculation_type"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 302532422, ""num_examples"": 168555}], ""download_size"": 171217683, ""dataset_size"": 302532422}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""license"": ""cc-by-nc-sa-4.0"", ""task_categories"": [""question-answering""], ""language"": [""ko""], ""tags"": [""economy"", ""mathQA""], ""size_categories"": [""100K Train > json 파일을 DataFrame 형태로 변형하여 수정 없이 업로드하였습니다.
- 저작권에 의해 본 데이터는 외부 반출 및 타인의 access 승낙은 불허합니다.
## 데이터 설명
- 본 데이터의 Type 은 `['양자/다자비교', '비율연산', '단서추출', '날짜추출', '가산/감산', '날짜가산/감산', '경계추출']` 로 구성되어 있습니다.
- '단서추출' 데이터 예시
```
{'idx': 'kpf.02100351.20220202090230002',
'mediatype': '뉴스',
'medianame': '이투데이',
'category': '경제',
'source': 'https://www.etoday.co.kr/news/view/2101739',
'date': '2022-02-02',
'title': '""고객이 직접 아이디어 제안""…SSG닷컴, 디자인ㆍ친환경 공모전 진행',
'passage': ""SSG닷컴은 고객이 직접 아이디어를 제안할 수 있는 디자인 공모전 및 친환경 아이디어 공모전을 동시 진행한다고 2일 밝혔다.\n총 상금 규모는 5000여만 원에 달하며 수상자 100여 명을 선정할 예정이다.\n우선 3일부터 20일까지 '제 1회 이마트몰 디자인 공모전'을 개최한다. 쓱배송의 쓱케치북을 주제로 선정해 SSG닷컴 당일 시간대 지정 배송 서비스인 쓱배송에 적용할 수 있는 디자인을 공모 받는다.\n디자인 주제는 캐릭터, 쓱배송, 장보기 등 3가지다. SSG닷컴 회원 누구나 참여할 수 있으며 ID당 최대 3개 작품까지 응모할 수 있다. 수상자 발표는 다음달 17일 예정이다.\n총 상금 규모는 3000만 원이다. 1등 쓱카소상 3명에게는 SSG머니 500만 원, 2등 쓱크리에이터 6명은 SSG머니 100만 원을 제공한다. 3등 쓱케치상 12명은 SSG머니 50만 원, 4등 쓱린이상 30명에게는 SSG머니 30만 원을 증정한다.\n추가로 28일까지 환경재단과 함께 '지쓱 가능한 세상을 위한 친환경 아이디어 공모전'도 시행한다. 자유 형식으로 구성해 누구나 쉽게 아이디어를 응모할 수 있다. SSG닷컴 직원 및 외부 환경전문가로 구성된 심사위원회를 거쳐 당선작을 선발한다. 내달 18일 이후 수상자 발표 예정이다.\n대상 1명에게 상금 500만 원, 최우수상 2명은 상금 300만 원, 우수상 6명은 상금 100만 원을 지급한다. 특별상 40명에게는 신세계상품권 5만 원을 증정한다.\n김진설 SSG닷컴 마케팅담당은 “'쓱닷컴이 내 회사'라는 생각을 가질 수 있는 공모전을 통해 자연스럽게 '팬슈머'를 확보하고 브랜드 충성도를 높일 수 있을 것으로 기대한다”며 “당선작은 향후 쓱닷컴이 진행하는 프로모션에 활용 예정이다”고 했다."",
'qa_pairs': [{'query_id': 'cb1f217d-941b-4289-8ccd-6fc445a576a3',
'question': 'SSG닷컴이 진행하는 제 1회 이마트몰 디자인 공모전에서 1등 수상자가 받는 SSG머니는 총상금 대비 몇 %인가?',
'answer': {'number': None,
'date': None,
'spans': [{'calculation': '5000000/30000000*100',
'calculation_type': '단서추출',
'text': ""우선 3일부터 20일까지 '제 1회 이마트몰 디자인 공모전'을 개최한다. 쓱배송의 쓱케치북을 주제로 선정해 SSG닷컴 당일 시간대 지정 배송 서비스인 쓱배송에 적용할 수 있는 디자인을 공모 받는다.\n디자인 주제는 캐릭터, 쓱배송, 장보기 등 3가지다. SSG닷컴 회원 누구나 참여할 수 있으며 ID당 최대 3개 작품까지 응모할 수 있다. 수상자 발표는 다음달 17일 예정이다.\n총 상금 규모는 3000만 원이다. 1등 쓱카소상 3명에게는 SSG머니 500만 원, 2등 쓱크리에이터 6명은 SSG머니 100만 원을 제공한다."",
'start_index': 115,
'end_index': 407}]}}]}
```
## 데이터 변환 코드
- AI-HUB 에서 다운로드 받은 json 파일을 DataFrame 형태로 변형 시 사용한 코드입니다.
```
import json
import pandas as pd
from collections import defaultdict
def convert2df(json_file):
    data_dict = defaultdict(list)
    for data in json_file.get('data'):
        mediatype = data.get('mediatype')
        medianame = data.get('medianame')
        category = data.get('category')
        title = data.get('title')
        context = data.get('passage')
        if len(data.get('qa_pairs')) > 1:
            import pdb; pdb.set_trace()
        else:
            answer = data.get('qa_pairs')[0].get('answer')
            question = data.get('qa_pairs')[0].get('question')
            if answer.get('spans') is not None:
                calculation_type = answer.get('spans')[0].get('calculation_type')
            elif answer.get('number') is not None:
                calculation_type = answer.get('number').get('calculation_type')
            elif answer.get('date') is not None:
                calculation_type = answer.get('date').get('calculation_type')
            else:
                import pdb; pdb.set_trace()
            data_dict['mediatype'].append(mediatype)
            data_dict['medianame'].append(medianame)
            data_dict['category'].append(category)
            data_dict['title'].append(title)
            data_dict['context'].append(context)
            data_dict['question'].append(question)
            data_dict['calculation_type'].append(calculation_type)
            data_dict['answer'].append(answer)
    return data_dict

data_dict = convert2df(json_file)
df = pd.DataFrame(data_dict)
```
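위 변환 코드에서 answer 필드는 spans / number / date 중 하나만 채워지는 구조입니다. 이 분기 로직만 떼어 낸 최소 스케치는 다음과 같습니다 (아래의 answer 값은 실제 데이터가 아닌 설명용 가상 예시입니다).
```
def get_calculation_type(answer):
    # answer dict 에서 spans → number → date 순으로 calculation_type 을 찾습니다.
    if answer.get('spans') is not None:
        return answer['spans'][0].get('calculation_type')
    if answer.get('number') is not None:
        return answer['number'].get('calculation_type')
    if answer.get('date') is not None:
        return answer['date'].get('calculation_type')
    raise ValueError('answer 에 spans/number/date 가 모두 없습니다.')

# 위 '단서추출' 예시와 같은 형태의 answer (설명용 축약본)
answer = {
    'number': None,
    'date': None,
    'spans': [{'calculation': '5000000/30000000*100',
               'calculation_type': '단서추출'}],
}
print(get_calculation_type(answer))  # 단서추출
```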
## License
The content of this project, created by AGILESODA, is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)."
saillab/alpaca_korean_taco,"{""language"": [""ko""], ""pretty_name"": ""Korean alpaca-52k"", ""size_categories"": [""100K>> from datasets import load_dataset
>>> ds = load_dataset(""onit3772/EvChargerFineTune.1a"", split='train')
>>> ds
Dataset({
features: ['instruction', 'output', 'url'],
num_rows: 3
})
```
```python
>>> ds[0]
{
""instruction"": ""회원 카드 인증 오류"",
""output"": ""회원 인증 오류에 대해 즉시 확인 중입니다. 잠시 후에 다시 시도해주세요.더 많은 도움이 필요하시다면 언제든지 도와드리겠습니다."",
""url"": ""https://ai.on-it.co.kr""
}
```"
jin-code/test3,{},"---
language:
- ko
license: cc-by-4.0
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: train
path: data/*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 46130
num_examples: 10
---"
jr-d-analyst24/ai_hub_narr_sum_vali,{},"---
license: apache-2.0
task_categories:
- summarization
language:
- ko
size_categories:
- 1K
license: mit
---"
jiinnn/llama-custom-dataset,{},"---
license: mit
language:
- ko
task_categories:
- text-generation
tags:
- llama
size_categories:
- 1K Train > json 파일을 DataFrame 형태로 변형하여 전처리 및 답변 생성을 하였습니다.
- Raw 데이터의 answer 정보를 참고하여 답변을 생성하였습니다.
- 답변 생성 시 gpt-4o 를 활용했습니다.
- 저작권에 의해 본 데이터는 외부 반출 및 타인의 access 승낙은 불허합니다.
## 데이터 설명
- 본 데이터의 Type 은 `'단서추출'` 로만 구성되어 있습니다.
- 데이터 예시
```
### context ###
서울시가 민속 대명절인 추석을 맞아 내달 1일부터 20일까지 상생상회(매장), 네이버(온라인), 롯데백화점(매장)과 함께 팔도특산물로 구성된 명절 직거래장터를 진행한다고 31일 밝혔다.
팔도특산물을 구매할 수 있는 지역상생 거점공간인 '상생상회' 매장에서는 상주, 제주 등 14개 시도의 117개 농가에서 생산한 총 416개 상품을 판매한다.
상주 곶감, 산청 상황버섯, 논산 딸기식초 등 각 지역을 대표하는 특산물로 구성된 추석 선물세트와 건나물, 약과, 전통주 등 제수상품을 원가보다 최대 50% 할인된 가격에 구입할 수 있다.
네이버에서는 전국 128개 농가의 지역 농수산식품 128개 품목을 구입할 수 있는 온라인 특별전을 1일부터 12일까지 진행한다. 추석 선물세트 뿐만 아니라 농수축산 및 가공식품 등 다양한 지역 상품을 구입할 수 있다.
또한 네이버 쇼핑라이브 방송을 통해 가평 홍로사과, 상주 샤인머스켓 등 추석 선물세트 판매방송도 진행한다. 1일부터 8일 중 5회 방송 예정이며, 생산자와 소비자가 실시간으로 소통하며 구매 가능하다.
롯데백화점에서는 전국 총 32개 지점에 배치된 추석 선물세트 카탈로그 '추석마중'를 통하여 전국 9개 농가, 9개 품목을 내달 20일까지 구입할 수 있다. 롯데 추석선물 카탈로그에 소개 되는 제품은 그동안 '시시호시' 판매코너에서 인기리에 판매됐던 서산 감태 세트부터 충주 사과한과, 영광 굴비 세트, 상주 곶감 세트 등 지역의 우수 농특산물로 구성됐다.
김광덕 서울시 도시농업과장은 “추석을 맞아 지역의 우수 농특산물 판매를 촉진하고, 코로나19로 판로확보에 어려움을 지역 농가의 어려움을 돕기 위해 특별전을 마련하게 됐다”면서 “지속적인 온오프라인 판로 지원을 통해 지역과 서울, 농어민과 소비자가 모두 상생할 수 있도록 노력하겠다” 고 말했다.
### question ###
상생상회 매장에서 판매되는 농가 생산 상품수는 네이버에서 판매되는 상품 수 보다 몇 % 더 많은가?
{'calculation': '(416-128)/128*100',
'calculation_type': '단서추출',
'end_index': 369,
'start_index': 104,
'text': ""팔도특산물을 구매할 수 있는 지역상생 거점공간인 '상생상회' 매장에서는 상주, 제주 등 14개 시도의 117개 농가에서 ""
'생산한 총 416개 상품을 판매한다.\n'
'상주 곶감, 산청 상황버섯, 논산 딸기식초 등 각 지역을 대표하는 특산물로 구성된 추석 선물세트와 건나물, 약과, 전통주 '
'등 제수상품을 원가보다 최대 50% 할인된 가격에 구입할 수 있다.\n'
'네이버에서는 전국 128개 농가의 지역 농수산식품 128개 품목을 구입할 수 있는 온라인 특별전을 1일부터 12일까지 '
'진행한다.'}
### gen_answer ###
상생상회 매장에서 판매되는 농가 생산 상품 수는 총 416개입니다. 반면, 네이버에서는 128개의 농수산식품 품목을 판매합니다. 따라서 상생상회 매장에서 판매되는 상품 수는 네이버에서 판매되는 상품 수보다 더 많습니다.
이를 퍼센트로 계산하면 다음과 같습니다:
\[ \frac{(416 - 128)}{128} \times 100 = 225\% \]
즉, 상생상회 매장에서 판매되는 농가 생산 상품 수는 네이버에서 판매되는 상품 수보다 225% 더 많습니다. 이는 상생상회가 다양한 지역의 특산물을 더 많이 제공하고 있음을 의미합니다.
```
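예시의 calculation 필드는 사칙연산 문자열이므로, 생성된 답변의 수치를 검증할 때 아래와 같이 계산해 볼 수 있습니다. ast 기반의 안전한 평가기를 사용한 스케치이며, 카드 본문에 포함된 코드는 아닙니다.
```
import ast
import operator

# 숫자와 + - * / (및 괄호)만 허용하는 간단한 수식 평가기입니다.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_calc(expr):
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError('허용되지 않는 수식: ' + expr)
    return ev(ast.parse(expr, mode='eval'))

print(safe_calc('(416-128)/128*100'))  # 225.0
```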
## 답변 생성 코드
```
## model choice
model = ""gpt-4o""
## Prompt template
system_string = '##context## and ##question## 이 주어지면, ##json answer## 참고하여 친절하고 상세한 답변을 만들어주세요. 답변에는 근거가 포함된 상세한 설명이 있어야 합니다.'
question = '''##context##
{context}
##question##
{question}'''
answer = '''##json answer##
{json_answer}
해당 정보에서 계산의 결과는 calculation 값을 참고해야 하며, 답변을 위한 상세 설명을 위해서는 text 값을 참고해서 친절하고 상세한 답변을 생성해야 합니다.
##text answer##
{text_answer}'''
## context for one shot
one_shot_question = question.format(
    context = new_df.context[6],
    question = new_df.question[6]
)
one_shot_answer = answer.format(
    json_answer = eval(new_df.answer[6]).get(""spans"")[0],
    text_answer = ""경기도가 올해부터 시행하는 '착한 임대인 지원사업'에 따르면, 50만 원 이상 100만 원 미만의 임대료를 인하한 임대인에게는 10만 원의 인센티브가 지급됩니다. 반면, 700만 원 이상 임대료를 인하한 임대인에게는 50만 원의 인센티브가 지급됩니다.\n\n따라서, 50만 원 이상 100만 원 미만 임대료를 인하한 임대인에게 지급되는 인센티브는 700만 원 이상 인하한 경우에 지급되는 인센티브의 20%입니다. 이는 10만 원을 50만 원으로 나누고 100을 곱한 결과로 계산할 수 있습니다:\n\n\\[ \\frac{10만 원}{50만 원} \\times 100 = 20\\% \\]\n\n이 계산을 통해, 50만 원 이상 100만 원 미만 임대료를 인하한 임대인에게 지급되는 인센티브가 700만 원 이상 인하한 경우 대비 20%임을 알 수 있습니다.""
)
## 생성하고자 하는 question 입력
current_question = question.format(
    context = new_df.context[index],
    question = new_df.question[index]
)
## 생성하고자 하는 answer 입력
current_answer = answer.format(
    json_answer = eval(new_df.answer[index]).get(""spans"")[0],
    text_answer = ''
)
## 답변 생성
completion = client.chat.completions.create(
    model = model,
    messages = [
        {""role"": ""system"", ""content"": system_string},
        {""role"": ""user"", ""content"": one_shot_question},
        {""role"": ""assistant"", ""content"": one_shot_answer},
        {""role"": ""user"", ""content"": current_question},
        {""role"": ""assistant"", ""content"": current_answer}
    ],
    temperature = 0,
    # timeout = 40,
    n = 1
)
```
## License
The content of this project, created by AGILESODA, is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)."
youngmon/atlassian-qna,"{""license"": ""mit"", ""task_categories"": [""question-answering""], ""language"": [""en"", ""ko"", ""zh"", ""ja"", ""es"", ""ru""], ""pretty_name"": ""Question and Answer for Atlassian Products"", ""size_categories"": [""100K
license: mit
---"
iknow-lab/wildguardmix-train-ko-11k,"{""dataset_info"": {""features"": [{""name"": ""prompt"", ""dtype"": ""string""}, {""name"": ""adversarial"", ""dtype"": ""bool""}, {""name"": ""response"", ""dtype"": ""string""}, {""name"": ""prompt_harm_label"", ""dtype"": ""string""}, {""name"": ""response_refusal_label"", ""dtype"": ""string""}, {""name"": ""response_harm_label"", ""dtype"": ""string""}, {""name"": ""subcategory"", ""dtype"": ""string""}, {""name"": ""prompt_ko"", ""dtype"": ""string""}, {""name"": ""response_ko"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 48289970, ""num_examples"": 11105}], ""download_size"": 24569189, ""dataset_size"": 48289970}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""license"": ""odc-by"", ""task_categories"": [""text-classification""], ""language"": [""ko""], ""size_categories"": [""10K Train > json 파일을 DataFrame 형태로 변형하여 전처리 및 답변 생성을 하였습니다.
- Raw 데이터의 answer 정보를 참고하여 답변을 생성하였습니다.
- 답변 생성 시 gpt-4o 를 활용했습니다.
- 저작권에 의해 본 데이터는 외부 반출 및 타인의 access 승낙은 불허합니다.
## 데이터 설명
- 본 데이터의 Type 은 `'가산/감산'` 로만 구성되어 있습니다.
- 데이터 예시
```
### context ###
2분기 순이익만 떼서 보면 증가세가 더욱 뚜렷하다. 신한금융은 9961억원, KB금융은 9911억원으로 1분기보다 각각 8.5%, 17.2% 늘었다. 하나금융은 6584억원, 우리금융은 6103억원으로 증가율은 각각 20.6%, 7.3%이다. 특히 KB금융은 분기 기준 사상 최대 실적을 올렸다.
수출 부진에 미·중 무역전쟁, 일본 수출규제 등의 내우외환이 겹쳐 저조한 성적표를 받은 일반 기업들과 달리 금융지주사들만 웃는 모습이다. 금융지주사들이 성적표에 제각각 '역대 최대'라는 수식어를 붙있을 수 있었던 원동력은 은행 담보대출 위주의 이자 수익 덕분이다. 이자 이익은 올해 상반기에도 증가세를 이어갔다. 신한·KB금융의 상반기 이자 이익은 각각 3조9041억원, 4조5492억원으로 지난해보다 5.6%, 4.8% 늘었다. 우리금융은 2조9309억원이었다. 하나금융은 지난해보다 5.3% 많은 2조8866억원을 기록했다. 이들 4개 금융 그룹이 상반기에 거둔 이자 이익은 총 14조2700억여원에 이른다.
### question ###
2분기 신한금융과 KB금융의 순이익의 합은 얼마인가?
### answer ###
{'number': {'calculation': '996100000000+991100000000', 'calculation_type': '가산/감산', 'number': '1987200000000', 'transcription': '일조구천팔백칠십이억', 'unit': '원'}, 'date': None, 'spans': None}
### gen_answer ###
2분기 신한금융과 KB금융의 순이익을 합산하면 1조9872억 원입니다. 신한금융의 순이익은 9961억 원, KB금융의 순이익은 9911억 원으로, 두 금융사의 순이익을 더한 값입니다.
```
## 답변 생성 코드
```
## model choice
model = ""gpt-4o""
## system prompt 입력
system_string = '##context## and ##question## 이 주어지면, ##json answer## 참고하여 친절하고 상세한 답변을 만들어주세요. 답변에는 근거가 포함된 상세한 설명이 있어야 합니다.'
question = '''##context##
{context}
##question##
{question}'''
answer = '''##json answer##
{json_answer}
해당 정보에서 calculation, number, transcription, unit 정보를 참고하여 자연스럽고 친절하며 상세한 답변을 생성하겠습니다.
##text answer##
{text_answer}'''
## context for one shot
one_shot_question = question.format(
    context = new_df.context[5],
    question = new_df.question[5]
)
one_shot_answer = answer.format(
    json_answer = eval(new_df.answer[5]).get(""number""),
    text_answer = '통계청에 따르면, 4월과 5월의 월 평균 소매 판매액은 43조2045억 원으로, 1분기의 월 평균 소매 판매액인 40조957억 원에 비해 3조1088억 원 증가했습니다.'
)
## context for two shot
two_shot_question = question.format(
    context = new_df.context[12],
    question = new_df.question[12]
)
two_shot_answer = answer.format(
    json_answer = eval(new_df.answer[12]).get(""number""),
    text_answer = '국토부가 지난 4월 실시한 수도권 5개 단지 신혼부부·다자녀 특별공급 당첨자 대상 표본 점검 결과, 임신진단서를 제출해 당첨된 83건 중 8건이 허위서류에 의한 부정청약으로 적발되었습니다. 따라서 부정청약이 아닌 건수는 75건입니다.'
)
## 생성하고자 하는 question 입력
current_question = question.format(
    context = new_df.context[index],
    question = new_df.question[index]
)
## 생성하고자 하는 answer 입력
current_answer = answer.format(
    json_answer = eval(new_df.answer[index]).get(""number""),
    text_answer = ''
)
## 답변 생성
completion = client.chat.completions.create(
    model = model,
    messages = [
        {""role"": ""system"", ""content"": system_string},
        {""role"": ""user"", ""content"": one_shot_question},
        {""role"": ""assistant"", ""content"": one_shot_answer},
        {""role"": ""user"", ""content"": two_shot_question},
        {""role"": ""assistant"", ""content"": two_shot_answer},
        {""role"": ""user"", ""content"": current_question},
        {""role"": ""assistant"", ""content"": current_answer}
    ],
    temperature = 0,
    # timeout = 40,
    n = 1
)
```
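위 코드의 client 와 new_df 는 카드에 정의되어 있지 않으므로, API 호출을 제외한 few-shot 메시지 구성 부분만 떼어 보면 다음과 같습니다 (함수 이름과 값은 설명용 가상 예시입니다).
```
def build_few_shot_messages(system_string, shots, current_question, current_answer):
    # shots: [(user 질문, assistant 답변), ...] 형태의 few-shot 예시 목록
    messages = [{'role': 'system', 'content': system_string}]
    for q, a in shots:
        messages.append({'role': 'user', 'content': q})
        messages.append({'role': 'assistant', 'content': a})
    messages.append({'role': 'user', 'content': current_question})
    # 카드의 방식대로, json answer 템플릿을 담은 assistant 턴을 마지막에 덧붙입니다.
    messages.append({'role': 'assistant', 'content': current_answer})
    return messages

msgs = build_few_shot_messages(
    'system prompt',  # 설명용 가상 값
    [('one-shot 질문', 'one-shot 답변'), ('two-shot 질문', 'two-shot 답변')],
    'current 질문', 'current answer 템플릿',
)
print([m['role'] for m in msgs])
# ['system', 'user', 'assistant', 'user', 'assistant', 'user', 'assistant']
```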
## License
The content of this project, created by AGILESODA, is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)."
overfit-brothers/KRX-INST,"{""configs"": [{""config_name"": ""example-it"", ""data_files"": ""example-it.csv""}, {""config_name"": ""finqa-50k-merged-quality_filtered"", ""data_files"": ""finqa-50k-merged-quality_filtered.csv""}, {""config_name"": ""financial-market-law-it"", ""data_files"": ""financial-market-law-it.csv""}, {""config_name"": ""financial_judgement_instructions"", ""data_files"": ""financial_judgement_instructions.csv""}, {""config_name"": ""evil-jailbreak-kor-train"", ""data_files"": ""evil-jailbreak-kor-train.csv""}, {""config_name"": ""evil-jailbreak-kor-test"", ""data_files"": ""evil-jailbreak-kor-test.csv""}, {""config_name"": ""Finace-Jailbreak-prompt"", ""data_files"": ""Finace-Jailbreak-prompt.csv""}], ""language"": [""ko""], ""license"": [""cc-by-nc-nd-4.0""], ""tags"": [""krx""]}","# 제3회 KRX 금융 언어 모델 경진대회 Instruction 데이터셋
- 팀명 : overfit-brothers
### 데이터셋 구성
- example-it
* KRX-Bench 예시 데이터
* 출처: KRX-Bench 예시 데이터
- finqa-50k-merged-quality_filtered
* ConvFinQA-ko
- fingpt-convfinqa 데이터셋을 샘플링하여 번역한 mncai/finance-tasks-ConvFinQA-ko 의 질문 문장을 gpt-4o API에게 제공하고 CoT 형식의 풀이 데이터 생성
- 데이터 개수: 26,450
- [출처]( https://huggingface.co/datasets/mncai/finance-tasks-ConvFinQA-ko?row=0)
- 라이센스: ConvFinQA-MIT License
* 튜토리얼 Instruction 데이터
- 경진대회 튜토리얼에서 제공하는 도메인 합성 데이터(with raw text/web text) 2종에 대해 gpt-4o API를 사용하여 퀄리티 1~5점 스케일로 필터링 후 5점을 획득한 샘플로 재구성
- [출처1](https://huggingface.co/datasets/Cartinoe5930/raw_text_synthetic_dataset_50k) , [출처2](https://huggingface.co/datasets/Cartinoe5930/web_text_synthetic_dataset_50k)
- 금융 판례 기반 instruction 데이터(financial_judgement_instructions)
* 판결문 sLLM 학습을 위한 데이터(Suchae/korean-judgment-easyread-transform)를 instruction 데이터로 변환
* 데이터 개수: 5611
* [출처](https://huggingface.co/datasets/Suchae/korean-judgment-easyread-transform)
* 라이센스: apache-2.0
- KRX 법규 데이터(financial-market-law-it)
* KRX 법규서비스 홈페이지에서 공통규정, 유가증권시장규정, 코스닥시장규정, 코넥스시장규정 업무규정에서 규정 데이터 81건 수집 후 gpt-4o API를 활용해 Instruction형식으로 변환
* 데이터 개수: 81
* [출처](https://law.krx.co.kr/las/TopFrame.jsp)
- evil-jailbreak-kor
* Many-shot Jailbreaking, prompt injection을 통해 총 100건의 유해 질문을 생성
* 기존 유해 질문 데이터셋 번역: Granther/evil-jailbreak, rubend18/ChatGPT-Jailbreak-Prompts
- Finace-Jailbreak-prompt
* 금융 관련 유해질문에 대한 방어 생성 성능 측정용 데이터셋
* 출처: 직접 제작"
CarrotAI/Korean-Common,"{""task_categories"": [""text-generation""], ""language"": [""ko""], ""size_categories"": [""100K
license: mit
---"
ymoslem/Tatoeba-Translations,"{""dataset_info"": {""features"": [{""name"": ""id_src"", ""dtype"": ""int64""}, {""name"": ""lang_src"", ""dtype"": ""string""}, {""name"": ""sentence_src"", ""dtype"": ""string""}, {""name"": ""id_tgt"", ""dtype"": ""int64""}, {""name"": ""lang_tgt"", ""dtype"": ""string""}, {""name"": ""sentence_tgt"", ""dtype"": ""string""}, {""name"": ""lang_pair"", ""sequence"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 1144194352, ""num_examples"": 8547819}], ""download_size"": 726390210, ""dataset_size"": 1144194352}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""language"": [""multilingual"", ""ab"", ""af"", ""am"", ""ar"", ""an"", ""as"", ""av"", ""ay"", ""az"", ""ba"", ""bm"", ""be"", ""bn"", ""bi"", ""bo"", ""bs"", ""br"", ""bg"", ""ca"", ""cs"", ""ch"", ""ce"", ""cv"", ""kw"", ""co"", ""cy"", ""da"", ""de"", ""dv"", ""el"", ""en"", ""eo"", ""et"", ""eu"", ""ee"", ""fo"", ""fj"", ""fi"", ""fr"", ""fy"", ""gd"", ""ga"", ""gl"", ""gv"", ""gn"", ""gu"", ""ht"", ""ha"", ""he"", ""hi"", ""hr"", ""hu"", ""hy"", ""ig"", ""io"", ""ii"", ""ie"", ""ia"", ""id"", ""is"", ""it"", ""jv"", ""ja"", ""kl"", ""kn"", ""ks"", ""ka"", ""kk"", ""km"", ""rw"", ""ky"", ""ko"", ""lo"", ""la"", ""li"", ""ln"", ""lt"", ""lb"", ""lg"", ""mh"", ""ml"", ""mr"", ""mk"", ""mg"", ""mt"", ""mn"", ""mi"", ""my"", ""na"", ""nv"", ""nl"", ""nn"", ""nb"", ""ny"", ""oc"", ""oj"", ""or"", ""os"", ""pa"", ""pi"", ""pl"", ""pt"", ""ps"", ""qu"", ""rm"", ""ro"", ""rn"", ""ru"", ""sg"", ""sa"", ""si"", ""sk"", ""sl"", ""se"", ""sm"", ""sn"", ""sd"", ""so"", ""st"", ""es"", ""sq"", ""sc"", ""sr"", ""ss"", ""su"", ""sv"", ""ty"", ""ta"", ""tt"", ""te"", ""tg"", ""tl"", ""th"", ""ti"", ""to"", ""tn"", ""ts"", ""tk"", ""tr"", ""ug"", ""uk"", ""ur"", ""uz"", ""vi"", ""vo"", ""wa"", ""wo"", ""xh"", ""yi"", ""yo"", ""zu""], ""license"": ""cc-by-2.0"", ""task_categories"": [""translation""], 
""size_categories"": [""1M
Full list of languages:
Abkhazian (abk), Adyghe (ady), Afrihili (afh), Afrikaans (afr), Ainu (Japan) (ain), Albanian (sqi), Algerian Arabic (arq), Amharic (amh), Ancient Greek (to 1453) (grc), Ancient Hebrew (hbo), Arabic (ara), Aragonese (arg), Armenian (hye), Assamese (asm), Assyrian Neo-Aramaic (aii), Asturian (ast), Avaric (ava), Awadhi (awa), Aymara (aym), Azerbaijani (aze), Balinese (ban), Baluchi (bal), Bambara (bam), Banjar (bjn), Bashkir (bak), Basque (eus), Bavarian (bar), Baybayanon (bvy), Belarusian (bel), Bengali (ben), Berber languages (ber), Berom (bom), Bhojpuri (bho), Bislama (bis), Bodo (India) (brx), Bosnian (bos), Breton (bre), Brithenig (bzt), Bulgarian (bul), Buriat (bua), Burmese (mya), Catalan (cat), Cayuga (cay), Cebuano (ceb), Central Bikol (bcl), Central Huasteca Nahuatl (nch), Central Kanuri (knc), Central Kurdish (ckb), Central Mnong (cmo), Central Okinawan (ryu), Chagatai (chg), Chamorro (cha), Chavacano (cbk), Chechen (che), Cherokee (chr), Chinese Pidgin English (cpi), Chinook jargon (chn), Choctaw (cho), Chukot (ckt), Chuvash (chv), Classical Syriac (syc), Congo Swahili (swc), Cornish (cor), Corsican (cos), Creek (mus), Crimean Tatar (crh), Croatian (hrv), Cuyonon (cyo), Czech (ces), Danish (dan), Dhivehi (div), Dimli (individual language) (diq), Drents (drt), Dungan (dng), Dutch (nld), Dutton World Speedwords (dws), Eastern Canadian Inuktitut (ike), Eastern Mari (mhr), Egyptian Arabic (arz), Emilian (egl), English (eng), Erromintxela (emx), Erzya (myv), Esperanto (epo), Estonian (est), Evenki (evn), Ewe (ewe), Extremaduran (ext), Faroese (fao), Fiji Hindi (hif), Fijian (fij), Finnish (fin), French (fra), Friulian (fur), Ga (gaa), Gagauz (gag), Galician (glg), Gan Chinese (gan), Ganda (lug), Garhwali (gbm), Georgian (kat), German (deu), Gheg Albanian (aln), Gilbertese (gil), Goan Konkani (gom), Gothic (got), Gronings (gos), Guadeloupean Creole French (gcf), Guarani (grn), Guerrero Nahuatl (ngu), Gujarati (guj), Gulf Arabic (afb), Gun (guw), Haitian (hat), 
Hakka Chinese (hak), Hausa (hau), Hawaiian (haw), Hebrew (heb), Hiligaynon (hil), Hindi (hin), Hmong Daw (mww), Hmong Njua (hnj), Ho (hoc), Hungarian (hun), Hunsrik (hrx), Iban (iba), Icelandic (isl), Ido (ido), Igbo (ibo), Iloko (ilo), Indonesian (ind), Ingrian (izh), Interglossa (igs), Interlingua (International Auxiliary Language Association) (ina), Interlingue (ile), Iranian Persian (pes), Irish (gle), Italian (ita), Jamaican Creole English (jam), Japanese (jpn), Javanese (jav), Jewish Babylonian Aramaic (ca. 200-1200 CE) (tmr), Jewish Palestinian Aramaic (jpa), Jinyu Chinese (cjy), Judeo-Tat (jdt), K'iche' (quc), Kabardian (kbd), Kabyle (kab), Kadazan Dusun (dtp / kzj), Kalaallisut (kal), Kalmyk (xal), Kamba (Kenya) (kam), Kannada (kan), Kara-Kalpak (kaa), Karachay-Balkar (krc), Karakhanid (xqa), Karelian (krl), Kashmiri (kas), Kashubian (csb), Kazakh (kaz), Kekchí (kek), Keningau Murut (kxi), Khakas (kjh), Khalaj (klj), Khasi (kha), Khmer (khm), Kinyarwanda (kin), Kirghiz (kir), Kirmanjki (individual language) (kiu), Klingon (tlh), Komi-Permyak (koi), Komi-Zyrian (kpv), Korean (kor), Kotava (avk), Kriang (ngt), Kumyk (kum), Kven Finnish (fkv), Kölsch (ksh), Ladin (lld), Ladino (lad), Lakota (lkt), Lao (lao), Latgalian (ltg), Latin (lat), Laz (lzz), Levantine Arabic (apc / ajp), Lezghian (lez), Libyan Arabic (ayl), Ligurian (lij), Limburgan (lim), Lingala (lin), Lingua Franca Nova (lfn), Literary Chinese (lzh), Lithuanian (lit), Liv (liv), Lojban (jbo), Lombard (lmo), Louisiana Creole (lou), Low German (nds), Lower Sorbian (dsb), Lushootseed (lut), Luxembourgish (ltz), Láadan (ldn), Macedonian (mkd), Madurese (mad), Mahasu Pahari (bfz), Maithili (mai), Malagasy (mlg), Malay (individual language) (zlm), Malayalam (mal), Maltese (mlt), Mambae (mgm), Manchu (mnc), Mandarin Chinese (cmn), Manipuri (mni), Manx (glv), Maori (mri), Mapudungun (arn), Marathi (mar), Marshallese (mah), Mesopotamian Arabic (acm), Mi'kmaq (mic), Middle English (1100-1500) (enm), Middle 
French (ca. 1400-1600) (frm), Mikasuki (mik), Min Nan Chinese (nan), Minangkabau (min), Mingrelian (xmf), Mirandese (mwl), Modern Greek (1453-) (ell), Mohawk (moh), Moksha (mdf), Mon (mnw), Mongolian (mon), Mono (USA) (mnr), Morisyen (mfe), Moroccan Arabic (ary), Nahuatl languages (nah), Nande (nnb), Nauru (nau), Navajo (nav), Neapolitan (nap), Nepali (individual language) (npi), Nigerian Fulfulde (fuv), Niuean (niu), Nogai (nog), North Moluccan Malay (max), Northeastern Thai (tts), Northern Frisian (frr), Northern Haida (hdn), Northern Kurdish (kmr), Northern Sami (sme), Norwegian Bokmål (nob), Norwegian Nynorsk (nno), Novial (nov), Nuer (nus), Nyanja (nya), Nyungar (nys), Occitan (post 1500) (oci), Ojibwa (oji), Old Aramaic (up to 700 BCE) (oar), Old English (ca. 450-1100) (ang), Old French (842-ca. 1400) (fro), Old Frisian (ofs), Old Norse (non), Old Russian (orv), Old Saxon (osx), Old Spanish (osp), Old Turkish (otk), Oriya (macrolanguage) (ori), Orizaba Nahuatl (nlv), Ossetian (oss), Ottoman Turkish (1500-1928) (ota), Pahlavi (pal), Palauan (pau), Pali (pli), Pampanga (pam), Pangasinan (pag), Panjabi (pan), Papiamento (pap), Pattani Malay (mfa), Pennsylvania German (pdc), Pfaelzisch (pfl), Phoenician (phn), Picard (pcd), Piemontese (pms), Pipil (ppl), Plains Cree (crk), Polish (pol), Portuguese (por), Prussian (prg), Pulaar (fuc), Pushto (pus), Qashqa'i (qxq), Quechua (que), Quenya (qya), Rapanui (rap), Rohingya (rhg), Romanian (ron), Romansh (roh), Romany (rom), Rundi (run), Russian (rus), Rusyn (rue), Samoan (smo), Samogitian (sgs), Sango (sag), Sanskrit (san), Santali (sat), Saraiki (skr), Sardinian (srd), Saterfriesisch (stq), Scots (sco), Scottish Gaelic (gla), Serbian (srp), Seselwa Creole French (crs), Shona (sna), Shuswap (shs), Sichuan Yi (iii), Sicilian (scn), Silesian (szl), Sindarin (sjn), Sindhi (snd), Sinhala (sin), Slovak (slk), Slovenian (slv), Somali (som), Southern Altai (alt), Southern Haida (hax), Southern Kurdish (sdh), Southern Sami 
(sma), Southern Sotho (sot), Southern Subanen (laa), Spanish (spa), Sranan Tongo (srn), Standard Latvian (lvs), Standard Malay (zsm), Standard Moroccan Tamazight (zgh), Sumerian (sux), Sundanese (sun), Swabian (swg), Swahili (individual language) (swh), Swati (ssw), Swedish (swe), Swiss German (gsw), Sylheti (syl), Tachawit (shy), Tachelhit (shi), Tagal Murut (mvv), Tagalog (tgl), Tahaggart Tamahaq (thv), Tahitian (tah), Tajik (tgk), Talossan (tzl), Talysh (tly), Tamil (tam), Tarifit (rif), Tase Naga (nst), Tatar (tat), Telugu (tel), Temuan (tmw), Tetum (tet), Thai (tha), Tibetan (bod), Tigre (tig), Tigrinya (tir), Tohono O'odham (ood), Tok Pisin (tpi), Tokelau (tkl), Toki Pona (tok), Tonga (Tonga Islands) (ton), Tonga (Zambia) (toi), Tsonga (tso), Tswana (tsn), Tumbuka (tum), Tupinambá (tpn / tpw), Turkish (tur), Turkmen (tuk), Tuvalu (tvl), Tuvinian (tyv), Uab Meto (aoz), Udmurt (udm), Uighur (uig), Ukrainian (ukr), Umbundu (umb), Upper Sorbian (hsb), Urdu (urd), Urhobo (urh), Uzbek (uzb), Venetian (vec), Veps (vep), Vietnamese (vie), Volapük (vol), Võro (vro), Walloon (wln), Waray (Philippines) (war), Wayuu (guc), Welsh (cym), Western Armenian (hyw), Western Frisian (fry), Western Mari (mrj), Western Panjabi (pnb), Wolof (wol), Wu Chinese (wuu), Xhosa (xho), Xiang Chinese (hsn), Yakut (sah), Yiddish (yid), Yoruba (yor), Yucateco (yua), Yue Chinese (yue), Zaza (zza), Zeeuws (zea), Zulu (zul)
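Each record pairs a source and target sentence with their language codes (per the `lang_src`/`lang_tgt` fields in the metadata above). A minimal sketch of selecting one language pair, using invented sample rows rather than the real data:

```python
# Filter parallel sentence pairs by source/target language, using the
# field names from the dataset metadata (the sample rows are invented).
rows = [
    {'lang_src': 'eng', 'sentence_src': 'Hello.', 'lang_tgt': 'fra', 'sentence_tgt': 'Bonjour.'},
    {'lang_src': 'eng', 'sentence_src': 'Thanks.', 'lang_tgt': 'deu', 'sentence_tgt': 'Danke.'},
]

def filter_pair(rows, src, tgt):
    '''Keep only rows translating from `src` to `tgt` (ISO 639-3 codes).'''
    return [r for r in rows if r['lang_src'] == src and r['lang_tgt'] == tgt]

print(filter_pair(rows, 'eng', 'fra')[0]['sentence_tgt'])  # Bonjour.
```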
### Contact
The dataset was processed and brought to Hugging Face by [ymoslem](https://huggingface.co/ymoslem)."
kingkim/DS_Building_SecurityManual_V2,"{""language"": [""ko""], ""dataset_info"": {""features"": [{""name"": ""text"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 53319, ""num_examples"": 300}], ""download_size"": 6765, ""dataset_size"": 53319}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}]}",
FrancophonIA/Deltacorpus_1.1,"{""language"": [""af"", ""sq"", ""am"", ""ar"", ""an"", ""hy"", ""ast"", ""az"", ""eu"", ""be"", ""bn"", ""bpy"", ""bs"", ""br"", ""bg"", ""ca"", ""cv"", ""hr"", ""cs"", ""da"", ""diq"", ""nl"", ""arz"", ""en"", ""eo"", ""et"", ""fo"", ""hif"", ""fi"", ""fr"", ""gl"", ""ka"", ""de"", ""glk"", ""gu"", ""ht"", ""he"", ""hi"", ""hu"", ""is"", ""io"", ""id"", ""ia"", ""ga"", ""it"", ""jv"", ""kn"", ""kk"", ""ko"", ""ku"", ""la"", ""lv"", ""li"", ""lt"", ""lmo"", ""nds"", ""lb"", ""mk"", ""mg"", ""ms"", ""ml"", ""mi"", ""mr"", ""el"", ""mn"", ""nap"", ""ne"", ""new"", ""no"", ""nn"", ""pam"", ""fa"", ""pms"", ""pl"", ""pt"", ""ro"", ""ru"", ""sco"", ""gd"", ""sr"", ""sh"", ""sk"", ""sl"", ""es"", ""su"", ""sw"", ""sv"", ""gsw"", ""tl"", ""tg"", ""ta"", ""tt"", ""te"", ""tr"", ""uk"", ""hsb"", ""ur"", ""uz"", ""vec"", ""vi"", ""vo"", ""wa"", ""war"", ""cy"", ""fy"", ""sah"", ""yi""], ""multilinguality"": [""multilingual""], ""license"": ""cc-by-sa-4.0"", ""configs"": [{""config_name"": ""afr"", ""data_files"": [{""split"": ""train"", ""path"": ""afr.txt/afr.txt""}]}, {""config_name"": ""amh"", ""data_files"": [{""split"": ""train"", ""path"": ""amh.txt/amh.txt""}]}, {""config_name"": ""ara"", ""data_files"": [{""split"": ""train"", ""path"": ""ara.txt/ara.txt""}]}, {""config_name"": ""arg"", ""data_files"": [{""split"": ""train"", ""path"": ""arg.txt/arg.txt""}]}, {""config_name"": ""arz"", ""data_files"": [{""split"": ""train"", ""path"": ""arz.txt/arz.txt""}]}, {""config_name"": ""ast"", ""data_files"": [{""split"": ""train"", ""path"": ""ast.txt/ast.txt""}]}, {""config_name"": ""aze"", ""data_files"": [{""split"": ""train"", ""path"": ""aze.txt/aze.txt""}]}, {""config_name"": ""bel"", ""data_files"": [{""split"": ""train"", ""path"": ""bel.txt/bel.txt""}]}, {""config_name"": ""ben"", ""data_files"": [{""split"": ""train"", ""path"": ""ben.txt/ben.txt""}]}, {""config_name"": ""bos"", ""data_files"": [{""split"": ""train"", ""path"": 
""bos.txt/bos.txt""}]}, {""config_name"": ""bpy"", ""data_files"": [{""split"": ""train"", ""path"": ""bpy.txt/bpy.txt""}]}, {""config_name"": ""bre"", ""data_files"": [{""split"": ""train"", ""path"": ""bre.txt/bre.txt""}]}, {""config_name"": ""bul"", ""data_files"": [{""split"": ""train"", ""path"": ""bul.txt/bul.txt""}]}, {""config_name"": ""cat"", ""data_files"": [{""split"": ""train"", ""path"": ""cat.txt/cat.txt""}]}, {""config_name"": ""ces"", ""data_files"": [{""split"": ""train"", ""path"": ""ces.txt/ces.txt""}]}, {""config_name"": ""chv"", ""data_files"": [{""split"": ""train"", ""path"": ""chv.txt/chv.txt""}]}, {""config_name"": ""cym"", ""data_files"": [{""split"": ""train"", ""path"": ""cym.txt/cym.txt""}]}, {""config_name"": ""dan"", ""data_files"": [{""split"": ""train"", ""path"": ""dan.txt/dan.txt""}]}, {""config_name"": ""deu"", ""data_files"": [{""split"": ""train"", ""path"": ""deu.txt/deu.txt""}]}, {""config_name"": ""diq"", ""data_files"": [{""split"": ""train"", ""path"": ""diq.txt/diq.txt""}]}, {""config_name"": ""ell"", ""data_files"": [{""split"": ""train"", ""path"": ""ell.txt/ell.txt""}]}, {""config_name"": ""eng"", ""data_files"": [{""split"": ""train"", ""path"": ""eng.txt/eng.txt""}]}, {""config_name"": ""epo"", ""data_files"": [{""split"": ""train"", ""path"": ""epo.txt/epo.txt""}]}, {""config_name"": ""est"", ""data_files"": [{""split"": ""train"", ""path"": ""est.txt/est.txt""}]}, {""config_name"": ""eus"", ""data_files"": [{""split"": ""train"", ""path"": ""eus.txt/eus.txt""}]}, {""config_name"": ""fao"", ""data_files"": [{""split"": ""train"", ""path"": ""fao.txt/fao.txt""}]}, {""config_name"": ""fas"", ""data_files"": [{""split"": ""train"", ""path"": ""fas.txt/fas.txt""}]}, {""config_name"": ""fin"", ""data_files"": [{""split"": ""train"", ""path"": ""fin.txt/fin.txt""}]}, {""config_name"": ""fra"", ""data_files"": [{""split"": ""train"", ""path"": ""fra.txt/fra.txt""}]}, {""config_name"": ""fry"", ""data_files"": [{""split"": 
""train"", ""path"": ""fry.txt/fry.txt""}]}, {""config_name"": ""gla"", ""data_files"": [{""split"": ""train"", ""path"": ""gla.txt/gla.txt""}]}, {""config_name"": ""gle"", ""data_files"": [{""split"": ""train"", ""path"": ""gle.txt/gle.txt""}]}, {""config_name"": ""glg"", ""data_files"": [{""split"": ""train"", ""path"": ""glg.txt/glg.txt""}]}, {""config_name"": ""glk"", ""data_files"": [{""split"": ""train"", ""path"": ""glk.txt/glk.txt""}]}, {""config_name"": ""gsw"", ""data_files"": [{""split"": ""train"", ""path"": ""gsw.txt/gsw.txt""}]}, {""config_name"": ""guj"", ""data_files"": [{""split"": ""train"", ""path"": ""guj.txt/guj.txt""}]}, {""config_name"": ""hat"", ""data_files"": [{""split"": ""train"", ""path"": ""hat.txt/hat.txt""}]}, {""config_name"": ""hbs"", ""data_files"": [{""split"": ""train"", ""path"": ""hbs.txt/hbs.txt""}]}, {""config_name"": ""heb"", ""data_files"": [{""split"": ""train"", ""path"": ""heb.txt/heb.txt""}]}, {""config_name"": ""hif"", ""data_files"": [{""split"": ""train"", ""path"": ""hif.txt/hif.txt""}]}, {""config_name"": ""hin"", ""data_files"": [{""split"": ""train"", ""path"": ""hin.txt/hin.txt""}]}, {""config_name"": ""hrv"", ""data_files"": [{""split"": ""train"", ""path"": ""hrv.txt/hrv.txt""}]}, {""config_name"": ""hsb"", ""data_files"": [{""split"": ""train"", ""path"": ""hsb.txt/hsb.txt""}]}, {""config_name"": ""hun"", ""data_files"": [{""split"": ""train"", ""path"": ""hun.txt/hun.txt""}]}, {""config_name"": ""hye"", ""data_files"": [{""split"": ""train"", ""path"": ""hye.txt/hye.txt""}]}, {""config_name"": ""ido"", ""data_files"": [{""split"": ""train"", ""path"": ""ido.txt/ido.txt""}]}, {""config_name"": ""ina"", ""data_files"": [{""split"": ""train"", ""path"": ""ina.txt/ina.txt""}]}, {""config_name"": ""ind"", ""data_files"": [{""split"": ""train"", ""path"": ""ind.txt/ind.txt""}]}, {""config_name"": ""isl"", ""data_files"": [{""split"": ""train"", ""path"": ""isl.txt/isl.txt""}]}, {""config_name"": ""ita"", 
""data_files"": [{""split"": ""train"", ""path"": ""ita.txt/ita.txt""}]}, {""config_name"": ""jav"", ""data_files"": [{""split"": ""train"", ""path"": ""jav.txt/jav.txt""}]}, {""config_name"": ""kan"", ""data_files"": [{""split"": ""train"", ""path"": ""kan.txt/kan.txt""}]}, {""config_name"": ""kat"", ""data_files"": [{""split"": ""train"", ""path"": ""kat.txt/kat.txt""}]}, {""config_name"": ""kaz"", ""data_files"": [{""split"": ""train"", ""path"": ""kaz.txt/kaz.txt""}]}, {""config_name"": ""kor"", ""data_files"": [{""split"": ""train"", ""path"": ""kor.txt/kor.txt""}]}, {""config_name"": ""kur"", ""data_files"": [{""split"": ""train"", ""path"": ""kur.txt/kur.txt""}]}, {""config_name"": ""lat"", ""data_files"": [{""split"": ""train"", ""path"": ""lat.txt/lat.txt""}]}, {""config_name"": ""lav"", ""data_files"": [{""split"": ""train"", ""path"": ""lav.txt/lav.txt""}]}, {""config_name"": ""lim"", ""data_files"": [{""split"": ""train"", ""path"": ""lim.txt/lim.txt""}]}, {""config_name"": ""lit"", ""data_files"": [{""split"": ""train"", ""path"": ""lit.txt/lit.txt""}]}, {""config_name"": ""lmo"", ""data_files"": [{""split"": ""train"", ""path"": ""lmo.txt/lmo.txt""}]}, {""config_name"": ""ltz"", ""data_files"": [{""split"": ""train"", ""path"": ""ltz.txt/ltz.txt""}]}, {""config_name"": ""mal"", ""data_files"": [{""split"": ""train"", ""path"": ""mal.txt/mal.txt""}]}, {""config_name"": ""mar"", ""data_files"": [{""split"": ""train"", ""path"": ""mar.txt/mar.txt""}]}, {""config_name"": ""mkd"", ""data_files"": [{""split"": ""train"", ""path"": ""mkd.txt/mkd.txt""}]}, {""config_name"": ""mlg"", ""data_files"": [{""split"": ""train"", ""path"": ""mlg.txt/mlg.txt""}]}, {""config_name"": ""mon"", ""data_files"": [{""split"": ""train"", ""path"": ""mon.txt/mon.txt""}]}, {""config_name"": ""mri"", ""data_files"": [{""split"": ""train"", ""path"": ""mri.txt/mri.txt""}]}, {""config_name"": ""msa"", ""data_files"": [{""split"": ""train"", ""path"": ""msa.txt/msa.txt""}]}, 
{""config_name"": ""nap"", ""data_files"": [{""split"": ""train"", ""path"": ""nap.txt/nap.txt""}]}, {""config_name"": ""nds"", ""data_files"": [{""split"": ""train"", ""path"": ""nds.txt/nds.txt""}]}, {""config_name"": ""nep"", ""data_files"": [{""split"": ""train"", ""path"": ""nep.txt/nep.txt""}]}, {""config_name"": ""new"", ""data_files"": [{""split"": ""train"", ""path"": ""new.txt/new.txt""}]}, {""config_name"": ""nld"", ""data_files"": [{""split"": ""train"", ""path"": ""nld.txt/nld.txt""}]}, {""config_name"": ""nno"", ""data_files"": [{""split"": ""train"", ""path"": ""nno.txt/nno.txt""}]}, {""config_name"": ""nor"", ""data_files"": [{""split"": ""train"", ""path"": ""nor.txt/nor.txt""}]}, {""config_name"": ""pam"", ""data_files"": [{""split"": ""train"", ""path"": ""pam.txt/pam.txt""}]}, {""config_name"": ""pms"", ""data_files"": [{""split"": ""train"", ""path"": ""pms.txt/pms.txt""}]}, {""config_name"": ""pol"", ""data_files"": [{""split"": ""train"", ""path"": ""pol.txt/pol.txt""}]}, {""config_name"": ""por"", ""data_files"": [{""split"": ""train"", ""path"": ""por.txt/por.txt""}]}, {""config_name"": ""ron"", ""data_files"": [{""split"": ""train"", ""path"": ""ron.txt/ron.txt""}]}, {""config_name"": ""rus"", ""data_files"": [{""split"": ""train"", ""path"": ""rus.txt/rus.txt""}]}, {""config_name"": ""sah"", ""data_files"": [{""split"": ""train"", ""path"": ""sah.txt/sah.txt""}]}, {""config_name"": ""sco"", ""data_files"": [{""split"": ""train"", ""path"": ""sco.txt/sco.txt""}]}, {""config_name"": ""slk"", ""data_files"": [{""split"": ""train"", ""path"": ""slk.txt/slk.txt""}]}, {""config_name"": ""slv"", ""data_files"": [{""split"": ""train"", ""path"": ""slv.txt/slv.txt""}]}, {""config_name"": ""spa"", ""data_files"": [{""split"": ""train"", ""path"": ""spa.txt/spa.txt""}]}, {""config_name"": ""sqi"", ""data_files"": [{""split"": ""train"", ""path"": ""sqi.txt/sqi.txt""}]}, {""config_name"": ""srp"", ""data_files"": [{""split"": ""train"", ""path"": 
""srp.txt/srp.txt""}]}, {""config_name"": ""sun"", ""data_files"": [{""split"": ""train"", ""path"": ""sun.txt/sun.txt""}]}, {""config_name"": ""swa"", ""data_files"": [{""split"": ""train"", ""path"": ""swa.txt/swa.txt""}]}, {""config_name"": ""swe"", ""data_files"": [{""split"": ""train"", ""path"": ""swe.txt/swe.txt""}]}, {""config_name"": ""tam"", ""data_files"": [{""split"": ""train"", ""path"": ""tam.txt/tam.txt""}]}, {""config_name"": ""tat"", ""data_files"": [{""split"": ""train"", ""path"": ""tat.txt/tat.txt""}]}, {""config_name"": ""tel"", ""data_files"": [{""split"": ""train"", ""path"": ""tel.txt/tel.txt""}]}, {""config_name"": ""tgk"", ""data_files"": [{""split"": ""train"", ""path"": ""tgk.txt/tgk.txt""}]}, {""config_name"": ""tgl"", ""data_files"": [{""split"": ""train"", ""path"": ""tgl.txt/tgl.txt""}]}, {""config_name"": ""tur"", ""data_files"": [{""split"": ""train"", ""path"": ""tur.txt/tur.txt""}]}, {""config_name"": ""ukr"", ""data_files"": [{""split"": ""train"", ""path"": ""ukr.txt/ukr.txt""}]}, {""config_name"": ""urd"", ""data_files"": [{""split"": ""train"", ""path"": ""urd.txt/urd.txt""}]}, {""config_name"": ""uzb"", ""data_files"": [{""split"": ""train"", ""path"": ""uzb.txt/uzb.txt""}]}, {""config_name"": ""vec"", ""data_files"": [{""split"": ""train"", ""path"": ""vec.txt/vec.txt""}]}, {""config_name"": ""vie"", ""data_files"": [{""split"": ""train"", ""path"": ""vie.txt/vie.txt""}]}, {""config_name"": ""vol"", ""data_files"": [{""split"": ""train"", ""path"": ""vol.txt/vol.txt""}]}, {""config_name"": ""war"", ""data_files"": [{""split"": ""train"", ""path"": ""war.txt/war.txt""}]}, {""config_name"": ""wln"", ""data_files"": [{""split"": ""train"", ""path"": ""wln.txt/wln.txt""}]}, {""config_name"": ""yid"", ""data_files"": [{""split"": ""train"", ""path"": ""yid.txt/yid.txt""}]}], ""task_categories"": [""token-classification""]}","> [!NOTE]
> Dataset origin: https://lindat.cz/repository/xmlui/handle/11234/1-1743
## Description
Texts in 107 languages from the W2C corpus (http://hdl.handle.net/11858/00-097C-0000-0022-6133-9), first 1,000,000 tokens per language, tagged by the delexicalized tagger described in Yu et al. (2016, LREC, Portorož, Slovenia).
Changes in version 1.1:
1. Universal Dependencies tagset instead of the older and smaller Google Universal POS tagset.
2. SVM classifier trained on Universal Dependencies 1.2 instead of HamleDT 2.0.
3. Balto-Slavic languages, Germanic languages and Romance languages were tagged by classifier trained only on the respective group of languages. Other languages were tagged by a classifier trained on all available languages. The ""c7"" combination from version 1.0 is no longer used.
Universal POS tags as defined by the Universal Dependencies project.
For more information, see http://universaldependencies.org/.
VERB - content verbs (all forms)
AUX - auxiliary verbs (all forms)
NOUN - common nouns
PROPN - proper nouns
PRON - pronouns
ADJ - adjectives
ADV - adverbs
ADP - adpositions (prepositions and postpositions)
CONJ - coordinating conjunctions
SCONJ - subordinating conjunctions
DET - determiners
NUM - cardinal numbers
PART - particles
INTJ - interjections
SYM - symbols
X - other: foreign words, typos, unknown
PUNCT - punctuation
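Assuming the tagged files contain one `token<TAB>tag` pair per line (an assumption about the file format, not confirmed by this card), tallying the Universal POS tags above might look like:

```python
from collections import Counter

# Tally Universal POS tags from tagged text, assuming one
# "token<TAB>tag" pair per line (an assumed format).
sample = 'The\tDET\ncat\tNOUN\nsleeps\tVERB\n.\tPUNCT\n'

def tag_counts(text):
    counts = Counter()
    for line in text.splitlines():
        if '\t' in line:
            _, tag = line.rsplit('\t', 1)
            counts[tag] += 1
    return counts

print(tag_counts(sample)['NOUN'])  # 1
```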
## Citation
```
@misc{11234/1-1743,
title = {Deltacorpus 1.1},
author = {Mare{\v c}ek, David and Yu, Zhiwei and Zeman, Daniel and {\v Z}abokrtsk{\'y}, Zden{\v e}k},
url = {http://hdl.handle.net/11234/1-1743},
note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2016} }
```"
MaChangH/QA-Youtube_categorization_1000-each,"{""dataset_info"": {""features"": [{""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""input"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 7806908, ""num_examples"": 13000}], ""download_size"": 3918129, ""dataset_size"": 7806908}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""language"": [""ko""], ""tags"": [""korean"", ""youtube ""], ""pretty_name"": ""Generated by MaChangH/QA-Youtube_categorization""}","generated by MaChangH/QA-Youtube_categorization
Total Entries: 13000
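The per-category percentages below are simply `count / total * 100`; with 13 balanced categories of 1,000 entries each, every share is the same. A quick sketch (only 3 of the 13 category names listed, for brevity):

```python
# Recompute the per-category distribution shown below; each of the
# 13 categories holds 1000 of the 13000 entries.
categories = ['Comedy', 'Education', 'Gaming']  # 3 of the 13 real categories
total = 13000
dist = {c: {'count': 1000, 'percentage': 1000 / total * 100} for c in categories}
print(round(dist['Gaming']['percentage'], 4))  # 7.6923
```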
{'Autos & Vehicles': {'count': 1000, 'percentage': 7.6923076923076925},
'Comedy': {'count': 1000, 'percentage': 7.6923076923076925},
'Education': {'count': 1000, 'percentage': 7.6923076923076925},
'Entertainment': {'count': 1000, 'percentage': 7.6923076923076925},
'Film & Animation': {'count': 1000, 'percentage': 7.6923076923076925},
'Gaming': {'count': 1000, 'percentage': 7.6923076923076925},
'Howto & Style': {'count': 1000, 'percentage': 7.6923076923076925},
'Music': {'count': 1000, 'percentage': 7.6923076923076925},
'News & Politics': {'count': 1000, 'percentage': 7.6923076923076925},
'People & Blogs': {'count': 1000, 'percentage': 7.6923076923076925},
'Science & Technology': {'count': 1000, 'percentage': 7.6923076923076925},
'Sports': {'count': 1000, 'percentage': 7.6923076923076925},
'Travel & Events': {'count': 1000, 'percentage': 7.6923076923076925}}"
ziozzang/deepl-trans-FR-KO,"{""task_categories"": [""translation""], ""language"": [""ko"", ""fr""]}","This dataset consists of Wikipedia articles translated with DeepL, automatically aggregated.
# String/Corpus pairs
From FR/French to KO/Korean.
# Quality Filtering
- Stripped all HTML tags.
- Removed references and annotation marks.
- Filtered by string length.
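A minimal sketch of the filtering steps listed above; the regexes and length bounds are illustrative assumptions, not the dataset's actual pipeline:

```python
import re

def clean(text, min_len=10, max_len=2000):
    '''Strip HTML tags and bracketed reference marks, then filter by length.
    The regexes and length bounds are illustrative assumptions.'''
    text = re.sub(r'<[^>]+>', '', text)   # drop HTML tags
    text = re.sub(r'\[\d+\]', '', text)   # drop reference marks like [12]
    text = text.strip()
    return text if min_len <= len(text) <= max_len else None

print(clean('<p>Bonjour le monde, ceci est un test.[3]</p>'))
```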
---
The strings/corpus were aggregated from wikipedia(pt) and translated with DeepL.
All data collected by Jioh L. Jung.
license: mit
---"
PeterGraebner/LDNOOBW_V2,"{""license"": ""cc0-1.0"", ""language"": [""af"", ""az"", ""am"", ""be"", ""bg"", ""dz"", ""eu"", ""my"", ""ca"", ""cs"", ""cy"", ""hr"", ""zh"", ""da"", ""de"", ""nl"", ""el"", ""en"", ""eo"", ""es"", ""et"", ""fa"", ""fi"", ""fr"", ""gl"", ""gd"", ""hi"", ""hy"", ""hu"", ""id"", ""is"", ""it"", ""ja"", ""ko"", ""la"", ""lt"", ""lv"", ""mi"", ""mk"", ""ml"", ""ms"", ""mt"", ""mr"", ""mn"", ""no"", ""pl"", ""pt"", ""ro"", ""ru"", ""sk"", ""sl"", ""sm"", ""sq"", ""te"", ""ta"", ""to"", ""tr"", ""uk"", ""uz"", ""vi"", ""yid"", ""zu""], ""pretty_name"": ""List of Dirty Naughty Obscene and Otherwise Bad Words V2"", ""size_categories"": [""10K Written with [StackEdit](https://stackedit.io/).
> ## [List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words_V2](https://github.com/LDNOOBWV2/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words_V2#list-of-dirty-naughty-obscene-and-otherwise-bad-words_v2)
This list of words is a follow-up and extension of the Shutterstock [List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/tree/master), as that list is no longer maintained. As there are many profanity word lists around on the web (and many are not maintained), their content was grabbed and joined together here (see the source list below).
As opinions on which words belong in such lists vary between cultures, languages, and geographies, feel free to extend them to your needs; hopefully this will gather a lot of feedback.
The lists need reviews from native speakers. It would be great to collect more words and even add more languages (**75** right now, with over **50k words** altogether).
The long list of English words shows that people have gotten very creative in getting around profanity filters. The best way to use these hard-coded word lists is as an additional quality criterion for filtering texts, as is done in the [RedPajama](https://github.com/togethercomputer/RedPajama-Data) dataset, or as training data for ML-based profanity filters.
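As a sketch of the quality-criterion use, one can score a document by the fraction of its tokens that appear in a wordlist; the word set and threshold below are invented placeholders, not part of this dataset:

```python
def bad_word_fraction(text, bad_words):
    '''Fraction of whitespace-split tokens found in the (lowercase) wordlist.'''
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in bad_words for t in tokens) / len(tokens)

bad_words = {'badword', 'verybadword'}  # placeholder; load e.g. en.txt in practice
doc = 'this short text contains one badword token'
flagged = bad_word_fraction(doc, bad_words) > 0.1  # threshold is an arbitrary choice
print(flagged)  # True
```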
### Structure and Format
- filename is the **ISO code** of the language
- file extension is **"".txt""**
- **utf-8** encoded
- all words are **lowercase**
- one expression per line
- if the language has non-ASCII characters, a transcription made with Python's ""anyascii"" is included in the wordlist
- for leet speak there are Python lists of the most common leet replacements in naughty and slang words; see LEET.md for details
- for English and French there are wordlists with these replacements being already done
- all words contained in the English ""***en.txt***"" file are **excluded** from the other language files
- for often-used words whose classification as profane is doubtful, there is a separate CSV file
- the csv-file is: [questionable_international_words.csv](questionable_international_words.csv)
- separator is the comma ""**,**""
- **51** words for several languages (see table below)
- the header line contains the iso-code of the language, a classification column (*category*), and a *remark* column
- these words are **NOT** included in the language-text-files, e.g. ""*.txt""
- when I couldn't find a translation, the field contains the string: ****
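Undoing leet substitutions before matching against the wordlists can be sketched as below; the replacement map here is an assumed set of common substitutions, not the actual contents of LEET.md:

```python
# Undo common leet substitutions before wordlist matching.
# This map is an illustrative guess; see LEET.md for the real replacements.
LEET = str.maketrans({'4': 'a', '3': 'e', '1': 'i', '0': 'o',
                      '5': 's', '7': 't', '@': 'a', '$': 's'})

def deleet(word):
    return word.lower().translate(LEET)

print(deleet('b4dw0rd'))  # badword
```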
### Languages Files Overview
language | count | filename | in csv-file | remark
--- | --- | --- | --- | ---
[Afrikaans](data/af.txt) | 256 | af | Y|
[Albanian](data/sq.txt) | 223 | sq | Y|
[Algerian](data/dz.txt) | 86 | dz | N|
[Amharic](data/am.txt) | 71 | am | N|
[Arabic](data/ar.txt) |1609 | ar | N|
[Armenian](data/hy.txt) | 440 | hy | Y|
[Australian Kriol](data/rop.txt) | 16 | rop| N|
[Azerbaijani](data/az.txt) | 37 | az | N|
[Basque](data/eu.txt) | 48 | eu | N|
[Belarusian](data/be.txt) | 236 | be | N|
[Bulgarian](data/bg.txt) | 535 | bg | Y|
[Burmese](data/my.txt) | 133 | my | N|
[Cambodian](data/kh.txt) | 264 | kh | N|
[Catalan](data/ca.txt) | 163 | ca | Y|
[Cebuano](data/ceb.txt) | 18 | ceb| N|
[Chinese](data/zh.txt) |3090 | zh | Y|
[Croatian](data/hr.txt) | 275 | hr | Y|
[Czech](data/cs.txt) | 343 | cs | Y|
[Danish](data/da.txt) | 227 | da | Y|
[Dutch](data/nl.txt) |1224 | nl | Y|
[English](data/en.txt) |12996| en | Y| various spelling variations, does not contain Spanish (es) words
[English](data/en_leet.txt) |12532| en | Y| version with replaced leet letters, see LEET.md
[Esperanto](data/eo.txt) | 60 | eo | N|
[Estonian](data/et.txt) | 203 | et | Y|
[Filipino](data/fil.txt) | 165 | fil| Y|
[Finnish](data/fi.txt) | 368 | fi | Y|
[French](data/fr.txt) |4056 | fr | Y| many spelling variations
[French](data/fr.txt) |2380 | fr | Y| version with replaced leet letters, see LEET.md
[Gaelic](data/gd.txt) | 105 | gd | N|
[Galician](data/gl.txt) | 89 | gl | N|
[German](data/de.txt) | 685 | de | Y|
[Greek](data/el.txt) | 417 | el | Y|
[Hebrew](data/yid.txt) | 173 | yid| N|
[Hindi](data/hi.txt) |1102 | hi | Y|
[Hungarian](data/hu.txt) | 433 | hu | Y|
[Icelandic](data/is.txt) | 208 | is | Y|
[Italian](data/it.txt) |1710 | it | Y|
[Indonesian](data/id.txt) | 582 | id | Y|
[Japanese](data/ja.txt) | 783 | ja | Y|
[Kabyle](data/kab.txt) | 31 | kab| N|
[Klingon](data/tlh.txt) | 33 | tlh| N|
[Korean](data/ko.txt) |6125 | ko | Y|
[Latin](data/la.txt) | 103 | la | N|
[Latvian](data/lv.txt) | 280 | lv | Y|
[Lithuanian](data/lt.txt) | 211 | lt | Y|
[Macedonian](data/mk.txt) | 294 | mk | N|
[Malay](data/ms.txt) | 201 | ms | Y|
[Malayalam](data/ml.txt) | 338 | ml | Y|
[Maltese](data/mt.txt) | 132 | mt | Y|
[Maori](data/mi.txt) | 75 | mi | Y|
[Marathi](data/mr.txt) | 453 | mr | Y|
[Mongolian](data/mn.txt) | 164 | mn | N|
[Norwegian](data/no.txt) | 341 | no | Y|
[Persian](data/fa.txt) |1128 | fa | N|
[Pitcairn-Norfolk](data/pih.txt) | 14 | pih| N|
[Piya-Kwonci](data/piy.txt) | 13 | piy| N|
[Polish](data/pl.txt) |12639 | pl | Y| different grammatical variations
[Portuguese](data/pt.txt) | 629 | pt | Y| including Brazilian
[Romanian](data/ro.txt) | 341 | ro | Y|
[Russian](data/ru.txt) |9569 | ru | Y|
[Samoan](data/sm.txt) | 116 | sm | Y|
[Serbian](data/sr.txt) | 459 | sr | Y| sr_k & sr_l in csv file
[Slovak](data/sk.txt) | 586 | sk | Y|
[Slovene](data/sl.txt) | 186 | sl | Y|
[Spanish](data/es.txt) |1804 | es | Y| including Middle- and South American
[Swedish](data/sv.txt) | 304 | sv | Y|
[Tamil](data/ta.txt) | 143 | ta | N|
[Telugu](data/te.txt) | 509 | te | Y|
[Tetum](data/tet.txt) | 11 | tet| N|
[Thai](data/th.txt) |4377 | th | Y|
[Tongan](data/to.txt) | 68 | to | N|
[Turkish](data/tr.txt) | 491 | tr | Y|
[Ukrainian](data/uk.txt) | 377 | uk | Y|
[Uzbek](data/uz.txt) | 102 | uz | N|
[Vietnamese](data/vi.txt) |1031 | vi | Y|
[Welsh](data/cy.txt) | 169 | cy | Y|
[Zulu](data/zu.txt) | 115 | zu | N|
### Categories in *questionable_international_words.csv*
The categories used are:
- **cul**: cultural differences
- **dm**: drugs & medicine
- **his**: historical
- **leg**: Legislative term
- **mab**: medical, anatomic, biological term
- **pol**: political
- **rel**: religious
- **so**: sexual orientation
- **vm**: various meanings
This is just an ad hoc classification where several expressions can be in different categories."
Hyungmo/Nurier,"{""language"": [""ko""]}","DatasetDict({
train: Dataset({
features: ['questions', 'answers'],
num_rows: 22194
})
test: Dataset({
features: ['questions', 'answers'],
num_rows: 2740
})
validation: Dataset({
features: ['questions', 'answers'],
num_rows: 2466
})
})"
williamjeong2/msmarco-triplets-ko-v2,"{""task_categories"": [""feature-extraction""], ""language"": [""ko""]}","# MS MARCO Triplets - Korean Version (v2)
## Introduction
Welcome to the Korean version of the MS MARCO Triplets dataset. This project aims to provide a comprehensive, high-quality translation of the original MS MARCO Triplets dataset into Korean, facilitating natural language processing and information retrieval research for the Korean language community.
## Dataset Description
The MS MARCO (Microsoft Machine Reading Comprehension) Triplets dataset is a large-scale dataset designed for information retrieval tasks. It consists of query-document pairs along with relevance judgments, making it an invaluable resource for training and evaluating information retrieval systems.
This Korean version maintains the structure and integrity of the original dataset while offering content in the Korean language. It includes:
- Queries: Questions or search queries in Korean
- Positive Documents: Relevant passages or documents translated into Korean
- Negative Documents: Non-relevant passages or documents translated into Korean
### Key Features
1. Large-scale dataset suitable for machine learning model training
2. Diverse range of topics covered
3. Human-quality translations preserving semantic meanings
4. Maintained triplet structure for relevance assessments
### Data Format
The dataset is provided in JSONL (JSON Lines) format. Each line in the file represents a single data point with the following structure:
```json
{""query"": ""Korean query"", ""pos"": [""Positive Korean sentence""], ""neg"": [""Negative Korean sentence 1"", ""Negative Korean sentence 2"", ...]}
```
- `query`: A string containing the Korean query
- `pos`: An array containing a single string, which is the positive (relevant) document for the query
- `neg`: An array containing one or more strings, each representing a negative (non-relevant) document for the query
## Translation Process
### Methodology
The translation was performed using the [Translation-EnKo/EXAONE-3.0-7.8B-Instruct-translation-general12m-en-ko-e1-b64-trc1400k-e1-b64-trc313eval45-e2-b16](https://huggingface.co/Translation-EnKo/EXAONE-3.0-7.8B-Instruct-translation-general12m-en-ko-e1-b64-trc1400k-e1-b64-trc313eval45-e2-b16) model. This advanced language model was chosen for its capability to handle nuanced translations and maintain contextual accuracy.
### Known Limitations
During the translation process, we encountered a limitation that affected some data points:
- Repetitive Translations: Due to constraints in the translation model, some data points contain repetitive phrases or sentences in their Korean translations. This occurs when the model reaches its output limit and repeats the last translated segment.
Users should be aware of this limitation when working with the dataset. While these instances are present, they represent a small portion of the overall dataset and should not significantly impact its utility for most applications.
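A lightweight heuristic can flag these looping outputs before training. The sketch below is our own assumption about what such a filter could look like; it is not part of the dataset's tooling:

```python
def has_trailing_repetition(text: str, min_repeats: int = 3, max_unit: int = 30) -> bool:
    """Flag outputs that end with the same chunk repeated min_repeats or more times."""
    for unit_len in range(1, max_unit + 1):
        unit = text[-unit_len:]
        # skip whitespace-only units; check for back-to-back repeats of the tail
        if unit.strip() and text.endswith(unit * min_repeats):
            return True
    return False

# A degenerate translation that loops its final phrase:
looping = "질문에 대한 답변입니다. " + "마지막 문장입니다. " * 4
assert has_trailing_repetition(looping)
assert not has_trailing_repetition("자연스러운 번역 문장입니다.")
```

Records flagged this way can be dropped or re-translated before fine-tuning.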
## Usage
This Korean version of the MS MARCO Triplets dataset is suitable for a wide range of natural language processing and information retrieval tasks, including but not limited to:
1. Training and evaluating Korean language information retrieval systems
2. Developing and testing question-answering models for Korean
3. Researching semantic similarity and relevance ranking in Korean
4. Cross-lingual information retrieval studies (when used in conjunction with the original English dataset)
5. Benchmarking machine learning models for Korean language understanding
### Loading the Dataset
To load and use the dataset, you can use libraries such as `jsonlines` in Python. Here's a simple example:
```python
import jsonlines

def load_dataset(file_path):
    data = []
    with jsonlines.open(file_path) as reader:
        for obj in reader:
            data.append(obj)
    return data

# Usage
dataset = load_dataset('path_to_your_jsonl_file.jsonl')

# Accessing data
for item in dataset:
    query = item['query']
    positive_doc = item['pos'][0]
    negative_docs = item['neg']
    # Process your data here
```
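Since `pos` holds a single relevant document and `neg` holds several, each record expands into one (query, positive, negative) triplet per negative, which is the shape most contrastive retrieval trainers expect. A minimal sketch (the expansion itself is our assumption about downstream use, not part of the dataset):

```python
import json

def to_triplets(jsonl_lines):
    """Expand each record into (query, positive, negative) triplets, one per negative."""
    triplets = []
    for line in jsonl_lines:
        rec = json.loads(line)
        positive = rec["pos"][0]
        for negative in rec["neg"]:
            triplets.append((rec["query"], positive, negative))
    return triplets

sample = ['{"query": "예시 질의", "pos": ["관련 문서"], "neg": ["무관한 문서 1", "무관한 문서 2"]}']
print(to_triplets(sample))  # 2 triplets, one per negative document
```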
## Ethical Considerations
When using this dataset, please consider the following ethical points:
1. Bias: While efforts have been made to maintain the integrity of the original dataset, unconscious biases may have been introduced during the translation process.
2. Privacy: Ensure that your use of the dataset complies with relevant privacy laws and regulations.
3. Responsible AI: Develop models and applications with this dataset in a manner that promotes fairness, transparency, and accountability.
## License
This dataset inherits the license of the original MS MARCO Triplets dataset. Users are required to comply with the [terms and conditions](https://microsoft.github.io/msmarco/Submission#terms-and-conditions) set forth in the original license.
## Citation
If you use this dataset in your research or applications, please cite both this Korean version and the original MS MARCO Triplets dataset. Suggested citations:
(For this Korean version)
```
[Jinwoo Jeong]. (2024). MS MARCO Triplets - Korean Version (v2) [Data set]. Hugging Face. https://huggingface.co/datasets/williamjeong2/msmarco-triplets-korean-v2
```
(For the original MS MARCO dataset)
```
@article{bajaj2016ms,
title={Ms marco: A human generated machine reading comprehension dataset},
author={Bajaj, Payal and Campos, Daniel and Craswell, Nick and Deng, Li and Gao, Jianfeng and Liu, Xiaodong and Majumder, Rangan and McNamara, Andrew and Mitra, Bhaskar and Nguyen, Tri and others},
journal={arXiv preprint arXiv:1611.09268},
year={2016}
}
```
## Acknowledgments
We would like to express our gratitude to:
- The creators and maintainers of the original MS MARCO Triplets dataset for providing this valuable resource to the research community.
- The developers of the [Translation-EnKo/EXAONE-3.0-7.8B-Instruct-translation-general12m-en-ko-e1-b64-trc1400k-e1-b64-trc313eval45-e2-b16](https://huggingface.co/Translation-EnKo/EXAONE-3.0-7.8B-Instruct-translation-general12m-en-ko-e1-b64-trc1400k-e1-b64-trc313eval45-e2-b16) model, which made this high-quality Korean translation possible.
- The open-source community for their continuous support and contributions to natural language processing research.
## Contact and Support
For questions, feedback, or issues related to this Korean version of the MS MARCO Triplets dataset, please:
1. Open an issue in this repository
2. Contact the maintainer at [wjd5480@gmail.com]
We welcome contributions and suggestions to improve the quality and usability of this dataset for the Korean NLP community."
eyl45/demo,{},"---
license: mit
task_categories:
- text-generation
language:
- zh
- ko
- ja
pretty_name: tiny_demo
size_categories:
- n<1K
---"
kuuhaku06/RP,"{""language"": [""ko""]}",Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted translated with EzTransXP
lomit/business_report_qa_ko_2023,{},
PrompTartLAB/PTT_advanced_en_ko,"{""task_categories"": [""translation""], ""language"": [""en"", ""ko""], ""size_categories"": [""1K>> from datasets import load_dataset
>>> ds = load_dataset(""jaeyong2/ko-persona-cot-inst"", split=""train"")
>>> ds
Dataset({
features: ['content', 'text'],
num_rows: 240000
})
```
### Development Process
1. Loaded the question dataset from [jaeyong2/persona-inst](https://huggingface.co/datasets/jaeyong2/persona-inst)
2. Used the [Qwen/Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) model to generate answers with chain-of-thought (CoT).
## License
- Qwen/Qwen2.5-72B-Instruct : https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
- proj-persona/PersonaHub : https://spdx.org/licenses/CC-BY-NC-SA-4.0
## Acknowledgement
This research is supported by **TPU Research Cloud program**."
CarrotAI/ko-tree-conversation,"{""license"": ""apache-2.0"", ""task_categories"": [""text-generation""], ""language"": [""ko""], ""size_categories"": [""1Ksystem\n당신은 알리바바 클라우드에서 만든 Qwen입니다. 당신은 유용한 어시스턴트입니다.\nuser가 논리적인 다단계의 추론 과정이 필요한 복잡한 문제를 내면, assistant는 한국어로 단계적으로 풀이를 제시합니다.<|im_end|>\n<|im_start|>user\n""
```
- (Original)
```
""pre_query_template"": ""<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\n""
```
#### Model
- Question: [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
- With vLLM
- Answer: [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
- With [openrouter.ai](https://openrouter.ai/), Provider: [Together.ai](https://www.together.ai/)
※ Together AI's Qwen2.5-72B-Instruct answers are not guaranteed to be reproducible, even with temperature=0.0.
#### Options
- temperature: {1.0, 0.9, 0.8}
- Amount: {40k, 40k, 40k}
- top_p: 0.99
- max_length: 1024(question), 4096(answer)
### Filtering
#### Non-Korean Result
- Removed questions/answers containing Chinese characters
- Removed questions/answers containing full English sentences
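The card does not publish the exact filtering rules, but a filter of this kind can be approximated with Unicode ranges; the patterns below are illustrative assumptions, not the actual pipeline:

```python
import re

HAN_RE = re.compile(r"[\u4e00-\u9fff]")  # CJK Unified Ideographs (Chinese characters)
# crude proxy for a "full English sentence": 5+ Latin words ending in punctuation
ENGLISH_SENT_RE = re.compile(r"\b[A-Za-z]+(?:\s+[A-Za-z]+){4,}[.!?]")

def is_korean_only(text: str) -> bool:
    return not HAN_RE.search(text) and not ENGLISH_SENT_RE.search(text)

assert is_korean_only("이 문장은 한국어로만 되어 있습니다.")
assert not is_korean_only("이 문장에는 漢字가 있습니다.")
assert not is_korean_only("This sentence is entirely in English text.")
```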
#### Question Similarity
- Embedding model: [jinaai/jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3)
- Based on [Korean Embedding Model Benchmark](https://github.com/su-park/mteb_ko_leaderboard) from su-park
- Similarity Distribution

- Removed min_neighbor_distance < 0.1 (about 1%)
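In principle the `min_neighbor_distance` filter can be reproduced with a brute-force cosine computation over the question embeddings. This sketch is our assumption about the method, not the card's actual pipeline (which used jina-embeddings-v3 vectors):

```python
import numpy as np

def min_neighbor_distance(embeddings: np.ndarray) -> np.ndarray:
    """Cosine distance from each row to its nearest *other* row."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -1.0)  # exclude self-similarity
    return 1.0 - sims.max(axis=1)

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))
emb[1] = emb[0] + 1e-6            # plant a near-duplicate pair
dist = min_neighbor_distance(emb)
mask = dist >= 0.1                # drop rows below the card's 0.1 threshold
assert not mask[0] and not mask[1]
```

At the dataset's scale an approximate nearest-neighbor index would replace the dense similarity matrix.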
#### Repetition (Max Token)
- Removed answer length == 4096
#### Question Distribution

---
### Question Quality (Completeness) Based Filtering (Not Applied Yet)
- Some questions are not solvable without additional conditions, and their answers are confusing. They have not been removed yet.
### Difficulty based filtering (Not Applied Yet)
- Some questions are too easy to solve."
BigShort/bok_words_700,"{""language"": [""ko""], ""size_categories"": [""n<1K""], ""task_categories"": [""text-generation""], ""dataset_info"": {""features"": [{""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""output"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 915115, ""num_examples"": 700}], ""download_size"": 494274, ""dataset_size"": 915115}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""tags"": [""finance""]}",
david9dragon9/shp_translations,"{""license"": ""mit"", ""language"": [""en"", ""ko"", ""zh"", ""th""], ""task_categories"": [""question-answering""], ""tags"": [""legal""]}","This dataset contains translations of three splits (askscience, explainlikeimfive, legaladvice) of the Stanford Human Preference (SHP) dataset, used for training domain-invariant reward models.
The translation was conducted using the No Language Left Behind (NLLB-200) 3.3B model.
References:
Stanford Human Preference Dataset: https://huggingface.co/datasets/stanfordnlp/SHP
NLLB: https://huggingface.co/facebook/nllb-200-3.3B"
global-llm-2024/ko_ifeval,{},
bot-yaya/parallel_corpus_game,"{""dataset_info"": {""features"": [{""name"": ""ar_text"", ""dtype"": ""string""}, {""name"": ""cht_text"", ""dtype"": ""string""}, {""name"": ""de_text"", ""dtype"": ""string""}, {""name"": ""en_text"", ""dtype"": ""string""}, {""name"": ""eo_text"", ""dtype"": ""string""}, {""name"": ""es_text"", ""dtype"": ""string""}, {""name"": ""fr_text"", ""dtype"": ""string""}, {""name"": ""he_text"", ""dtype"": ""string""}, {""name"": ""id_text"", ""dtype"": ""string""}, {""name"": ""it_text"", ""dtype"": ""string""}, {""name"": ""ja_text"", ""dtype"": ""string""}, {""name"": ""ko_text"", ""dtype"": ""string""}, {""name"": ""nl_text"", ""dtype"": ""string""}, {""name"": ""pt_text"", ""dtype"": ""string""}, {""name"": ""ru_text"", ""dtype"": ""string""}, {""name"": ""sv_text"", ""dtype"": ""string""}, {""name"": ""th_text"", ""dtype"": ""string""}, {""name"": ""vi_text"", ""dtype"": ""string""}, {""name"": ""zh_text"", ""dtype"": ""string""}, {""name"": ""zh_text_md5"", ""dtype"": ""string""}, {""name"": ""\u4f4e\u8d28\u91cf\u6bb5\u843d\u6570"", ""dtype"": ""int64""}, {""name"": ""\u53bb\u91cd\u6bb5\u843d\u6570"", ""dtype"": ""int64""}, {""name"": ""\u6269\u5c55\u5b57\u6bb5"", ""dtype"": ""string""}, {""name"": ""\u6587\u4ef6\u540d"", ""dtype"": ""string""}, {""name"": ""\u65f6\u95f4"", ""dtype"": ""string""}, {""name"": ""\u662f\u5426\u5f85\u67e5\u6587\u4ef6"", ""dtype"": ""bool""}, {""name"": ""\u662f\u5426\u8de8\u6587\u4ef6\u91cd\u590d"", ""dtype"": ""bool""}, {""name"": ""\u662f\u5426\u91cd\u590d"", ""dtype"": ""bool""}, {""name"": ""\u662f\u5426\u91cd\u590d\u6587\u4ef6"", ""dtype"": ""bool""}, {""name"": ""\u6bb5\u843d\u6570"", ""dtype"": ""int64""}, {""name"": ""\u884c\u53f7"", ""dtype"": ""int64""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2164583258, ""num_examples"": 1784466}], ""download_size"": 1228640703, ""dataset_size"": 2164583258}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", 
""path"": ""data/train-*""}]}], ""license"": ""mit"", ""language"": [""ar"", ""zh"", ""de"", ""en"", ""eo"", ""es"", ""fr"", ""he"", ""id"", ""it"", ""ja"", ""ko"", ""nl"", ""pt"", ""ru"", ""sv"", ""th"", ""vi"", ""pl"", ""tr""], ""task_categories"": [""translation""], ""tags"": [""game""]}","https://github.com/mnbvc-parallel-corpus-team/parallel_corpus_mnbvc
MNBVC Parallel Corpus Team: game corpus
Updated irregularly; 29 game corpus files have been collected so far:
- Baldur's Gate 3
- Cyberpunk 2077
- Dark Souls III
- Detroit: Become Human
- Don't Starve
- Elden Ring
- Genshin Impact
- Hades
- Hogwarts Legacy
- Ib
- Like a Dragon: Infinite Wealth
- Like a Dragon Gaiden: The Man Who Erased His Name
- Red Dead Redemption 2
- Sekiro: Shadows Die Twice
- Civilization VI
- Slay the Spire
- Honkai: Star Rail
- Stellaris
- Terraria
- The Witcher 3
- WitchSpring3
- WitchSpring R
- Wuthering Waves
- Yakuza 3
- Yakuza 4
- Yakuza 5
- Yakuza 6
- Yakuza Kiwami 2
- Yakuza: Like a Dragon"
devngho/korean-webtext-edu,"{""dataset_info"": [{""config_name"": ""raw"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""score"", ""dtype"": ""float64""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 8528708812, ""num_examples"": 1284879}], ""download_size"": 4462835969, ""dataset_size"": 8528708812}, {""config_name"": ""scored_over_2"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""score"", ""dtype"": ""float64""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 3729474351.820441, ""num_examples"": 561858}], ""download_size"": 2075034458, ""dataset_size"": 3729474351.820441}, {""config_name"": ""scored_over_3"", ""features"": [{""name"": ""text"", ""dtype"": ""string""}, {""name"": ""score"", ""dtype"": ""float64""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 854185819.9729562, ""num_examples"": 128686}], ""download_size"": 494289966, ""dataset_size"": 854185819.9729562}], ""configs"": [{""config_name"": ""raw"", ""data_files"": [{""split"": ""train"", ""path"": ""raw/train-*""}]}, {""config_name"": ""scored_over_2"", ""data_files"": [{""split"": ""train"", ""path"": ""scored_over_2/train-*""}]}, {""config_name"": ""scored_over_3"", ""data_files"": [{""split"": ""train"", ""path"": ""scored_over_3/train-*""}], ""default"": true}], ""license"": ""mit"", ""source_datasets"": [""HAERAE-HUB/KOREAN-WEBTEXT""], ""task_categories"": [""text-generation""], ""language"": [""ko""], ""size_categories"": [""100KClick Here
High-quality instruction data is critical for aligning large language models (LLMs). Although some models, such as Llama-3-Instruct, have open weights, their alignment data remain private, which hinders the democratization of AI. High human labor costs and a limited, predefined scope for prompting prevent existing open-source data creation methods from scaling effectively, potentially limiting the diversity and quality of public alignment datasets. Is it possible to synthesize high-quality instruction data at scale by extracting it directly from an aligned LLM? We present a self-synthesis method for generating large-scale alignment data named Magpie. Our key observation is that aligned LLMs like Llama-3-Instruct can generate a user query when we input only the left-side templates up to the position reserved for user messages, thanks to their auto-regressive nature. We use this method to prompt Llama-3-Instruct and generate 4 million instructions along with their corresponding responses. We perform a comprehensive analysis of the extracted data and select 300K high-quality instances. To compare Magpie data with other public instruction datasets, we fine-tune Llama-3-8B-Base with each dataset and evaluate the performance of the fine-tuned models. Our results indicate that in some tasks, models fine-tuned with Magpie perform comparably to the official Llama-3-8B-Instruct, despite the latter being enhanced with 10 million data points through supervised fine-tuning (SFT) and subsequent feedback learning. We also show that using Magpie solely for SFT can surpass the performance of previous public datasets utilized for both SFT and preference optimization, such as direct preference optimization with UltraFeedback. This advantage is evident on alignment benchmarks such as AlpacaEval, ArenaHard, and WildBench.
## Dataset Details
This dataset is generated by [Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for direct preference optimization.
To create the dataset, we first selected 100K high-quality Magpie instructions with diverse task categories, then generated responses using [Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) 5 times for each instruction, using a temperature of 0.8. We then annotated RM scores using RLHFlow/ArmoRM-Llama3-8B-v0.1, labeling the response with the highest RM score as the chosen response, and the one with the lowest RM score as the rejected response.
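The chosen/rejected labeling described above reduces to an argmax/argmin over the five RM scores per instruction; a minimal sketch with hypothetical scores:

```python
def build_preference_pair(responses, rm_scores):
    """Label the highest-RM-scored response as chosen and the lowest as rejected."""
    ranked = sorted(zip(rm_scores, responses))
    (_, rejected) = ranked[0]
    (_, chosen) = ranked[-1]
    return chosen, rejected

responses = ["resp_a", "resp_b", "resp_c", "resp_d", "resp_e"]
rm_scores = [0.12, 0.47, 0.08, 0.33, 0.29]   # hypothetical ArmoRM scores
chosen, rejected = build_preference_pair(responses, rm_scores)
assert chosen == "resp_b" and rejected == "resp_c"
```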
## 📚 Citation
If you find the model, data, or code useful, please cite our paper:
```
@article{xu2024magpie,
title={Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing},
author={Zhangchen Xu and Fengqing Jiang and Luyao Niu and Yuntian Deng and Radha Poovendran and Yejin Choi and Bill Yuchen Lin},
year={2024},
eprint={2406.08464},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Please also cite the reward model for creating preference datasets:
ArmoRM paper:
```
@article{wang2024interpretable,
title={Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts},
author={Wang, Haoxiang and Xiong, Wei and Xie, Tengyang and Zhao, Han and Zhang, Tong},
journal={arXiv preprint arXiv:2406.12845},
year={2024}
}
```
**Questions?** Please contact [Zhangchen](https://zhangchenxu.com/) by email."
jp1924/DevelopmentandDataofLLMswithEnhancedKoreanLanguagePerformance,"{""dataset_info"": [{""config_name"": ""PPO"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""prompt"", ""dtype"": ""string""}, {""name"": ""metadata"", ""struct"": [{""name"": ""main"", ""dtype"": ""string""}, {""name"": ""middle"", ""dtype"": ""string""}, {""name"": ""prompt_type"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 4188330, ""num_examples"": 25443}, {""name"": ""validation"", ""num_bytes"": 523760, ""num_examples"": 3180}], ""download_size"": 2803659, ""dataset_size"": 4712090}, {""config_name"": ""RL"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""chosen_conversations"", ""list"": [{""name"": ""role"", ""dtype"": ""string""}, {""name"": ""content"", ""dtype"": ""string""}]}, {""name"": ""reject_conversations"", ""list"": [{""name"": ""role"", ""dtype"": ""string""}, {""name"": ""content"", ""dtype"": ""string""}]}, {""name"": ""prompt"", ""dtype"": ""string""}, {""name"": ""chosen"", ""dtype"": ""string""}, {""name"": ""reject"", ""dtype"": ""string""}, {""name"": ""preperence_ranking"", ""list"": [{""name"": ""content"", ""dtype"": ""string""}, {""name"": ""ranking"", ""dtype"": ""float32""}]}, {""name"": ""metadata"", ""struct"": [{""name"": ""main"", ""dtype"": ""string""}, {""name"": ""middle"", ""dtype"": ""string""}, {""name"": ""prompt_type"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 334079275, ""num_examples"": 26408}, {""name"": ""validation"", ""num_bytes"": 41575319, ""num_examples"": 3301}], ""download_size"": 161522809, ""dataset_size"": 375654594}, {""config_name"": ""SFT"", ""features"": [{""name"": ""id"", ""dtype"": ""string""}, {""name"": ""conversations"", ""list"": [{""name"": ""role"", ""dtype"": ""string""}, {""name"": ""content"", ""dtype"": ""string""}]}, {""name"": ""prompt"", ""dtype"": ""string""}, {""name"": ""answer"", ""dtype"": 
""string""}, {""name"": ""metadata"", ""struct"": [{""name"": ""main"", ""dtype"": ""string""}, {""name"": ""middle"", ""dtype"": ""string""}, {""name"": ""prompt_type"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 33260825, ""num_examples"": 10580}, {""name"": ""validation"", ""num_bytes"": 4221081, ""num_examples"": 1322}], ""download_size"": 18694012, ""dataset_size"": 37481906}], ""configs"": [{""config_name"": ""PPO"", ""data_files"": [{""split"": ""train"", ""path"": ""PPO/train-*""}, {""split"": ""validation"", ""path"": ""PPO/validation-*""}]}, {""config_name"": ""RL"", ""data_files"": [{""split"": ""train"", ""path"": ""RL/train-*""}, {""split"": ""validation"", ""path"": ""RL/validation-*""}]}, {""config_name"": ""SFT"", ""data_files"": [{""split"": ""train"", ""path"": ""SFT/train-*""}, {""split"": ""validation"", ""path"": ""SFT/validation-*""}]}], ""language"": [""ko""]}","The data is de-identified with placeholders like ``; keep this in mind when using it.
When filtering the data, the de-identified entries are not preprocessed separately."
youjunhyeok/Magpie-Air-DPO-100K-v0.1-ko,"{""language"": [""ko""], ""task_categories"": [""text-generation""], ""dataset_info"": {""features"": [{""name"": ""uuid"", ""dtype"": ""string""}, {""name"": ""instruction"", ""dtype"": ""string""}, {""name"": ""gen_input_configs"", ""struct"": [{""name"": ""temperature"", ""dtype"": ""float64""}, {""name"": ""top_p"", ""dtype"": ""float64""}]}, {""name"": ""intent"", ""dtype"": ""string""}, {""name"": ""knowledge"", ""dtype"": ""string""}, {""name"": ""difficulty"", ""dtype"": ""string""}, {""name"": ""input_quality"", ""dtype"": ""string""}, {""name"": ""quality_explanation"", ""dtype"": ""string""}, {""name"": ""task_category"", ""dtype"": ""string""}, {""name"": ""input_length"", ""dtype"": ""int64""}, {""name"": ""responses"", ""sequence"": ""string""}, {""name"": ""gen_response_configs"", ""struct"": [{""name"": ""engine"", ""dtype"": ""string""}, {""name"": ""max_tokens"", ""dtype"": ""int64""}, {""name"": ""output_generator"", ""dtype"": ""string""}, {""name"": ""prompt"", ""dtype"": ""string""}, {""name"": ""repetition_penalty"", ""dtype"": ""float64""}, {""name"": ""stop_tokens"", ""sequence"": ""string""}, {""name"": ""temperature"", ""dtype"": ""float64""}, {""name"": ""top_p"", ""dtype"": ""float64""}]}, {""name"": ""rewards_armorm"", ""list"": [{""name"": ""score"", ""dtype"": ""float64""}]}, {""name"": ""chosen"", ""list"": [{""name"": ""content"", ""dtype"": ""string""}, {""name"": ""role"", ""dtype"": ""string""}]}, {""name"": ""rejected"", ""list"": [{""name"": ""content"", ""dtype"": ""string""}, {""name"": ""role"", ""dtype"": ""string""}]}], ""splits"": [{""name"": ""train"", ""num_bytes"": 2144596138, ""num_examples"": 98000}], ""download_size"": 943642287, ""dataset_size"": 2144596138}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""tags"": [""instruction"", ""korean"", 
""magpie""]}","The [Magpie-Align/Magpie-Air-DPO-100K-v0.1](https://huggingface.co/datasets/Magpie-Align/Magpie-Air-DPO-100K-v0.1) dataset was translated using the [nayohan/llama3-instrucTrans-enko-8b](https://huggingface.co/nayohan/llama3-instrucTrans-enko-8b) model.
Thanks to [Magpie-Align](https://huggingface.co/Magpie-Align) and [nayohan](https://huggingface.co/nayohan).
---

Project Web: [https://magpie-align.github.io/](https://magpie-align.github.io/)
Arxiv Technical Report: [https://arxiv.org/abs/2406.08464](https://arxiv.org/abs/2406.08464)
Codes: [https://github.com/magpie-align/magpie](https://github.com/magpie-align/magpie)
## Abstract
High-quality instruction data is critical for aligning large language models (LLMs). Although some models, such as Llama-3-Instruct, have open weights, their alignment data remain private, which hinders the democratization of AI. High human labor costs and a limited, predefined scope for prompting prevent existing open-source data creation methods from scaling effectively, potentially limiting the diversity and quality of public alignment datasets. Is it possible to synthesize high-quality instruction data at scale by extracting it directly from an aligned LLM? We present a self-synthesis method for generating large-scale alignment data named Magpie. Our key observation is that aligned LLMs like Llama-3-Instruct can generate a user query when we input only the left-side templates up to the position reserved for user messages, thanks to their auto-regressive nature. We use this method to prompt Llama-3-Instruct and generate 4 million instructions along with their corresponding responses. We perform a comprehensive analysis of the extracted data and select 300K high-quality instances. To compare Magpie data with other public instruction datasets, we fine-tune Llama-3-8B-Base with each dataset and evaluate the performance of the fine-tuned models. Our results indicate that in some tasks, models fine-tuned with Magpie perform comparably to the official Llama-3-8B-Instruct, despite the latter being enhanced with 10 million data points through supervised fine-tuning (SFT) and subsequent feedback learning. We also show that using Magpie solely for SFT can surpass the performance of previous public datasets utilized for both SFT and preference optimization, such as direct preference optimization with UltraFeedback. This advantage is evident on alignment benchmarks such as AlpacaEval, ArenaHard, and WildBench.
## Dataset Details
This dataset is generated by [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for direct preference optimization.
To create the dataset, we first selected 100K high-quality Magpie instructions with diverse task categories, then generated responses using [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) 5 times for each instruction, using a temperature of 0.8. We then annotated RM scores using RLHFlow/ArmoRM-Llama3-8B-v0.1, labeling the response with the highest RM score as the chosen response, and the one with the lowest RM score as the rejected response.
## 📚 Citation
If you find the model, data, or code useful, please cite our paper:
```
@article{xu2024magpie,
title={Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing},
author={Zhangchen Xu and Fengqing Jiang and Luyao Niu and Yuntian Deng and Radha Poovendran and Yejin Choi and Bill Yuchen Lin},
year={2024},
eprint={2406.08464},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Please also cite the reward model for creating preference datasets:
ArmoRM paper:
```
@article{wang2024interpretable,
title={Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts},
author={Wang, Haoxiang and Xiong, Wei and Xie, Tengyang and Zhao, Han and Zhang, Tong},
journal={arXiv preprint arXiv:2406.12845},
year={2024}
}
```
**Questions?** Please contact [Zhangchen](https://zhangchenxu.com/) by email."
Hyunggi/influencer_rec_ko,{},"---
license: gpl-3.0
language:
- ko
tags:
- influencer
- korean
size_categories:
- 10KYou are a highly skilled translator specializing in medical and healthcare documents.
>Your task is to translate the following three categories (Question, Reasoning, Final_answer) into Korean with utmost accuracy.
>Each category may consist of multiple sentences.
>Ensure the translation preserves the technical and domain-specific terminology.
>For medical terms, translate them into Korean (if possible) and include the original English term in parentheses.
>The translation must be natural and fluent, avoiding any signs of 'translationese' or awkward phrasing.
>Output with formal and polite language, suitable for professional communication.
>Use appropriate formal sentence endings such as ""-입니다"".
>Provide the Korean translation for each category between the designated special tokens as shown below.
>
>Question - between [KorQ] and [/KorQ]
>
>Reasoning - between [KorR] and [/KorR]
>
>Final_answer - between [KorF] and [/KorF]
>
>SENTENCES:
>
>[QUESTION] {QUESTION} [/QUESTION]
>
>[REASONING] {REASONING} [/REASONING]
>
>[FINAL_ANSWER] {FINAL_ANSWER} [/FINAL_ANSWER]
>
## Citation
```
@misc{chen2024huatuogpto1medicalcomplexreasoning,
title={HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs},
author={Junying Chen and Zhenyang Cai and Ke Ji and Xidong Wang and Wanlong Liu and Rongsheng Wang and Jianye Hou and Benyou Wang},
year={2024},
eprint={2412.18925},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.18925},
}
```"
werty1248/EnKo-Translation-Preference-Eval,"{""dataset_info"": {""features"": [{""name"": ""source_text"", ""dtype"": ""string""}, {""name"": ""rejected"", ""dtype"": ""string""}, {""name"": ""chosen"", ""dtype"": ""string""}, {""name"": ""source"", ""dtype"": ""string""}], ""splits"": [{""name"": ""test"", ""num_bytes"": 38868, ""num_examples"": 57}], ""download_size"": 29880, ""dataset_size"": 38868}, ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""test"", ""path"": ""data/test-*""}]}], ""language"": [""en"", ""ko""]}","- Mistranslation dataset for evaluating the performance of reward models or filtering methods used in assessing the quality of Korean to English translations.
- Unnatural grammar usage, misinterpretation due to incorrect phrase/clause segmentation, awkward terminology, etc.
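A common way to use such a chosen/rejected set is pairwise accuracy: the fraction of records where a candidate reward model or filter scores `chosen` above `rejected`. A sketch with a placeholder scorer (the scorer is a hypothetical stand-in, not a real reward model):

```python
def pairwise_accuracy(records, score_fn):
    """Fraction of pairs where the scorer ranks `chosen` above `rejected`."""
    wins = sum(
        score_fn(r["source_text"], r["chosen"]) > score_fn(r["source_text"], r["rejected"])
        for r in records
    )
    return wins / len(records)

# Deliberately naive stand-in scorer: prefers translations whose length
# is closest to the source. A real reward model call would go here.
def length_score(source, translation):
    return -abs(len(translation) - len(source))

records = [{"source_text": "a" * 20, "chosen": "b" * 22, "rejected": "c" * 50}]
assert pairwise_accuracy(records, length_score) == 1.0
```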
- Yes, 57 records is too small to evaluate something."
o0dimplz0o/zeroth-STT-Ko,"{""license"": ""apache-2.0"", ""configs"": [{""config_name"": ""default"", ""data_files"": [{""split"": ""train"", ""path"": ""data/train-*""}]}], ""dataset_info"": {""features"": [{""name"": ""audio"", ""dtype"": {""audio"": {""sampling_rate"": 16000}}}, {""name"": ""text"", ""dtype"": ""string""}], ""splits"": [{""name"": ""train"", ""num_bytes"": 11169873357.152, ""num_examples"": 102263}], ""download_size"": 10186919269, ""dataset_size"": 11169873357.152}, ""task_categories"": [""automatic-speech-recognition""], ""language"": [""ko""], ""size_categories"": [""100K"",
""json"": {
""domain"": ""law"",
""file_name"": ""[민사] 미성년자 성형외과 코필러 주입술 시 필러물질이 미성년자에 대한 사용금지 조치를 받은 상태였던 사안.pdf"",
""page_number"": 1,
""pages"": 20,
// id of the PDF file
""pid"": ""p_50"",
// collection of QA entries related to this PDF
""test_cases"": [
{
""context_type"": ""paragraph"",
""pid"": ""p_50"",
""qid"": ""q_222"",
""question"": ""원고 A이 시력 저하등의 증상이 발생한 시점은 언제로 보고, 그 시점 판단의 기준은 무엇인가요?"",
""target_answer"": ""원고 A의 시력 저하 등의 증상은 2017년 2월 25일 이전, 즉 필러 주입술 직후의 시점에 일어났다고 판단된다. 이 판단의 기준은 원고 A이 필러 주입술 3일 뒤인 2월 25일에 병원을 방문해 시력저하 등의 증상을 호소했지만, 진료기록에 '이미 그 전에 증상이 있었으나 환자는 쌍꺼풀 수술을 해서 그럴 것이라고 생각하였다'는 내용이 있었고, 필러 주입 후 코에 보호대를 착용했어서 초기에 증상을 알아차리기 어려웠을 것으로 보여진다는 점이다."",
""target_page_no"": ""7""
},
...
]
},
""pdf"": """"
}
```
### Data Preprocessing
• Split the PDF files into individual pages.
• Each page is packaged as one example in webdataset format.
## Dataset Creation
### Curation Rationale
• Reconstructed the dataset, bundling the PDF files, to remove the inconveniences of the original dataset and make evaluation easy to run.
• All PDF files were included in local storage to resolve the issue of invalid PDF paths.
### Source Data
• Original Dataset: allganize/RAG-Evaluation-Dataset-KO
• Reconstruction: collected and split the PDF files from the paths provided in the original dataset.
# RAG Evaluation preparation code.
* Code that loads the data from the original dataset into a structure similar to `allganize/RAG-Evaluation-Dataset-KO`'s `documents.csv`.
* The split PDF binaries are merged and saved as PDF files to a local path.
```python
import os
from typing import List
from itertools import groupby

import datasets
import pandas as pd
import fitz  # pip install PyMuPDF


def merge_pdf_data(pdf_data_list: List[bytes], output_pdf_path: str):
    """"""Merge a list of PDF binaries into a single PDF file.""""""
    output_pdf = fitz.open()
    for pdf_data in pdf_data_list:
        pdf_document = fitz.open(""pdf"", pdf_data)
        for page_num in range(len(pdf_document)):
            output_pdf.insert_pdf(pdf_document, from_page=page_num, to_page=page_num)
    output_pdf.save(output_pdf_path)
    output_pdf.close()
    print(f""Merged PDF saved as {output_pdf_path}"")


def load_rag_eval_data(target_pdf_dir: str):
    """"""Load the `allganize/RAG-Evaluation-Dataset-KO` dataset.

    Process:
    * Download the RAG eval data from the Hugging Face Hub (queries and PDFs)
    * Build a QA DataFrame
    * Cache the PDF binaries to a local directory and return the PDF metadata.
    """"""
    # load the dataset
    ds = datasets.load_dataset(""datalama/RAG-Evaluation-Dataset-KO"", split='test')
    target_pdf_dir = os.path.abspath(os.path.expanduser(target_pdf_dir))
    # collect the queries
    qa_pairs = []
    for meta in ds['json']:
        test_cases = [{**tc, ""domain"": meta[""domain""]} for tc in meta['test_cases']]
        qa_pairs += test_cases
    qa_df = pd.DataFrame(qa_pairs).drop_duplicates()
    qa_df['qid_int'] = qa_df['qid'].str.extract(r'(\d+)').astype(int)
    qa_df = qa_df.sort_values(by='qid_int').drop(columns=['qid_int']).reset_index(drop=True)
    cols = [""domain"", ""qid"", ""question"", ""target_answer"", ""pid"", ""target_page_no"", ""context_type""]
    qa_df = qa_df.loc[:, cols]
    # collect the pdfs
    os.makedirs(target_pdf_dir, exist_ok=True)
    pid_indices = [(i, meta['pid']) for i, meta in enumerate(ds['json'])]
    # group row indices by the second element (pid) of each tuple
    grouped_pid_indices = {key: [item[0] for item in group]
                           for key, group in groupby(pid_indices, lambda x: x[1])}
    pdf_meta_dict = dict()
    for i in range(len(grouped_pid_indices)):
        pid = f""p_{i}""
        output_path = f""{target_pdf_dir}/{pid}.pdf""
        sub_ds = ds.select(grouped_pid_indices[pid])
        sub_meta = sub_ds[0]['json']
        # merge and save the pdf locally
        merge_pdf_data(sub_ds['pdf'], output_path)
        pdf_meta_dict[pid] = {
            ""pid"": pid,
            ""domain"": sub_meta[""domain""],
            ""file_name"": sub_meta[""file_name""],
            ""pages"": sub_meta[""pages""],
            ""local_file_path"": output_path,
        }
    return qa_df, pdf_meta_dict
```"
ygyoung/easylaw_ko_qa,{},"---
license: apache-2.0
task_categories:
- question-answering
language:
- ko
size_categories:
- 1M<n<10M
---
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
- tokenizer: sapie-tokenizer-v2.0
- sample #: 2,688,972
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
### Direct Use
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Data Collection and Processing
[More Information Needed]
#### Who are the source data producers?
[More Information Needed]
### Annotations [optional]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
#### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]"
fantastic-jobs/linkedin-industry-list,{},"---
license: mit
task_categories:
- text-classification
- translation
language:
- en
- ko
- es
- ru
- nl
- de
- ml
- it
- fr
- pt
- id
- cs
- tr
- da
- sv
- ar
- th
tags:
- jobs
- linkedin
- job
- industries
- industry
size_categories:
- 1K<n<10K