---
annotations_creators:
  - other
language_creators:
  - found
language:
  - bg
  - cs
  - da
  - de
  - el
  - en
  - es
  - et
  - fi
  - fr
  - ga
  - hr
  - hu
  - it
  - lt
  - lv
  - mt
  - nl
  - pl
  - pt
  - ro
  - sk
  - sl
  - sv
license:
  - cc-by-4.0
multilinguality:
  - multilingual
paperswithcode_id: null
pretty_name: 'EUWikipedias: A dataset of Wikipedias in the EU languages'
size_categories:
  - 10M<n<100M
source_datasets:
  - original
task_categories:
  - fill-mask
---

# Dataset Card for EUWikipedias: A dataset of Wikipedias in the EU languages

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Joel Niklaus

### Dataset Summary

A Wikipedia dataset containing cleaned articles in the 24 official EU languages. It is built from the Wikipedia dump (https://dumps.wikimedia.org/), with one split per language. Each example contains the content of one full Wikipedia article, cleaned to strip markup and unwanted sections (references, etc.).

### Supported Tasks and Leaderboards

The dataset supports the fill-mask task.

### Languages

The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv

## Dataset Structure

The data files follow the format `{date}/{language}_{shard}.jsonl.xz`. At the moment, only the date `20221120` is available.

Use the dataset like this:

```python
from datasets import load_dataset

dataset = load_dataset('joelito/EU_Wikipedias', date="20221120", language="de", split='train', streaming=True)
```

### Data Instances

The file format is `jsonl.xz`, and there is one split available (`train`).
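If you prefer to work with the raw shards instead of `datasets`, each shard is an xz-compressed JSON-lines file: one JSON object per line. The sketch below writes a tiny synthetic shard and reads it back with only the standard library. Note that the exact field names are not documented in this card, so the `title`/`text` keys in the sample record are assumptions for illustration.

```python
import json
import lzma
from pathlib import Path

# Shards are named {date}/{language}_{shard}.jsonl.xz.
# Create a tiny synthetic shard for illustration; the real shards come from
# the dataset repository and may use different field names.
shard = Path("20221120") / "de_0.jsonl.xz"
shard.parent.mkdir(exist_ok=True)
with lzma.open(shard, "wt", encoding="utf-8") as f:
    f.write(json.dumps({"title": "Beispiel", "text": "Ein Beispielartikel."}) + "\n")

# Reading a shard back: decompress transparently, parse one JSON object per line.
with lzma.open(shard, "rt", encoding="utf-8") as f:
    articles = [json.loads(line) for line in f]

print(len(articles))
```

Because `lzma.open` handles the `.xz` layer transparently, no external decompression step is needed.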

| Language | Dump date | Size (MB) | Words | Documents | Words/Document |
|:---|:---|---:|---:|---:|---:|
| all | 20221120 | 86034 | 9506846949 | 26481379 | 359 |
| bg | 20221120 | 1261 | 88138772 | 285876 | 308 |
| cs | 20221120 | 1904 | 189580185 | 513851 | 368 |
| da | 20221120 | 679 | 74546410 | 286864 | 259 |
| de | 20221120 | 0 | 1191919523 | 2740891 | 434 |
| el | 20221120 | 1531 | 103504078 | 215046 | 481 |
| en | 20221120 | 26685 | 3192209334 | 6575634 | 485 |
| es | 20221120 | 6636 | 801322400 | 1583597 | 506 |
| et | 20221120 | 538 | 48618507 | 231609 | 209 |
| fi | 20221120 | 1391 | 115779646 | 542134 | 213 |
| fr | 20221120 | 9703 | 1140823165 | 2472002 | 461 |
| ga | 20221120 | 72 | 8025297 | 57808 | 138 |
| hr | 20221120 | 555 | 58853753 | 198746 | 296 |
| hu | 20221120 | 1855 | 167732810 | 515777 | 325 |
| it | 20221120 | 5999 | 687745355 | 1782242 | 385 |
| lt | 20221120 | 409 | 37572513 | 203233 | 184 |
| lv | 20221120 | 269 | 25091547 | 116740 | 214 |
| mt | 20221120 | 29 | 2867779 | 5030 | 570 |
| nl | 20221120 | 3208 | 355031186 | 2107071 | 168 |
| pl | 20221120 | 3608 | 349900622 | 1543442 | 226 |
| pt | 20221120 | 3315 | 389786026 | 1095808 | 355 |
| ro | 20221120 | 1017 | 111455336 | 434935 | 256 |
| sk | 20221120 | 506 | 49612232 | 238439 | 208 |
| sl | 20221120 | 543 | 58858041 | 178472 | 329 |
| sv | 20221120 | 2560 | 257872432 | 2556132 | 100 |
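The Words/Document column is simply the Words column divided by the Documents column, rounded. A quick arithmetic check against the aggregate ("all") row:

```python
# Values copied from the "all" row of the statistics table above.
total_words = 9_506_846_949
total_documents = 26_481_379

# Words/Document = total words / total documents, rounded to an integer.
words_per_document = round(total_words / total_documents)
print(words_per_document)  # 359, matching the table
```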

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

This dataset was created by downloading the Wikipedias for the 24 EU languages using `olm/wikipedia`. For more information about the creation of the dataset, please refer to `prepare_wikipedias.py`.
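The actual download and cleaning logic lives in `prepare_wikipedias.py` (which uses the `olm/wikipedia` loading script and requires a large download), but the iteration plan it implies can be reconstructed from this card alone: one pass per EU language, writing shards following the documented `{date}/{language}_{shard}.jsonl.xz` pattern. The `shard_path` helper below is hypothetical, introduced only to illustrate that pattern.

```python
# The 24 EU languages listed in this card, in the same order.
EU_LANGUAGES = [
    "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr",
    "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv",
]
DATE = "20221120"  # the only dump date currently available

def shard_path(language: str, shard: int) -> str:
    # Mirrors the documented pattern {date}/{language}_{shard}.jsonl.xz.
    return f"{DATE}/{language}_{shard}.jsonl.xz"

# One pass per language; the real script would download via olm/wikipedia
# here and write out the cleaned articles as compressed JSON-lines shards.
paths = [shard_path(lang, 0) for lang in EU_LANGUAGES]
print(len(paths))   # 24
print(paths[3])     # 20221120/de_0.jsonl.xz
```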

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

TODO add citation

### Contributions

Thanks to @JoelNiklaus for adding this dataset.