---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 8215028188
    num_examples: 2099775
  - name: test
    num_bytes: 4619762
    num_examples: 1960
  download_size: 4554337553
  dataset_size: 8219647950
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
language:
- ar
- en
---

# Dataset Card for Wikimedia Wikipedia

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **Repository:**
- **Paper:**
- **Point of Contact:**

### Dataset Summary

Wikipedia dataset containing cleaned articles from the Standard Arabic Wikipedia, together with the English Wikipedia articles they link to.

| Language | Articles  | Tokens *   | Characters |
|----------|-----------|------------|------------|
| English  | 882,534   | 1,177.94 M | 5,014.68 M |
| Arabic   | 1,219,201 | 621.51 M   | 1,655.64 M |

\* Counted with the Gemma tokenizer.

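As a quick start, here is a minimal sketch of loading this dataset and separating the two languages by article URL. The repository id is a placeholder, and the assumption that Arabic and English pages can be told apart via `ar.wikipedia.org` / `en.wikipedia.org` in the `url` field is an illustration, not something stated by this card.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual id of this dataset on the Hub.
REPO_ID = "<namespace>/<this-dataset>"

# The YAML header declares a single "default" config with `train` and `test` splits.
ds = load_dataset(REPO_ID)

# Assumption: the `url` field points at ar.wikipedia.org or en.wikipedia.org,
# so it can be used to separate the two languages.
arabic = ds["train"].filter(lambda ex: "ar.wikipedia.org" in ex["url"])
english = ds["train"].filter(lambda ex: "en.wikipedia.org" in ex["url"])

print(len(arabic), len(english))
```
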
### Original dataset card

The dataset is built from the Wikipedia dumps (https://dumps.wikimedia.org/),
with one subset per language, each containing a single train split.

Each example contains the content of one full Wikipedia article, cleaned to strip
markup and unwanted sections (references, etc.).

All language subsets have already been processed for the most recent dump, and you can load them by date and language like this:
```python
from datasets import load_dataset

ds = load_dataset("wikimedia/wikipedia", "20231101.en")
```

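As a small variation on the example above (not from the original card), the same date.language pattern also works in streaming mode if you want to avoid downloading a full dump; the Arabic subset is shown here as an example.

```python
from datasets import load_dataset

# Stream the Arabic subset of the 20231101 dump instead of downloading it in full.
ds = load_dataset("wikimedia/wikipedia", "20231101.ar", streaming=True)

# Take a quick look at the first article.
first = next(iter(ds["train"]))
print(first["title"], first["url"])
```
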
#### Data Visualization

Click the [Nomic Atlas](https://atlas.nomic.ai/map/475c26d7-b142-4795-9887-02b6eeb18dc0/0d312be6-a3bb-4586-b6b7-53dcd0cbefa5) map below to visualize the 6.4 million samples in the `20231101.en` split.

<a href="https://atlas.nomic.ai/map/475c26d7-b142-4795-9887-02b6eeb18dc0/0d312be6-a3bb-4586-b6b7-53dcd0cbefa5">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/6480c476cacb1c4a0696eeb8/sZNN6Vubc0Oue83vKaJUu.webp" alt="Nomic-Atlas Wikipedia Map" width="25%"/>
</a>

### Supported Tasks and Leaderboards

The dataset is generally used for language modeling.

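For illustration only (not part of the original card), a minimal sketch of preparing the `text` field for language modeling with the Gemma tokenizer mentioned in the token counts above; the repository id is again a placeholder.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# The Gemma tokenizer matches the token counts reported in the summary table.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

# Placeholder repository id; stream to avoid downloading everything up front.
ds = load_dataset("<namespace>/<this-dataset>", split="train", streaming=True)

# Tokenize the article bodies, dropping the metadata columns.
tokenized = ds.map(
    lambda batch: tokenizer(batch["text"]),
    batched=True,
    remove_columns=["id", "url", "title", "text"],
)

print(next(iter(tokenized)).keys())  # typically input_ids and attention_mask
```
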
### Languages

You can find the list of languages here: https://meta.wikimedia.org/wiki/List_of_Wikipedias

## Dataset Structure

### Data Instances

An example looks as follows:
```
{'id': '1',
 'url': 'https://simple.wikipedia.org/wiki/April',
 'title': 'April',
 'text': 'April is the fourth month...'
}
```

### Data Fields

The data fields are the same among all configurations:
- `id` (`str`): ID of the article.
- `url` (`str`): URL of the article.
- `title` (`str`): Title of the article.
- `text` (`str`): Text content of the article.

### Data Splits

This dataset provides a `train` split (2,099,775 examples) and a `test` split (1,960 examples).

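A quick way to check the split sizes against the figures declared in the YAML header above (the repository id is a placeholder):

```python
from datasets import load_dataset

ds = load_dataset("<namespace>/<this-dataset>")  # placeholder repository id

# Per the YAML header: train has 2,099,775 examples and test has 1,960.
print({name: split.num_rows for name, split in ds.items()})
```
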
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

The dataset is built from the Wikipedia dumps: https://dumps.wikimedia.org

You can find the full list of languages and dates here: https://dumps.wikimedia.org/backup-index.html

The articles have been parsed using the [`mwparserfromhell`](https://mwparserfromhell.readthedocs.io) tool.

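For reference, a minimal sketch of the kind of cleaning `mwparserfromhell` performs; the wikitext snippet is made up for illustration.

```python
import mwparserfromhell

# A made-up wikitext snippet with typical markup (bold text and wikilinks).
wikitext = "'''April''' is the fourth month of the year in the [[Julian calendar|Julian]] and [[Gregorian calendar]]s."

# Parse the markup and strip it down to plain text.
parsed = mwparserfromhell.parse(wikitext)
print(parsed.strip_code())
# April is the fourth month of the year in the Julian and Gregorian calendars.
```
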
When uploading the data files for the 20231101 dump, we noticed that the Wikimedia Dumps website does not contain the dump for this date
for the "bbc", "dga", or "zgh" Wikipedias. We have reported the issue to the Wikimedia Phabricator: https://phabricator.wikimedia.org/T351761

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Copyright licensing information: https://dumps.wikimedia.org/legal.html

All original textual content is licensed under the [GNU Free Documentation License](https://www.gnu.org/licenses/fdl-1.3.html) (GFDL)
and the [Creative Commons Attribution-Share-Alike 3.0 License](https://creativecommons.org/licenses/by-sa/3.0/).
Some text may be available only under the Creative Commons license; see the Wikimedia [Terms of Use](https://foundation.wikimedia.org/wiki/Policy:Terms_of_Use) for details.
Text written by some authors may be released under additional licenses or into the public domain.

### Citation Information

```
@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = "https://dumps.wikimedia.org"
}
```