---
license: cc-by-sa-4.0
dataset_info:
- config_name: default
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 203133024
    num_examples: 109486
  - name: validation
    num_bytes: 11424453
    num_examples: 6173
  - name: test
    num_bytes: 11808744
    num_examples: 6219
  download_size: 143418920
  dataset_size: 226366221
- config_name: sentences
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 202232488.28022403
    num_examples: 1572268
  - name: validation
    num_bytes: 11383118.592627235
    num_examples: 88647
  - name: test
    num_bytes: 11756845.828945814
    num_examples: 90769
  download_size: 149698561
  dataset_size: 225372452.70179707
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
- config_name: sentences
  data_files:
  - split: train
    path: sentences/train-*
  - split: validation
    path: sentences/validation-*
  - split: test
    path: sentences/test-*
---

# Dataset Card for "wiki40b-da-clean"


### Dataset Summary

This dataset is a slightly modified and filtered version of the [wiki40b-da dataset](https://huggingface.co/datasets/alexandrainst/wiki40b-da/), which is a fork of the [wiki40b dataset on the Hugging Face Hub](https://huggingface.co/datasets/wiki40b).

The dataset contains two subsets; the original columns "wikidata_id" and "version_id" have been removed from both:
- "**text**" (default): the filtered text of the Wikipedia paragraphs, with formatting removed (the `_START_ARTICLE_` and `_START_PARAGRAPH_` markers and `\n` characters).
- "**sentences**": the individual sentences of the "text" subset, split after sentence-final punctuation (`!`, `?`, `.`) followed by a space and a capital letter, and filtered to sentences of more than 5 and fewer than 100 words (see the sketch below).

The dataset is curated so that the "text" config can be used for masked next token prediction (MNTP) and the "sentences" config for SimCSE, in the context of training encoder and decoder models.

The training, validation and test splits are the original ones.
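
Both configs can be loaded with the 🤗 `datasets` library, as in the sketch below; the repository id is a placeholder and should be replaced with the full `namespace/name` path of this dataset on the Hub:

```python
from datasets import load_dataset

repo_id = "wiki40b-da-clean"  # placeholder: replace with the full Hub repository id

# Paragraph-level text (default config), e.g. for MNTP-style training.
text_ds = load_dataset(repo_id, "default", split="train")

# Sentence-level text, e.g. for SimCSE-style contrastive training.
sentences_ds = load_dataset(repo_id, "sentences", split="train")

print(text_ds[0]["text"][:80])
print(sentences_ds[0]["text"])
```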


### Languages

The dataset is available in Danish (`da`).


## Dataset 

**text** (default)

An example from the text dataset looks as follows.
```
{
 'text': "Tekstiler havde mange forskellige formål i oldtidens Ægypten, og blev brugt af (...)",
}
```

**sentences**

An example from the sentences dataset looks as follows.
```
{
 'text': "Det tog tre måneder, før hørren kunne høstes.",
}
```

## Additional Information

### Dataset Curators

[Jesper Alkestrup](https://github.com/jalkestrup) from [The Tech Collective](https://thetechcollective.eu/) filtered and uploaded the dataset to the Hugging Face Hub.

Thanks to [Dan Saattrup Nielsen](https://saattrupdan.github.io/) from [The Alexandra Institute](https://alexandra.dk/) for uploading the [wiki40b-da dataset](https://huggingface.co/datasets/alexandrainst/wiki40b-da/).

### Licensing Information

The dataset is licensed under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).