# Dynaword: Moving from One-shot to Continuously Developed Datasets

Authors:
<!-- Current authors -->
- Kenneth Enevoldsen
- Kristian Nørgaard Jensen
- Jan Kostkan

- Peter Bjørn Jørgensen 
- Per
- Kristoffer Nielbo

<!-- 
Potential co-authors to invite
CHC:
- Marton

Alexandra
KU
SDU
Leon 
Danish Royal Library
that guys from the treebank project
someone from DDSC
someone from Huggingface
-->


# Abstract

Large-scale datasets are foundational for research and development in natural language processing and related fields, and good datasets often require multiple iterations to improve and adjust.
Despite this, we see many releases of static datasets rather than continually expanding resources, preventing community contributions and expansion. Even when a large-scale dataset sees versioned releases, the filtering and quality assurance are often done only by the team releasing the data.
And while we have seen impressive large-scale releases, these are often derived from Common Crawl or related sources, which are likely to contain copyrighted data that does not support the stated license of the release. This restricts not only the use of the data but also its derivatives, such as annotated datasets and language models.
In an attempt to remedy these shortcomings we developed Danish Dynaword, an illustrative example of how large-scale datasets can be developed. This dynaword contains more than 2x as many tokens as comparable releases, is restricted to clearly permissibly licensed data, and has seen multiple contributions across industry and research.
The dataset comes equipped with CI to ensure data format, quality, and high documentation standards, and the checks can be run in a developer-friendly environment in under 10 minutes.
Along with this release we have additionally started dynaword projects for Norwegian, Swedish, Faroese, and Icelandic.
<!-- Candidate abstract - will probably change -->

The dataset is available at: https://huggingface.co/datasets/danish-foundation-models/danish-dynaword

# Introduction

Current methods for dataset creation tackle only a small subset of the world's languages [@joshiStateFateLinguistic2020].
In this project we specifically choose to focus on the low- to mid-resource language Danish (dan). We see two reasons for doing this:

- The dynaword approach is most likely to be beneficial for low- to mid-resource languages (class 2-4; @joshiStateFateLinguistic2020), which have contributors able and willing to contribute, while high-resource languages (class 5; @joshiStateFateLinguistic2020) could likely sustain multiple dynaword projects targeting specific domains.
-  not only for Danish b

While it is in theory possible to open a PR on an existing dataset, this is rare in practice; instead, improvements on existing datasets are often published separately (see e.g. [@pascal_alie_kenneth_et_paper], [@that_guy_that_added_langauge_tag_to_a_dataset]). These derivative works rarely get as many downloads as the original.

Contrasting this approach with code development - where it is common practice to create PRs that continually improve the codebase - makes the dataset development landscape seem immature and inefficient.

<!-- 
KCE - just some loose thoughts:

Over the last couple of years multiple models or dataset released in the Nordics have retracted or not released due to concerns about the data.
This include the 
- SWEb (https://arxiv.org/html/2410.04456v1) a large scale web-data, which was never released due to the fear of legal disputes.
- Models from Danish Foundation Models (cite) that was retracted either temporarily (dfm-encoder-large) or permanently (wav2vec model) due to concern from rightholders
- Similarly, the Norwegian collosus corpus (NCC) (https://huggingface.co/datasets/NbAiLab/NCC) have had to retract a large part of the initial dataset
following a request from the media houses.

Similar situations have happened throughout Europe and it is easy to imagine that more will follow.

Building models on uncertain data foundation makes it unclear what 
Derivatives models might are thus presented with a unclear legal status. This may halt implementation or if implemented potentially lead to legal consequences.

Danish Dynaword seeks to remedy this issue by ...
 -->


<!-- 
Comparable released datasets such as the public datasets by Pleaias (Danish_PD) which consist of >300M words initally seems
impressive, but examining the Danish subsection we see that it contains an extensive degree of OCR errors if we apply a simple heuristic alpha ratio filter [cite-gopher] we see that more than 50% of the texts are included.
the ratio of words which include only alphabetic characters, we see that 71% 

 it 71% of the dataset 

https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/38
 -->

## What is a Dynaword

A dynaword is a continuously developed dataset resource intended for a variety of downstream use cases within natural language processing. Dynaword does not intend to replace existing large-scale releases such as FineWeb [@fineweb], OSCAR [@OSCAR], or HPLT [@hplt], but rather to complement them in situations where a clearly licensed dataset might be preferred. Some of these cases include:

- Clearly licensed datasets lend themselves better to derivative works, providing good starting points for permissibly licensed annotated datasets.
- The EU AI Act poses requirements on the training data used for model training.
- The EU AI Act also makes the distributor of a model responsible for copyright violations, and thus companies might prefer models derived from clearly permissible data.
<!-- probably other cases -->

### Continuous Development of Large-Scale Datasets

Cont

### Design Considerations

## Related work


### Existing approaches in Dataset development

Large projects like OSCAR [@OSCAR], HPLT [@hplt], and FineWeb [@fineweb] release iterative versions of datasets derived from Common Crawl [@commoncrawl].
These approaches make it hard for new contributors to join and silo dataset development within a few institutions. Furthermore, the focus on
Common Crawl ignores other valuable resources such as public APIs, and comes with a slew of ethical and legal concerns [@missing] which affect not only the usefulness of the datasets but also the models derived from them.
While resources such as individual datasets derived from APIs would be expensive for individual groups to collect, as each rarely offers enough data to be worth the effort, opening up the collection to a community makes these sources viable.


Opening up the development pipeline also increases transparency around dataset collection. <!-- ADD SOMETHING on inclusion here -->

<!-- Read up on fineweb!!! (I assume they do some CI) -->

Other successful open-source projects include the dependency treebank project [@dep_treebank], ...

Existing projects on openly licensed data [@elutherAI]

We note that our approach is complementary to existing projects such as FineWeb.




<!-- add stuff on data ops -->

### Danish and Scandinavian Datasets 

Lacunae of Danish [@cite]
Danish Gigaword [@dagw]
Swedish Gigaword? [@swedish]
NCC [@ncc_kummervold]


Existing benchmarks covering Scandinavian languages, such as ScandEval [@scandeval; @scandeval2] and SEB [@seb], argue that it is reasonable to evaluate jointly across the Scandinavian languages.

# Methods

## Continuous Integration 

Our approach to continuous integration: how to submit, and what we test for.
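
To make this concrete, below is a minimal sketch of what one such check could look like as a pytest-style test using the Hugging Face `datasets` library. The required column names are illustrative assumptions, not the repository's actual schema.

```python
# Minimal sketch of a CI data-format check (pytest-style).
# REQUIRED_COLUMNS is an assumed schema for illustration.
from datasets import load_dataset

REPO_ID = "danish-foundation-models/danish-dynaword"
REQUIRED_COLUMNS = {"id", "text", "source", "license"}  # assumed, not the real schema


def test_schema_and_nonempty_text():
    # Stream rows so the check stays fast enough for CI.
    ds = load_dataset(REPO_ID, split="train", streaming=True)
    for i, row in enumerate(ds):
        assert REQUIRED_COLUMNS.issubset(row.keys()), f"missing columns in row {i}"
        assert row["text"].strip(), f"empty text in row {i}"
        if i >= 100:  # bound the runtime; a full run would scan every shard
            break
```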


# Results 

## Dataset collection

Current collection. 

| Source          | Date       | Domain         | License | Size (tokens) |
| --------------- | ---------- | -------------- | ------- | ---- |
| **Legal**       |            |                |         |      |
| Retsinformation | date range | Legal, Written |         | 188M |
| ...             |            |                |         |      |
| **Total**       |            |                |         |      |


For a description of each dataset we refer to the public repository. 
<!-- we could also include -->

# Conclusion

## Dataset delivery
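
The dataset is delivered through the Hugging Face Hub. As a minimal usage sketch (assuming the `datasets` library; the split name and the `text` column are treated here as assumptions):

```python
from datasets import load_dataset

# Stream the corpus to avoid downloading every shard up front.
ds = load_dataset(
    "danish-foundation-models/danish-dynaword",
    split="train",
    streaming=True,
)
first = next(iter(ds))
print(first["text"][:200])  # assumes a "text" column
```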

# Limitations

- Is Danish too limited: should we consider multilingual sources (Scandinavian, Germanic, English)?

- Size:
  - The size is currently manageable, but if the dataset grows too large, development becomes problematic.
  - This is still far smaller than what could be extracted from Common Crawl.

- Only Danish: While developing CI for datasets is by no means new [@missing], doing so for open pre-training datasets in a collaborative fashion has not been tested on a larger scale. Once the approach has been validated, we plan to host a collaboration together with Hugging Face to develop these dataset sources.

- Hugging Face datasets as a development platform: Throughout this work it was clear to many of the developers that minor changes (e.g. filtering out a few bad examples) were both hard to create PRs for and hard to review, often requiring the reviewer to simply trust that the contributor did what was stated in the commit message (see the sketch after this list). While previous projects have tackled this issue using human-readable formats [@dep_treebank], given the scale of the dataset this would quickly become inefficient.
This lack of clarity increases the likelihood of dataset attacks such as dataset poisoning [@missing]. We expect to see both interface development and software development to detect and prevent such attacks.

- Machine generated content within training data: Not 

- Often we are interested in high-quality data when training an LLM; however, the presented dynaword only performs a minimal level of cleaning. While this is a deliberate decision, as different modeling choices might warrant different cleaning approaches, it could leave a substantial amount of post-processing to the user of the dataset.
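
As a hypothetical illustration of the review problem above: a contributor-side cleanup, such as an alpha-ratio heuristic in the style of Gopher's quality filters, is only a few lines of code, yet the resulting change to the stored data shards is effectively unreviewable. The threshold and column name below are assumptions for illustration.

```python
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-dynaword", split="train")


def has_sufficient_alpha_ratio(example, threshold=0.7):  # illustrative threshold
    # Share of whitespace-separated words consisting purely of letters.
    words = example["text"].split()
    ratio = sum(w.isalpha() for w in words) / max(len(words), 1)
    return ratio >= threshold


ds_clean = ds.filter(has_sufficient_alpha_ratio)
```

The change itself is trivial, but the on-disk diff between `ds` and `ds_clean` is a rewrite of binary shards, so a reviewer sees only the commit message rather than the removed rows.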

# Ethical and Environmental Considerations

Environmental:
- A common codebase leads to less duplication of datasets and reduces the storage required.
- Continual CI runs on large datasets could be a concern. Currently our tests use a total of XXX CO2-eq (estimated using codecarbon; a sketch follows below). However, we have already seen people train [@fineweb] and evaluate LLMs to approximate dataset quality; such workflows could quickly increase CO2 consumption.
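
A minimal sketch of how such an estimate can be produced with codecarbon (`run_ci_checks` is a hypothetical stand-in for the test suite):

```python
from codecarbon import EmissionsTracker


def run_ci_checks():
    ...  # hypothetical stand-in for the dataset test suite


tracker = EmissionsTracker(project_name="danish-dynaword-ci")
tracker.start()
run_ci_checks()
emissions_kg = tracker.stop()  # estimated kg CO2-eq for the tracked block
print(f"Estimated emissions: {emissions_kg:.4f} kg CO2-eq")
```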





## Additional content

Comparison table


|                        | Size (tokens) | Sufficient Documentation | Data availability | Legal Status    | Quality        |
| ---------------------- | ---- | ------------------------ | ----------------- | --------------- | -------------- |
| Danish Dynaword (Ours) | 3.5B | Replicable^              | Open Access       | Openly Licensed | Mixed (high)   |
| Danish Gigaword*       |      | Documentary              | Open Access       | Openly Licensed | Mixed (high)   |
| Common Corpus (dan)    |      | Replicable               | Open Access       | Openly Licensed | OCR (low)      |
| Fineweb (dan)          |      | Replicable               | Open Access       |                 | Mixed (medium) |



<!-- 
Could we create an interesting figure of this Marton? See figure 1
better notion of quality? Potentially a bit more objective?

-->

*The Danish Gigaword subsection included in Danish Dynaword, i.e. the subsection that is permissibly licensed.
^Some datasets are derived from Danish Gigaword; some of these subsections are not (currently) replicable.

This follows the scheme from figure 1 (https://arxiv.org/abs/2501.08365)

<!--
Add a token-count comparison:
Common Corpus (DA) -
Gigaword (DA) - Open
M-Fineweb (DA) -
-->