
Dynaword: Moving from One-shot to Continuously Developed Datasets

Authors:

  • Kenneth Enevoldsen

  • Kristian Nørgaard Jensen

  • Jan Kostkan

  • Peter Bjørn Jørgensen

  • Per

  • Kristoffer Nielbo

Abstract

Large-scale datasets are foundational for research and development in natural language processing and related fields, and good datasets often require multiple iterations to improve and adjust. Despite this, we see many releases of static datasets rather than continually expanding resources, which prevents community contributions and expansion. Even when a large-scale dataset sees versioned releases, filtering and quality assurance are often done only by the team releasing the data. And while we have seen impressive large-scale releases, these are often derived from Common Crawl or related sources, which are likely to contain copyrighted data that does not support the stated license of the release. This restricts not only the use of the data but also its derivatives, such as annotated datasets and language models. In an attempt to remedy this shortcoming we developed Danish Dynaword, an illustrative example of how large-scale datasets can be developed continually. This dynaword contains more than 2x as many tokens as comparable releases, is restricted to strictly permissibly licensed data, and has seen multiple contributions across industry and research. The dataset comes equipped with CI to ensure data format, quality, and high documentation standards, which can be run in a developer-friendly environment in under 10 minutes. Along with this release we have additionally started dynaword projects for Norwegian, Swedish, Faroese, and Icelandic.

The dataset is available at: https://huggingface.co/datasets/danish-foundation-models/danish-dynaword
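
A minimal usage sketch is shown below; it assumes the `datasets` library is installed, and the "text" column name is an assumption based on the repository's documented schema:

```python
# Minimal usage sketch; assumes `pip install datasets`.
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-dynaword", split="train")
print(ds)                   # overview: columns and number of rows
print(ds[0]["text"][:200])  # preview the first document ("text" column assumed)
```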

Introduction

Current methods for dataset creation tackle only a small subset of the world's languages [@joshiStateFateLinguistic2020]. In this project we specifically choose to focus on the low- to mid-resource language Danish (dan). We see two reasons for doing this:

  • The dynaword approach is most likely to be beneficial for low- to mid-resource languages (class 2-4; @joshiStateFateLinguistic2020), which have contributors able and willing to contribute, whereas high-resource languages (class 5; @joshiStateFateLinguistic2020) could likely sustain multiple dynaword projects targeting specific domains.
  • The approach is relevant not only for Danish; the infrastructure generalizes to other languages, as reflected in the dynaword projects started for Norwegian, Swedish, Faroese, and Icelandic.

While it is in theory possible to open a PR on an existing dataset, this practice is rare; instead we often see improvements published as new derivative datasets (see e.g. [@pascal_alie_kenneth_et_paper], [@that_guy_that_added_langauge_tag_to_a_dataset]). These derivative works rarely get as many downloads as the original.

Contrasting this with code development, where it is common practice to create PRs that continually improve the codebase, the dataset development landscape seems immature and inefficient.

What is a Dynaword

A dynaword is a continuously developed dataset resource intended for a variety of downstream use-cases within natural language processing. Dynaword does not intend to replace existing large-scale releases such as FineWeb [@fineweb], OSCAR [@OSCAR], or HPLT [@hplt], but rather to complement them in situations where a clearly licensed dataset might be preferred. Some of these cases include:

  • Clearly licensed datasets lend themselves better to derivative works, providing good starting points for permissibly licensed annotated datasets.
  • The EU AI Act also poses requirements on the data used for model training.
  • The EU AI Act makes the distributor of a model responsible for copyright violations, and thus companies might prefer models derived from clearly permissible data.

Continuous Development of Large-Scale Datasets

Cont

Design Considerations

Related work

Existing approaches in Dataset development

Large projects like OSCAR [@OSCAR], HPLT [@hplt], and FineWeb [@fineweb] release iterative versions of datasets derived from Common Crawl [@commoncrawl]. These approaches make it hard for contributors to join, and silo dataset development within a few institutions. Furthermore, the focus on Common Crawl ignores other valuable resources such as public APIs, and comes with a slew of ethical and legal concerns [@missing] which affect not only the usefulness of the datasets but also the models derived from them. While such resources, e.g. individual datasets derived from APIs, would be expensive for individual groups to collect, as they rarely offer enough data to be worth the time, opening up this approach to a community makes these sources viable.

Opening up the development pipeline also increases openness around the dataset collection. ADD SOMETHING on inclusion here.

Read up on fineweb!!! (I assume they do some CI)

Other successful open-source projects: the dependency treebank project [@dep_treebank], ...

Existing projects on openly licensed data [@elutherAI]

We note that our approach is complementary to existing projects such as FineWeb.

Danish and Scandinavian Datasets

Lacunae of Danish [@cite], Danish Gigaword [@dagw], Swedish Gigaword? [@swedish], NCC [@ncc_kummervold]

Existing benchmarks covering Scandinavian languages, such as ScandEval [@scandeval; @scandeval2] and SEB [@seb], argue that it is reasonable to evaluate models jointly across the Scandinavian languages given their similarity.

Methods

Continuous Integration

This section describes our approach to continuous integration: how to submit, and what we test for.
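
As a minimal sketch of the kind of checks such a CI can run (the concrete tests live in the dataset repository; the column names "id", "text", and "source" used here are assumptions), consider the following pytest-style tests:

```python
# Illustrative pytest sketch of dataset CI checks; column names are assumptions.
import pytest
from datasets import load_dataset

@pytest.fixture(scope="session")
def ds():
    # load the full training split once per test session
    return load_dataset("danish-foundation-models/danish-dynaword", split="train")

def test_required_columns(ds):
    # every document must carry an id, its text, and a source label
    assert {"id", "text", "source"} <= set(ds.column_names)

def test_no_empty_documents(ds):
    assert all(text.strip() for text in ds["text"])

def test_unique_ids(ds):
    assert len(set(ds["id"])) == ds.num_rows
```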

Results

Dataset collection

Current collection.

| Source | Date | Domain | License | Size |
| --- | --- | --- | --- | --- |
| Legal | | | | |
| Retsinformation | date range | Legal, Written | | 188M |
| ... | | | | |
| Total | | | | |

For a description of each dataset we refer to the public repository.
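
A per-source overview like the table above can be reproduced directly from the dataset; the following is a hypothetical helper, again assuming a "source" column as in the earlier sketches:

```python
# Hypothetical helper: count documents per source in the collection.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-dynaword", split="train")
docs_per_source = Counter(ds["source"])  # "source" column assumed
for source, n_docs in docs_per_source.most_common():
    print(f"{source}: {n_docs} documents")
```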

Conclusion

Dataset delivery

Limitations

  • Is Danish too limited: should we consider multilingual sources (Scandinavian, Germanic, English)?

  • Size:

    • The size is currently manageable; if it grows too large, development becomes problematic.
    • This is still far smaller than what could be extracted from Common Crawl.
  • Only Danish: While developing CI for datasets is by no means new [@missing], doing so for open pre-training datasets in a collaborative fashion has not been tested at a larger scale. Once the approach has been validated we plan to host a collaboration together with Hugging Face to develop these dataset sources.

  • Hugging Face datasets as a development platform for datasets: Throughout this work it was clear to many of the developers that minor changes (e.g. filtering out a few bad examples) were both hard to create PRs for and hard to review, often requiring the reviewer to simply trust that the contributor did what was stated in the commit message (see the review sketch after this list). While previous projects have tackled this issue using human-readable formats [@dep_treebank], due to the scale of the dataset this would quickly become inefficient. This lack of clarity increases the likelihood of dataset attacks such as dataset poisoning [@missing]. We expect to see both interface development and software development to detect and prevent such attacks.

  • Machine generated content within training data: Not

  • Often we are interested in high-quality data when training an LLM; however, the presented dynaword performs only a minimal level of cleaning. This is a deliberate decision, as different model choices might warrant different cleaning approaches, but it can leave a substantial amount of post-processing to the user of the dataset.
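
To illustrate the review problem mentioned above, the following is a minimal sketch of how a reviewer might compare a proposed revision against the main branch; the PR reference "refs/pr/42" and the "id" column are assumptions for illustration:

```python
# Minimal review sketch: compare a PR revision of the dataset against main,
# since parquet diffs are not human-readable.
from datasets import load_dataset

REPO = "danish-foundation-models/danish-dynaword"

main = load_dataset(REPO, split="train", revision="main")
pr = load_dataset(REPO, split="train", revision="refs/pr/42")  # hypothetical PR

print(f"rows: main={main.num_rows:,} pr={pr.num_rows:,}")

# Which documents did the PR remove? (coarse, set-based check; "id" assumed)
removed_ids = set(main["id"]) - set(pr["id"])
print(f"{len(removed_ids)} documents removed, e.g.:")
for doc_id in sorted(removed_ids)[:5]:
    print(" ", doc_id)
```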

Ethical and Environmental Considerations

Environmental:

  • A common codebase leads to less duplication of datasets and reduces the storage required.
  • Continually running CI on large datasets could be a concern. Currently our tests use a total of XXX CO2-eq (estimated using codecarbon; see the sketch below). However, we have already seen projects training [@fineweb] and evaluating LLMs to approximate dataset quality; such workflows could quickly increase CO2 consumption.
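
As a sketch of how such estimates can be produced, the codecarbon `EmissionsTracker` can wrap the test suite; the wrapped function below is a placeholder, not the actual CI entry point:

```python
# Sketch: measure the CO2-eq of a CI run with codecarbon.
from codecarbon import EmissionsTracker

def run_test_suite() -> None:
    """Placeholder for the actual CI tests."""

tracker = EmissionsTracker(project_name="danish-dynaword-ci")
tracker.start()
try:
    run_test_suite()
finally:
    emissions_kg = tracker.stop()  # returns emissions in kg CO2-eq

print(f"CI run emitted {emissions_kg:.6f} kg CO2-eq")
```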

Additional content

Comparison table

| Dataset | Size | Sufficient Documentation | Data availability | Legal Status | Quality |
| --- | --- | --- | --- | --- | --- |
| Danish Dynaword (Ours) | 3.5B | Replicable^ | Open Access | Openly Licensed | Mixed (high) |
| Danish Gigaword* | | Documentary | Open Access | Openly Licensed | Mixed (high) |
| Common Corpus (dan) | | Replicable | Open Access | Openly Licensed | OCR (low) |
| Fineweb (dan) | | Replicable | Open Access | | Mixed (medium) |

*The Danish Gigaword subsection included in Danish Dynaword, i.e. the subsection that is permissibly licensed. ^Some datasets are derived from Danish Gigaword; some of these subsections are not (currently) replicable.

This follows the scheme from figure 1 (https://arxiv.org/abs/2501.08365)

Add a comparison of the number of tokens: Common Corpus (DA) - Gigaword (DA) - Open M-Fineweb (DA)