Tasks: Text Generation
Formats: parquet
Sub-tasks: language-modeling
Languages: Danish
Size: 10M - 100M
restructure-datasets #11
opened by KennethEnevoldsen
This view is limited to 50 files because it contains too many changes.
- .gitignore +0 -21
- .vscode/data/memo/tmp/Corpus-v1.1 +0 -1
- .vscode/settings.json +2 -2
- CHANGELOG.md +0 -159
- CONTRIBUTING.md +4 -95
- README.md +157 -488
- data/adl/adl.md +40 -82
- data/adl/adl.parquet +2 -2
- data/adl/descriptive_stats.json +0 -9
- data/adl/images/dist_document_length.png +0 -3
- data/ai-aktindsigt/ai-aktindsigt.md +0 -85
- data/ai-aktindsigt/create.py +0 -64
- data/ai-aktindsigt/descriptive_stats.json +0 -9
- data/ai-aktindsigt/images/dist_document_length.png +0 -3
- data/botxt/botxt.md +40 -77
- data/botxt/botxt.parquet +2 -2
- data/botxt/descriptive_stats.json +0 -9
- data/botxt/images/dist_document_length.png +0 -3
- data/cellar/cellar.md +0 -77
- data/cellar/cellar.parquet +0 -3
- data/cellar/create.py +0 -60
- data/cellar/descriptive_stats.json +0 -9
- data/cellar/images/dist_document_length.png +0 -3
- data/dannet/dannet.md +63 -89
- data/dannet/dannet.parquet +2 -2
- data/dannet/descriptive_stats.json +0 -9
- data/dannet/images/dist_document_length.png +0 -3
- data/danske-taler/create.py +0 -314
- data/danske-taler/danske-taler.log +0 -167
- data/danske-taler/danske-taler.md +0 -135
- data/danske-taler/danske-taler.parquet +0 -3
- data/danske-taler/descriptive_stats.json +0 -9
- data/danske-taler/images/dist_document_length.png +0 -3
- data/depbank/depbank.md +33 -97
- data/depbank/depbank.parquet +2 -2
- data/depbank/descriptive_stats.json +0 -9
- data/depbank/images/dist_document_length.png +0 -3
- data/domsdatabasen/create.py +0 -344
- data/domsdatabasen/descriptive_stats.json +0 -9
- data/domsdatabasen/domsdatabasen.md +0 -119
- data/domsdatabasen/domsdatabasen.parquet +0 -3
- data/domsdatabasen/images/dist_document_length.png +0 -3
- data/enevaeldens_nyheder/create.py +0 -96
- data/enevaeldens_nyheder/descriptive_stats.json +0 -9
- data/enevaeldens_nyheder/enevaeldens_nyheder.log +0 -9
- data/enevaeldens_nyheder/enevaeldens_nyheder.md +0 -172
- data/enevaeldens_nyheder/enevaeldens_nyheder.parquet +0 -3
- data/enevaeldens_nyheder/images/coverage-of-newspapers.jpeg +0 -3
- data/enevaeldens_nyheder/images/dist_document_length.png +0 -3
- data/enevaeldens_nyheder/images/distribution-pr-year.jpeg +0 -3
.gitignore
CHANGED
@@ -1,25 +1,4 @@
 # Python
 __pycache__/*
 *.pyc
-
-# cSpell
-cspell.json
-
-# debugfile
-.vscode/launch.json
-
-# tmp files
 tmp.py
-tmp.png
-
-# MacOS
-.DS_Store
-
-# tmp files
-tmp.py
-
-## to allow temporary data drops without pushing it to the hub
-data/*/tmp/*
-
-## node_modules
-**/node_modules/
.vscode/data/memo/tmp/Corpus-v1.1
DELETED
@@ -1 +0,0 @@
-Subproject commit 7205897f1f3ee65e296072f3e96d49488e54e8ce
.vscode/settings.json
CHANGED
@@ -1,7 +1,7 @@
 {
     "python.testing.pytestArgs": [
-        "
+        "."
     ],
     "python.testing.unittestEnabled": false,
-    "python.testing.pytestEnabled": true
+    "python.testing.pytestEnabled": true
 }
CHANGELOG.md
DELETED
@@ -1,159 +0,0 @@
-
-# Changelog
-
-All notable changes to this project will be documented in this file.
-
-The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
-
-## [v1.2.10] - 2025-08-18
-
-### Changed
-
-- Updated the wiki, wikibooks, wikisource datasets.
-- Changed `wiki` to `wikipedia`
-- Fixed rounding error in average token count
-- Improved the speed of token counting
-
-### Added
-
-- Added `create.py` for wiki, wikibooks, wikisource.
-
-## [v1.2.9] - 2025-08-05
-
-### Docs
-
-- Average document length now uses tokens instead of characters
-- Added vizualization for checking document length in sub datasets
-- Changes to `*/descriptive_stats.json`:
-  - The object no longer includes revision.
-  - Now include character-level metrics along with minimum and maximum length. Removed average document length as it is computable from existing metrics.
-- Removed per-dataset histograms from the main readme. The goal is to avoid loading the entire dataset when updating the readme. This should make it easier for contributors.
-- Simplifying PR workflow in `contributing.md`
-
-### CI
-- Fixes bug causing `make update-descriptive-stats` to fail when not having a linear commit history. The script now skips a dataset update based on revision, but only if the `descriptive_stats.json` file does not exist. To ensure that the main readme is always up to date, we change the make command always to update it.
-
-## [v1.2.8] - 2025-08-05
-
-### Added
-
-- Added dataset: Enevældens Nyheder Online (`enevaeldens_nyheder`). This brings us to >5B tokens!
-
-## [v1.2.7] - 2025-07-22
-
-### Added
-
-- Added dataset: Grundtvigs Works (`grundtvig`)
-- Added bias and risk section to the README
-
-## [v1.2.6] - 2025-07-21
-
-### Added
-
-- Added two table to get an overview of data by license and domain
-
-### Changed
-
-- Dataset overview table now appears in a drop down menu
-
-## [v1.2.5] - 2025-07-08
-
-### Added
-
-- Added the `domsdatabasen` dataset.
-
-## [v1.2.4] - 2025-07-08
-
-### Added
-
-- Add a plot for tokens over time to see how the dataset develops
-- Minor documentation improvements in the main readme
-
-### Changed
-
-- Rename `scrape_hovedstaden` to `health_hovedstaden` avoid confusion with its pretty name
-
-## [v1.2.3] - 2025-06-30
-
-### Added
-
-- Added a `create.py` script for the `retsinformationdk` dataset.
-  - Resulted in a boost in tokens and documents
-
-### Changed
-
-- Did a full stats update on datasets, resulting in minor changes in a few datasheets
-
-## [v1.2.2] - 2025-06-26
-
-### Added
-
-- Added the new `scrape_hovedstaden` dataset.
-- Added a new domain type `Medical`.
-
-## [v1.2.1] - 2025-06-24
-
-### Fixed
-
-- Updated the danske-taler dataset. This version fixes a problem where the texts from the API contains no newlines, and where there should have been newline there is now space between words and punctuation.
-
-## [v1.2.0] - 2025-06-23
-
-### Fixed
-
-- Updated the memo dataset, this second version fixed previous [issues](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/67) with the download and processing of the Danish Memo which cut off the text leading to notably smaller documents.
-
-## [v1.1.1] - 2025-06-16
-
-### Added
-
-- Added tests to ensure that 1 tokens document don't appear in the data. This filtered out 0 documents in total.
-
-## [v1.1.0] - 2025-04-29
-
-### Added
-
-- Added multiple quality controls
-  - Removed all empty string
-  - Removed duplicates across within datasets
-- Restructured datasets
-  - Removed columns from the dataset to make the structure more lightweight, these include domain, metadata, and license. These have been moved to the individual datasheets. It is still possible to filter for license by using the dataset name
-  - Added column for number of tokens
-- For developers
-  - Restructered CI codebase substantially
-  - Added `DataSheet` to make CI for convenient
-  - factored out plots and tables
-
-### Docs
-
-- Sorted overview table
-- Minor changes to dataset documentation
-
-
-## [v1.0.12] - 2025-05-08
-
-### Added
-
-- Added new datasets
-  - Norwegian Colossal Corpus (newspapers) (~191.08K tokens)
-  - Norwegian Colossal Corpus (books) (~531.97M tokens)
-  - Norwegian Colossal Corpus (maalfrid) (~29.26M tokens)
-  - Norwegian Colossal Corpus (parliament) (~338.87M tokens)
-
-## [v1.0.11] - 2025-03-29
-
-### Added
-
-- Added new datasets (more than 1B tokens 🎉)
-  - AI Aktindsigt
-  - Cellar
-  - Danske Taler
-  - Miljøportalen
-  - EUR-Lex SUM
-  - Finansministeriets Udgivelser
-
-### Docs
-
-- Sorted main table in readme
-- Added Changelog
-- Minor changes to dataset documentation
CONTRIBUTING.md
CHANGED
@@ -3,9 +3,8 @@
 A huggingface datasets repository is a GitHub repository like any other. You can simply download it like so:
 
 ```bash
-git clone https://huggingface.co/datasets/danish-foundation-models/danish-
-cd danish-
-git lfs pull # download large files to ensure that tests works
+git clone https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2
+cd danish-gigaword-2
 ```
 
 You can the work with the dataset locally like so:
@@ -13,99 +12,9 @@
 ```py
 from datasets import load_dataset
 
-name = "../." # instead of "danish-foundation-models/danish-
+name = "../." # instead of "danish-foundation-models/danish-gigaword-2"
 dataset = load_dataset("../.", split="train")
 # make transformations here
 ```
 
-> Note: While it is local Huggingface still uses a cache, therefore you might need to reset it after changes have been made to see that it works correctly. You can do this by deleting the cached files which you can locate using `dataset.cache_files`.
-
-## Adding a new dataset
-
-To add a new dataset you will have to create a folder under `data/{dataset_name}/`, which should look as follows:
-
-```
-data/dataset_name
-|- dataset_name.md
-|- dataset_name.parquet
-|- create.py # optional
-```
-
-The create.py is an optional python script that allow you to recreate the dataset from the source. This is typically to allow us to reproduce the
-dataset with fixes or update the dataset to the latest version using an API.
-
-## Installing dependencies
-
-This repo comes with a few dependencies you need to install to make this run. It uses a [makefile](https://opensource.com/article/18/8/what-how-makefile) to run commands and a [uv](https://docs.astral.sh/uv/) for package management. Once you have uv installed you can install the dependencies using:
-
-```bash
-make install
-```
-
-## Running dataset tests
-
-This dataset is special as it comes with a test suite, e.g. testing in the ids are unique and that the format is consistent. You can run the suite using
-
-```bash
-make test
-```
-
-## Submitting a PR
-
-Creating a PR on Huggingface is a bit different from creating one on Github.
-
-1) Go to the community tab on huggingface press *new pull request* and choose *on your machine*. Specify the title of the your PR. Then you can simply:
-
-```bash
-git checkout -b {new branch name}
-# make your changes here
-
-# push to hub
-# you might need to first login:
-# huggingface-cli login
-git push origin HEAD:refs/pr/{PR NUMBER}
-```
-Where HEAD refers to the current branch.
-
-Before you make the PR do be sure to make sure that you have completed the checklist below.
-
-### Making changes to an existing PR
-
-As a contributor you might need to develop on an existing branch. To do so you you
-```bash
-# fetch and checkout existing branch:
-git fetch origin refs/pr/{PR NUMBER}:pr/{PR NUMBER}
-git checkout pr/{PR NUMBER}
-# make your changes here
-
-# push changes
-```
-
-### Checklist
-
-- [ ] I have run the test suite using `make test` and all tests pass
-- [ ] I have added/changed a dataset:
-  - [ ] I have updated descriptive statistics using `make update-descriptive-statistics`
-  - [ ] I have bumped the version use `make bump-version`
-  - [ ] If I have added a `create.py` script I have added the [script dependencies](https://docs.astral.sh/uv/guides/scripts/#declaring-script-dependencies) required to run that script.
-- [ ] I have updated the CHANGELOG.md if appropriate
-
-
-### Examples of Previous PRs
-To see example PR you can see the following:
-
-- [Restructuring columns in the dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/11)
-- [Adding a new dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/15)
-- Updated [dataset description and metadata](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/20)
-
-## Frequently asked questions
-
-### Do you accept synthetic dataets
-
-Yes we do generally accept synthetic datasets since it will likely be a promising research direction for low- to mid-resource languages.
-However, you should be aware that synthetic dataset will probably require a more detailed examination and description.
-We will for instance examine the quality of the synthetic subset and whether the model used for the creation permits resharing of the synthetic data under permissible licenses.
-
-### Do you accept non-Danish data
-
-Generally this repository is intended for Danish text, however quite broadly defined. For instance, we do accept data containing [code-switching](https://www.google.com/search?client=safari&rls=en&q=code+switching&ie=UTF-8&oe=UTF-8) and historical Danish text.
+> Note: While it is local Huggingface still uses a cache, therefore you might need to reset it after changes have been made to see that it works correctly. You can do this by deleting the cached files which you can locate using `dataset.cache_files`.
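The note kept in the new CONTRIBUTING.md explains that a local checkout is still read through the Hugging Face cache, and that stale cache files can be located via `dataset.cache_files`. The snippet below is a minimal sketch of that clean-up step, not part of the diff; it only assumes the `../.` checkout path used in the guide above.

```python
from pathlib import Path

from datasets import load_dataset

# Load the local checkout as described in CONTRIBUTING.md.
dataset = load_dataset("../.", split="train")

# After editing the parquet files, remove the stale cache files so the
# next load_dataset call re-reads the modified data instead of the cache.
for cache_file in dataset.cache_files:
    Path(cache_file["filename"]).unlink(missing_ok=True)
```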
README.md
CHANGED
@@ -1,85 +1,10 @@
 ---
-
-- no-annotation
-language_creators:
-- crowdsourced
-language:
-- da
-license: cc0-1.0
-multilinguality:
-- monolingual
-source_datasets:
-- original
-task_categories:
-- text-generation
-task_ids:
-- language-modeling
-tags:
-- text-corpus
-- continual-development
-- community-collaboration
-pretty_name: Danish Dynaword
+license: other
 configs:
 - config_name: default
   data_files:
   - split: train
-    path: data/*/*.parquet
-- config_name: ai-aktindsigt
-  data_files:
-  - split: train
-    path: data/ai-aktindsigt/*.parquet
-- config_name: cellar
-  data_files:
-  - split: train
-    path: data/cellar/*.parquet
-- config_name: enevaeldens_nyheder
-  data_files:
-  - split: train
-    path: data/enevaeldens_nyheder/*.parquet
-- config_name: grundtvig
-  data_files:
-  - split: train
-    path: data/grundtvig/*.parquet
-- config_name: danske-taler
-  data_files:
-  - split: train
-    path: data/danske-taler/*.parquet
-- config_name: ncc_books
-  data_files:
-  - split: train
-    path: data/ncc_books/*.parquet
-- config_name: ncc_newspaper
-  data_files:
-  - split: train
-    path: data/ncc_newspaper/*.parquet
-- config_name: ncc_maalfrid
-  data_files:
-  - split: train
-    path: data/ncc_maalfrid/*.parquet
-- config_name: ncc_parliament
-  data_files:
-  - split: train
-    path: data/ncc_parliament/*.parquet
-- config_name: eur-lex-sum-da
-  data_files:
-  - split: train
-    path: data/eur-lex-sum-da/*.parquet
-- config_name: miljoeportalen
-  data_files:
-  - split: train
-    path: data/miljoeportalen/*.parquet
-- config_name: fm-udgivelser
-  data_files:
-  - split: train
-    path: data/fm-udgivelser/*.parquet
-- config_name: memo
-  data_files:
-  - split: train
-    path: data/memo/*.parquet
-- config_name: opensubtitles
-  data_files:
-  - split: train
-    path: data/opensubtitles/*.parquet
+    path: 'data/*/*.parquet'
 - config_name: retsinformationdk
   data_files:
   - split: train
@@ -152,290 +77,97 @@
   data_files:
   - split: train
     path: data/synne/*.parquet
-- config_name:
+- config_name: wiki
   data_files:
   - split: train
-    path: data/
-- config_name: nordjyllandnews
-  data_files:
-  - split: train
-    path: data/nordjyllandnews/*.parquet
+    path: data/wiki/*.parquet
 - config_name: relig
   data_files:
   - split: train
     path: data/relig/*.parquet
-
-
-
-
-
-
-
-
-
-
-
-
+annotations_creators:
+- no-annotation
+language_creators:
+- crowdsourced
+language:
+- da
+multilinguality:
+- monolingual
+source_datasets:
+- original
+task_categories:
+- text-generation
+task_ids:
+- language-modeling
+pretty_name: Danish Gigaword
 language_bcp47:
 - da
 - da-bornholm
 - da-synnejyl
 ---
 
-
-readme structure is inspired by:
-https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
--->
-
-
-# 🧨 Danish Dynaword
-
-
-<!-- START README TABLE -->
-| | |
-| ------------ | --------------- |
-| **Version** | 1.2.10 ([Changelog](/CHANGELOG.md)) |
-| **Language** | dan, dansk, Danish |
-| **License** | Openly Licensed, See the respective dataset |
-| **Models** | For model trained used this data see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
-| **Contact** | If you have question about this project please create an issue [here](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions) |
+# Danish Gigaword 2
 
+*Version*: 2.0.0
 
-
-<!-- END README TABLE -->
+*License*: See the respective dataset
 
 ## Table of Contents
-- [
+- [Danish Gigaword 2](#danish-gigaword-2)
 - [Table of Contents](#table-of-contents)
 - [Dataset Description](#dataset-description)
 - [Dataset Summary](#dataset-summary)
 - [Loading the dataset](#loading-the-dataset)
-- [Languages](#languages)
-- [Domains](#domains)
-- [Licensing](#licensing)
 - [Dataset Structure](#dataset-structure)
 - [Data Instances](#data-instances)
 - [Data Fields](#data-fields)
 - [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
-- [Curation Rationale](#curation-rationale)
-- [Annotations](#annotations)
 - [Source Data](#source-data)
-
-- [
-- [Contributing to the dataset](#contributing-to-the-dataset)
-- [Citation Information](#citation-information)
-- [License information](#license-information)
-- [Personal and Sensitive Information](#personal-and-sensitive-information)
-- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
-- [Notice and takedown policy](#notice-and-takedown-policy)
+- [Additional Information](#additional-information)
+- [Citation Information](#citation-information)
 
 ## Dataset Description
 
-
-- **Number of samples**: 5.60M
-- **Number of tokens (Llama 3)**: 5.88B
-- **Average document length in tokens (min, max)**: 1.05K (2, 9.81M)
-<!-- END-DESC-STATS -->
-
+This is intended as a second version of the Danish Gigaword corpus. It is intended to be continually updated with new data sources. This is currently a work in progress.
 
 ### Dataset Summary
 
-The Danish
-and deemed permissible for training large language models.
-
-Danish Dynaword is continually developed, which means that the dataset will actively be updated as new datasets become available. If you would like to contribute a dataset see the [contribute section](#contributing-to-the-dataset).
+The Danish Gigaword Corpus contains text spanning several domains and forms.
 
 ### Loading the dataset
 
 ```py
 from datasets import load_dataset
 
-name = "danish-foundation-models/danish-
+name = "danish-foundation-models/danish-gigaword"
 ds = load_dataset(name, split = "train")
 sample = ds[1] # see "Data Instances" below
-```
 
-or load
-```py
+# or load by streaming the data
 ds = load_dataset(name, split = "train", streaming=True)
-
-sample = next(iter(dataset_iter))
-```
-
-You can also load a single subset at a time:
-```py
-ds = load_dataset(name, "adl", split = "train")
-```
-
-
-As Danish Dynaword is continually expanding and curated you can make sure that you get the same dataset every time by specifying the revision:
-You can also load a single subset at a time:
-```py
-ds = load_dataset(name, revision="{desired revision}")
+sample = next(iter(ds))
 ```
 
-### Languages
-This dataset includes the following languages:
-
-- Danish (dan-Latn) as we as the dialects Bornholmsk (dan-Latn-bornholm) and Synderjysk (dan-Latn-synnejyl)
-
-In addition it likely contains small amounts of English due to code-switching and Norwegian due to the historical relation between the two languages and language misclassificaitons due to their similarity.
-
-Language is denoted using [BCP-47](https://en.wikipedia.org/wiki/IETF_language_tag), using the langauge code ISO 639-3 and the script code ISO 15924. The third element denote the region variant.
-
-
-### Domains
-
-This dynaword consist of data from various domains (e.g., legal, books, social media). The following table and figure give an overview of the relative distributions of these domains. To see a full overview of the source check out the [source data section](#source-data)
-
-<div style="display: flex; gap: 20px; align-items: flex-start;">
-
-<div style="flex: 1;">
-
-
-<!-- START-DOMAIN TABLE -->
-| Domain | Sources | N. Tokens |
-|:-------------|:----------|:------------|
-| Legal | [cellar], [eur-lex-sum-da], [fm-udgivelser], [retsinformationdk], [skat], [retspraksis], [domsdatabasen] | 2.32B |
-| News | [enevaeldens_nyheder], [ncc_newspaper], [tv2r], [nordjyllandnews] | 1.09B |
-| Books | [grundtvig], [ncc_books], [memo], [adl], [wikibooks], [jvj], [gutenberg], [relig] | 733.92M |
-| Conversation | [danske-taler], [opensubtitles], [ep], [ft], [spont], [naat] | 497.09M |
-| Social Media | [hest] | 389.32M |
-| Other | [ncc_parliament], [dannet], [depbank], [synne] | 340.59M |
-| Web | [ai-aktindsigt], [ncc_maalfrid], [miljoeportalen] | 295.87M |
-| Encyclopedic | [wikisource], [wikipedia] | 179.61M |
-| Medical | [health_hovedstaden] | 27.07M |
-| Readaloud | [nota] | 7.30M |
-| Dialect | [botxt] | 847.97K |
-| **Total** | | 5.88B |
-
-[ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
-[cellar]: data/cellar/cellar.md
-[enevaeldens_nyheder]: data/enevaeldens_nyheder/enevaeldens_nyheder.md
-[grundtvig]: data/grundtvig/grundtvig.md
-[danske-taler]: data/danske-taler/danske-taler.md
-[ncc_books]: data/ncc_books/ncc_books.md
-[ncc_newspaper]: data/ncc_newspaper/ncc_newspaper.md
-[ncc_maalfrid]: data/ncc_maalfrid/ncc_maalfrid.md
-[ncc_parliament]: data/ncc_parliament/ncc_parliament.md
-[eur-lex-sum-da]: data/eur-lex-sum-da/eur-lex-sum-da.md
-[miljoeportalen]: data/miljoeportalen/miljoeportalen.md
-[fm-udgivelser]: data/fm-udgivelser/fm-udgivelser.md
-[memo]: data/memo/memo.md
-[opensubtitles]: data/opensubtitles/opensubtitles.md
-[retsinformationdk]: data/retsinformationdk/retsinformationdk.md
-[ep]: data/ep/ep.md
-[ft]: data/ft/ft.md
-[wikisource]: data/wikisource/wikisource.md
-[spont]: data/spont/spont.md
-[tv2r]: data/tv2r/tv2r.md
-[adl]: data/adl/adl.md
-[hest]: data/hest/hest.md
-[skat]: data/skat/skat.md
-[dannet]: data/dannet/dannet.md
-[retspraksis]: data/retspraksis/retspraksis.md
-[wikibooks]: data/wikibooks/wikibooks.md
-[jvj]: data/jvj/jvj.md
-[gutenberg]: data/gutenberg/gutenberg.md
-[botxt]: data/botxt/botxt.md
-[depbank]: data/depbank/depbank.md
-[naat]: data/naat/naat.md
-[synne]: data/synne/synne.md
-[wikipedia]: data/wikipedia/wikipedia.md
-[nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
-[relig]: data/relig/relig.md
-[nota]: data/nota/nota.md
-[health_hovedstaden]: data/health_hovedstaden/health_hovedstaden.md
-[domsdatabasen]: data/domsdatabasen/domsdatabasen.md
-<!-- END-DOMAIN TABLE -->
-
-</div>
-
-<div style="flex: 1;">
-
-<p align="center">
-<img src="./images/domain_distribution.png" width="400" style="margin-right: 10px;" />
-</p>
-
-</div>
-
-</div>
-
-
-### Licensing
-
-The following gives an overview of the licensing in the Dynaword. To get the exact license of the individual datasets check out the [overview table](#source-data).
-These license is applied to the constituent data, i.e., the text. The collection of datasets (metadata, quality control, etc.) is licensed under [CC-0](https://creativecommons.org/publicdomain/zero/1.0/legalcode.en).
-
-<!-- START-LICENSE TABLE -->
-| License | Sources | N. Tokens |
-|:--------------------------------|:----------|:------------|
-| CC-BY-SA 4.0 | [cellar], [enevaeldens_nyheder], [eur-lex-sum-da], [fm-udgivelser], [memo], [tv2r], [jvj], [depbank] | 2.41B |
-| CC-0 | [grundtvig], [danske-taler], [ncc_books], [ncc_newspaper], [miljoeportalen], [opensubtitles], [ep], [ft], [wikisource], [spont], [adl], [hest], [skat], [retspraksis], [wikibooks], [botxt], [naat], [synne], [wikipedia], [nordjyllandnews], [relig], [nota], [health_hovedstaden] | 2.06B |
-| Other (No attribution required) | [retsinformationdk], [domsdatabasen] | 904.61M |
-| Other (Attribution required) | [ai-aktindsigt], [ncc_maalfrid], [ncc_parliament], [dannet], [gutenberg] | 515.61M |
-| **Total** | | 5.88B |
-
-[ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
-[cellar]: data/cellar/cellar.md
-[enevaeldens_nyheder]: data/enevaeldens_nyheder/enevaeldens_nyheder.md
-[grundtvig]: data/grundtvig/grundtvig.md
-[danske-taler]: data/danske-taler/danske-taler.md
-[ncc_books]: data/ncc_books/ncc_books.md
-[ncc_newspaper]: data/ncc_newspaper/ncc_newspaper.md
-[ncc_maalfrid]: data/ncc_maalfrid/ncc_maalfrid.md
-[ncc_parliament]: data/ncc_parliament/ncc_parliament.md
-[eur-lex-sum-da]: data/eur-lex-sum-da/eur-lex-sum-da.md
-[miljoeportalen]: data/miljoeportalen/miljoeportalen.md
-[fm-udgivelser]: data/fm-udgivelser/fm-udgivelser.md
-[memo]: data/memo/memo.md
-[opensubtitles]: data/opensubtitles/opensubtitles.md
-[retsinformationdk]: data/retsinformationdk/retsinformationdk.md
-[ep]: data/ep/ep.md
-[ft]: data/ft/ft.md
-[wikisource]: data/wikisource/wikisource.md
-[spont]: data/spont/spont.md
-[tv2r]: data/tv2r/tv2r.md
-[adl]: data/adl/adl.md
-[hest]: data/hest/hest.md
-[skat]: data/skat/skat.md
-[dannet]: data/dannet/dannet.md
-[retspraksis]: data/retspraksis/retspraksis.md
-[wikibooks]: data/wikibooks/wikibooks.md
-[jvj]: data/jvj/jvj.md
-[gutenberg]: data/gutenberg/gutenberg.md
-[botxt]: data/botxt/botxt.md
-[depbank]: data/depbank/depbank.md
-[naat]: data/naat/naat.md
-[synne]: data/synne/synne.md
-[wikipedia]: data/wikipedia/wikipedia.md
-[nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
-[relig]: data/relig/relig.md
-[nota]: data/nota/nota.md
-[health_hovedstaden]: data/health_hovedstaden/health_hovedstaden.md
-[domsdatabasen]: data/domsdatabasen/domsdatabasen.md
-<!-- END-LICENSE TABLE -->
-
-
-
 ## Dataset Structure
 
-The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data).
+The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data). See the [homepage](https://gigaword.dk) or [paper](https://aclanthology.org/2021.nodalida-main.46.pdf) for more information.
 
 ### Data Instances
 
 Each entry in the dataset consists of a single text with associated metadata
 
-<!-- START-SAMPLE -->
 ```py
 {
+    'text': 'Vimoutiers er en kommune i departementet Orne i Basse-Normandie regionen i det nordvestlige Frankrig.\nCykelløbet Paris-Camembert slutter i Vimoutiers.\nHistorie.\nDen 14. juni 1944, under invasionen i Normandiet blev Vimoutiers bombarderet af allierede styrker. Landsbyen blev ødelagt og 220 civile dræbt.\nPersonligheder.\nPolitikeren Joseph Laniel (1889-1975) var født i Vomoutiers.',
+    'source': 'wiki',
+    'id': 'wiki_366127',
+    'added': '2021-03-28',
+    'created': '2019-01-01, 2021-01-01',
+    'metadata': {
+        'domain': 'Wiki & Books',
+        'license': 'Creative Commons Legal Code\n\nCC0 1.0 Universal',
+        'source-pretty': 'Wikipedia'
+    }
 }
 ```
 
@@ -443,13 +175,15 @@
 
 An entry in the dataset consists of the following fields:
 
-- `id` (`str`): An unique identifier for each document.
 - `text`(`str`): The content of the document.
 - `source` (`str`): The source of the document (see [Source Data](#source-data)).
+- `id` (`str`): An unique identifer for each document.
 - `added` (`str`): An date for when the document was added to this collection.
 - `created` (`str`): An date range for when the document was originally created.
-- `
-
+- `metadata/license` (`str`): The license of the document. The licenses vary according to the source.
+- `metadata/domain` (`str`): The domain of the source
+- `metadata/source-pretty` (`str`): The longform version of the short-form source name
 
 ### Data Splits
 
@@ -457,193 +191,128 @@
 
 ## Dataset Creation
 
-### Curation Rationale
-
-These datasets were collected and curated with the intention of making openly license Danish data available. While this was collected with the intention of developing language models it is likely to have multiple other uses such as examining language development and differences across domains.
-
-
-
-### Annotations
-
-This data generally contains no annotation besides the metadata attached to each sample such as what domain it belongs to.
-
 ### Source Data
 
-[enevaeldens_nyheder]: data/enevaeldens_nyheder/enevaeldens_nyheder.md
-[grundtvig]: data/grundtvig/grundtvig.md
-[danske-taler]: data/danske-taler/danske-taler.md
-[ncc_books]: data/ncc_books/ncc_books.md
-[ncc_newspaper]: data/ncc_newspaper/ncc_newspaper.md
-[ncc_maalfrid]: data/ncc_maalfrid/ncc_maalfrid.md
-[ncc_parliament]: data/ncc_parliament/ncc_parliament.md
-[eur-lex-sum-da]: data/eur-lex-sum-da/eur-lex-sum-da.md
-[miljoeportalen]: data/miljoeportalen/miljoeportalen.md
-[fm-udgivelser]: data/fm-udgivelser/fm-udgivelser.md
-[memo]: data/memo/memo.md
-[opensubtitles]: data/opensubtitles/opensubtitles.md
-[retsinformationdk]: data/retsinformationdk/retsinformationdk.md
-[ep]: data/ep/ep.md
-[ft]: data/ft/ft.md
-[wikisource]: data/wikisource/wikisource.md
-[spont]: data/spont/spont.md
-[tv2r]: data/tv2r/tv2r.md
-[adl]: data/adl/adl.md
-[hest]: data/hest/hest.md
-[skat]: data/skat/skat.md
-[dannet]: data/dannet/dannet.md
-[retspraksis]: data/retspraksis/retspraksis.md
-[wikibooks]: data/wikibooks/wikibooks.md
-[jvj]: data/jvj/jvj.md
-[gutenberg]: data/gutenberg/gutenberg.md
-[botxt]: data/botxt/botxt.md
-[depbank]: data/depbank/depbank.md
-[naat]: data/naat/naat.md
-[synne]: data/synne/synne.md
-[wikipedia]: data/wikipedia/wikipedia.md
-[nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
-[relig]: data/relig/relig.md
-[nota]: data/nota/nota.md
-[health_hovedstaden]: data/health_hovedstaden/health_hovedstaden.md
-[domsdatabasen]: data/domsdatabasen/domsdatabasen.md
-
-
-[CC-0]: https://creativecommons.org/publicdomain/zero/1.0/legalcode.en
-[CC-BY-SA 4.0]: https://creativecommons.org/licenses/by-sa/4.0/deed.en
-[Apache 2.0]: https://www.apache.org/licenses/LICENSE-2.0
-[NLOD 2.0]: ./data/ncc_maalfrid/ncc_maalfrid.md#license-information
-[NLOD 2.0]: ./data/ncc_parliament/ncc_parliament.md#license-information
-[Danish Copyright Law]: ./data/retsinformationdk/retsinformationdk.md#license-information
-[DanNet 1.0]: ./data/dannet/dannet.md#license-information
-[Gutenberg]: ./data/gutenberg/gutenberg.md#license-information
-[Danish Copyright Law]: ./data/domsdatabasen/domsdatabasen.md#license-information
-<!-- END-MAIN TABLE -->
-
-</details>
-
-
-### Data Collection and Processing
-
-Danish Dynaword is continually developed, which means that the dataset will actively be updated as new datasets become available. This means that the size of Dynaword increases over time as seen in the following plot:
-
-<p align="center">
-<img src="./images/tokens_over_time.svg" width="600" style="margin-right: 10px;" />
-</p>
-
-The data collection and processing varies depending on the dataset and is documentationed the individual datasheets, which is linked in the above table. If possible the collection is documented both in the datasheet and in the reproducible script (`data/{dataset}/create.py`).
-
-In addition to data specific processing we also run a series automated quality checks to ensure formatting (e.g. ensuring correctly formatted columns and unique IDs), quality checks (e.g. duplicate and empty string detection) and datasheet documentation checks. These checks are there to ensure a high quality of documentation and a minimal level of quality. To allow for the development of novel cleaning methodologies we do not provide more extensive cleaning.
-
-### Dataset Statistics
-The following plot(s) are intended to give an overview of docuements length in the various sources.
-
-<p align="center">
-<img src="./images/dataset_size_plot.svg" width="600" style="margin-right: 10px;" />
-</p>
-
-
-
-### Contributing to the dataset
-
-We welcome contributions to the dataset, including new sources, improved data filtering, and other enhancements. To get started on contributing, please see [the contribution guidelines](CONTRIBUTING.md)
-
-## Citation Information
-
-If you use this work, please cite the [scientific article](https://arxiv.org/abs/2508.02271), we recommend citing the following:
-
-> Enevoldsen, K.C., Jensen, K.N., Kostkan, J., Szabó, B.I., Kardos, M., Vad, K., Heinsen, J., Núñez, A.B., Barmina, G., Nielsen, J., Larsen, R., Vahlstrup, P.B., Dalum, P.M., Elliott, D., Galke, L., Schneider-Kamp, P., & Nielbo, K.L. (2025). Dynaword: From One-shot to Continuously Developed Datasets.
-
-
-```
-@article{enevoldsen2025dynaword,
-title={Dynaword: From One-shot to Continuously Developed Datasets},
-author={Enevoldsen, Kenneth and Jensen, Kristian N{\o}rgaard and Kostkan, Jan and Szab{\'o}, Bal{\'a}zs and Kardos, M{\'a}rton and Vad, Kirten and N{\'u}{\~n}ez, Andrea Blasi and Barmina, Gianluca and Nielsen, Jacob and Larsen, Rasmus and others},
-journal={arXiv preprint arXiv:2508.02271},
-year={2025}
+Below follows a brief overview of the sources in the corpus along with their individual license.
+
+| Source | License |
+| ----------------- | ------- |
+| adl | Creative Commons Legal Code 1.0 Universal |
+| botxt | Creative Commons Legal Code 1.0 Universal |
+| dannet | [dannet license](https://cst.ku.dk/projekter/dannet/license.txt) |
+| depbank | Attribution-ShareAlike 4.0 International |
+| ep | Creative Commons Legal Code 1.0 Universal |
+| ft | Creative Commons Legal Code 1.0 Universal |
+| gutenberg | [gutenberg license](https://www.gutenberg.org/policy/license.html) |
+| hest | Creative Commons Legal Code 1.0 Universal |
+| jvj | Attribution-ShareAlike 4.0 International |
+| naat | Creative Commons Legal Code 1.0 Universal |
+| relig | Creative Commons Legal Code 1.0 Universal |
+| retsinformationdk | Danish Copyright law at https://www.retsinformation.dk/forms/r0710.aspx?id=164796 states "§ 9. Love, administrative forskrifter, retsafgørelser og lignende offentlige aktstykker er ikke genstand for ophavsret. Stk. 2. Bestemmelsen i stk. 1 gælder ikke for værker, der fremtræder som selvstændige bidrag i de i stk. 1 nævnte aktstykker. Sådanne værker må dog gengives i forbindelse med aktstykket. Retten til videre udnyttelse afhænger af de i øvrigt gældende regler." |
+| retspraksis | Creative Commons Legal Code 1.0 Universal |
+| skat | Creative Commons Legal Code 1.0 Universal |
+| spont | Creative Commons Legal Code 1.0 Universal |
+| synne | Creative Commons Legal Code 1.0 Universal |
+| tv2r | The owner of this content is TV2 Regionerne, Denmark. Creative Commons Attribution 4.0 International |
+| wiki | Creative Commons Legal Code 1.0 Universal |
+| wikibooks | Creative Commons Legal Code 1.0 Universal |
+| wikisource | Creative Commons Legal Code 1.0 Universal |
+
+These sources corresponds to the following top-level domains in the dataset:
+```python
+# mapping from domain to top-level domain
+domain_mapping_dict = {
+    "retsinformationdk": "Legal",
+    "skat": "Legal",
+    "retspraksis": "Legal",
+    "hest": "Social Media",
+    "cc": "Web",
+    "adl": "Wiki & Books",
+    "botxt": "Other",
+    "danavis": "News",
+    "dannet": "dannet",
+    "depbank": "Other",
+    "ep": "Conversation",
+    "ft": "Conversation",
+    "gutenberg": "Wiki & Books",
+    "jvj": "Wiki & Books",
+    "naat": "Conversation",
+    "opensub": "Conversation",
+    "relig": "Wiki & Books",
+    "spont": "Conversation",
+    "synne": "Other",
+    "tv2r": "News",
+    "wiki": "Wiki & Books",
+    "wikibooks": "Wiki & Books",
+    "wikisource": "Wiki & Books",
+    "twfv19": "Social Media",  # not present in this version of the dataset
 }
 ```
 
-As far as we are aware the dataset does not contain information identifying sexual orientation, political beliefs, religion, or health connected with utterer ID. In case that such information is present in the data we have been removed utterer information from social media content.
-
-###
-
-As such, it includes perspectives, assumptions, and biases characteristic of the period. For instance, the works of N.F.S. Grundtvig (`grundtvig`) were known to nationalistic views and critical stances toward specific groups, such as Germans, which may be considered offensive or exclusionary by contemporary standards.
-
----
+And the following mapping translates between the short form and the long form of the source name
+```python
+# mapping from domain to its long name format
+longname_mapping_dict = {
+    "retsinformationdk": "retsinformation.dk (Danish legal information)",
+    "skat": "Skat (Danish tax authority)",
+    "retspraksis": "retspraksis (Danish legal information)",
+    "hest": "Hestenettet (Danish debate forum)",
+    "cc": "Common Crawl",
+    "adl": " Archive for Danish Literature",
+    "botxt": "Bornholmsk (Danish dialect)",
+    "danavis": "Danish daily newspapers",
+    "dannet": "DanNet (Danish WordNet)",
+    "depbank": "Danish Dependency Treebank",
+    "ep": "European Parliament",
+    "ft": "Folketinget (Danish Parliament)",
+    "gutenberg": "Gutenberg",
+    "jvj": "Johannes V. Jensen (Danish author/poet)",
+    "naat": "NAAT",
+    "opensub": "Open Subtitles",
+    "relig": "Religious texts",
+    "spont": "Spontaneous speech",
+    "synne": "Synderjysk (Danish dialect)",
+    "tv2r": "TV 2 Radio (Danish news)",
+    "wiki": "Wikipedia",
+    "wikibooks": "Wikibooks",
+    "wikisource": "Wikisource",
+    "twfv19": "Twitter Folketingsvalget 2019 (Danish election tweets)",  # not present in this version of the dataset
+}
+```
+
+## Additional Information
+
+### Citation Information
+
+The original version of Danish Gigawords was created as a part of the following publication.
+
+> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
+
+```
+@inproceedings{dagw,
+    title = {{The Danish Gigaword Corpus}},
+    author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
+    year = 2021,
+    booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
+    publisher = {NEALT}
+}
+```
+
+<!--
+Todo:
+
+add tests
+- unique ids
+- valid metadata
+
+add ci:
+- summary statistics
+- tables
+
+prettify:
+- license as independent column
+- ensure pretty_name is standard
+- potentially remove some columns
+-->
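The new README documents a `source` field on every record together with two dictionaries that map the short source names to top-level domains and long-form display names. As a rough sketch of how those pieces fit together (the abbreviated mappings below are copied from the README; the rest is illustrative and not code from this PR):

```python
from datasets import load_dataset

# Abbreviated versions of the mappings documented in the README.
domain_mapping_dict = {"wiki": "Wiki & Books", "hest": "Social Media"}
longname_mapping_dict = {"wiki": "Wikipedia", "hest": "Hestenettet (Danish debate forum)"}

# Stream a single record and derive the same information that the
# `metadata` column carries, keyed on the record's short `source` name.
ds = load_dataset("danish-foundation-models/danish-gigaword", split="train", streaming=True)
sample = next(iter(ds))
source = sample["source"]
print(source, domain_mapping_dict.get(source), longname_mapping_dict.get(source))
```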
data/adl/adl.md
CHANGED
@@ -1,99 +1,57 @@
|
|
1 |
---
|
2 |
-
pretty_name:
|
3 |
language:
|
4 |
-
- da
|
5 |
license: cc0-1.0
|
6 |
-
license_name:
|
7 |
size_categories:
|
8 |
-
- 1-10k
|
9 |
task_categories:
|
10 |
-
- text-generation
|
11 |
-
- fill-mask
|
12 |
task_ids:
|
13 |
-
- language-modeling
|
14 |
-
source_datasets:
|
15 |
-
- danish-foundation-models/danish-gigaword
|
16 |
-
domains:
|
17 |
-
- Books
|
18 |
---
|
19 |
-
|
20 |
-
# Dataset Card for Archive for Danish Literature
|
21 |
-
|
22 |
## Dataset Description
|
23 |
-
|
24 |
-
|
25 |
-
|
26 |
-
<!-- END-SHORT DESCRIPTION -->
|
27 |
-
|
28 |
-
Archive for Danish Literature (ADL) is a literary-historical collection of selected parts of older Danish literature, from the Middle Ages up to the mid-20th century.
|
29 |
-
It provides access to both the texts and introductory material on most of the authors. ADL is a resource for research, teaching, and broad dissemination of older Danish
|
30 |
-
literature. Currently, ADL contains works by 78 authors. The texts are reproduced from standard printed editions. All texts are searchable, and many can also be viewed as facsimiles (photographs of the original edition)
|
31 |
-
on the Danish Royal Library's [website](https://tekster.kb.dk/text?editorial=no&f%5Bsubcollection_ssi%5D%5B%5D=adl&match=one&search_field=Alt).
|
32 |
-
|
33 |
-
See also dataset [entry](https://sprogteknologi.dk/dataset/public-adl-text-sources) on sprogteknologi.dk and an [API](https://rawgit.com/Det-Kongelige-Bibliotek/access-digital-objects/master/form-demos/adl-form.html).
|
34 |
-
|
35 |
-
<!-- START-DESC-STATS -->
|
36 |
-
- **Number of samples**: 498
|
37 |
-
- **Number of tokens (Llama 3)**: 58.49M
|
38 |
-
- **Average document length in tokens (min, max)**: 117.46K (53, 662.14K)
|
39 |
-
<!-- END-DESC-STATS -->
|
40 |
-
|
41 |
-
|
42 |
-
|
43 |
-
## Dataset Structure
|
44 |
An example from the dataset looks as follows.
|
45 |
-
|
46 |
-
|
47 |
-
<!-- START-SAMPLE -->
|
48 |
-
```py
|
49 |
{
|
50 |
-
|
51 |
-
|
52 |
-
|
53 |
-
|
54 |
-
|
55 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
56 |
}
|
57 |
```
|
58 |
|
59 |
-
|
60 |
-
|
61 |
-
An entry in the dataset consists of the following fields:
|
62 |
|
63 |
-
-
|
64 |
-
-
|
65 |
-
-
|
66 |
-
-
|
67 |
-
-
|
68 |
-
-
|
69 |
-
<!-- END-SAMPLE -->
|
70 |
|
|
|
|
|
|
|
|
|
|
|
71 |
|
72 |
-
|
73 |
-
### Dataset Statistics
|
74 |
-
|
75 |
-
<!-- START-DATASET PLOTS -->
|
76 |
-
<p align="center">
|
77 |
-
<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
|
78 |
</p>
|
79 |
-
|
80 |
-
|
81 |
-
|
82 |
-
## Additional Information
|
83 |
-
|
84 |
-
|
85 |
-
### Citation Information
|
86 |
-
|
87 |
-
This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
|
88 |
-
|
89 |
-
> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
|
90 |
-
|
91 |
-
```bash
|
92 |
-
@inproceedings{dagw,
|
93 |
-
title = {{The Danish Gigaword Corpus}},
|
94 |
-
author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
|
95 |
-
year = 2021,
|
96 |
-
booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
|
97 |
-
publisher = {NEALT}
|
98 |
-
}
|
99 |
-
```
|
|
|
1 |
---
|
2 |
+
pretty_name: Archive for Danish Literature
|
3 |
language:
|
4 |
+
- da
|
5 |
license: cc0-1.0
|
6 |
+
license_name: Creative Commons Zero v1.0 Universal
|
7 |
size_categories:
|
8 |
+
- 1-10k
|
9 |
task_categories:
|
10 |
+
- text-generation
|
11 |
+
- fill-mask
|
12 |
task_ids:
|
13 |
+
- language-modeling
|
|
|
|
14 |
---
|
15 |
+
# Dataset Card for Archive for Danish Literature
|
|
|
|
|
16 |
## Dataset Description
|
17 |
+
- **Number of records:** 498
|
18 |
+
- **Languages:** Danish
|
19 |
+
## Dataset Structure
|
|
|
20 |
An example from the dataset looks as follows.
|
21 |
+
```yaml
|
|
|
|
|
|
|
22 |
{
|
23 |
+
'text': 'SAMLEDE VÆRKER
|
24 |
+
|
25 |
+
JEPPE AAKJÆR GYLDENDALSKE BOGHANDE',
|
26 |
+
'source': 'adl',
|
27 |
+
'id': 'adl_aakjaer06val',
|
28 |
+
'added': '2020-09-14',
|
29 |
+
'created': '1700-01-01, 2022-01-01',
|
30 |
+
'metadata': {
|
31 |
+
'domain': 'Wiki & Books',
|
32 |
+
'license': 'Creative Commons Legal Code
|
33 |
+
|
34 |
+
CC0 1.0 Universal',
|
35 |
+
'source-pretty': ' Archive for Danish Literature'
|
36 |
+
}
|
37 |
}
|
38 |
```
|
39 |
|
40 |
+
## Data Fields
|
|
|
|
|
41 |
|
42 |
+
- **id**: source-specific identifier.
|
43 |
+
- **text**: textual content of the document.
|
44 |
+
- **source**: source of the data.
|
45 |
+
- **added**: timestamp for when ai2 acquired this data.
|
46 |
+
- **created**": timestamp when original document was created (best-guess if not available)
|
47 |
+
- **metadata**: source-specific metadata.
|
|
|
48 |
|
49 |
+
## License Information
|
50 |
+
<details>
|
51 |
+
<summary>Creative Commons Zero v1.0 Universal</summary>
|
52 |
+
<p>
|
53 |
+
Creative Commons Legal Code
|
54 |
|
55 |
+
CC0 1.0 Universal
|
|
|
56 |
</p>
|
57 |
+
</details>
|
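As a quick sanity check that the restructured data/adl/adl.parquet still matches the record layout documented in the card above, a minimal sketch could look like the following (it assumes a local checkout of the repository and the pandas library; the file path and column names are taken from the card, not verified against this diff):
```python
# Minimal sketch: inspect the ADL parquet file against the fields listed in the card above.
# Assumes a local checkout of the repository; pandas needs pyarrow (or fastparquet) installed.
import pandas as pd

df = pd.read_parquet("data/adl/adl.parquet")

print(df.columns.tolist())   # the card lists: id, text, source, added, created, metadata
print(len(df))               # the card reports 498 records
print(df.iloc[0]["id"])
print(df.iloc[0]["text"][:80])
```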
data/adl/adl.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5af9444529d92c37f35161829c652f8b928f9f1dfb5836065f320d1e1d698818
+size 106401744
data/adl/descriptive_stats.json
DELETED
@@ -1,9 +0,0 @@
-{
-  "number_of_samples": 498,
-  "number_of_tokens": 58493311,
-  "min_length_tokens": 53,
-  "max_length_tokens": 662143,
-  "number_of_characters": 161816257,
-  "min_length_characters": 136,
-  "max_length_characters": 1879004
-}
data/adl/images/dist_document_length.png
DELETED
Git LFS Details
|
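The deleted descriptive_stats.json above held per-dataset aggregates (sample count, token counts, character lengths). If those figures ever need to be regenerated, a rough sketch of the computation is shown below; it assumes the Llama 3 tokenizer referenced in the removed card text and a local copy of the parquet file, and the exact tokenizer id and whether special tokens were counted are assumptions, not taken from this diff:
```python
# Rough sketch: recompute the aggregates stored in the removed descriptive_stats.json.
# Assumes access to a Llama 3 tokenizer on the Hugging Face Hub (gated; requires authentication).
import json
import pandas as pd
from transformers import AutoTokenizer

df = pd.read_parquet("data/adl/adl.parquet")
tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

token_counts = [len(tok.encode(t)) for t in df["text"]]
char_counts = [len(t) for t in df["text"]]

stats = {
    "number_of_samples": len(df),
    "number_of_tokens": sum(token_counts),
    "min_length_tokens": min(token_counts),
    "max_length_tokens": max(token_counts),
    "number_of_characters": sum(char_counts),
    "min_length_characters": min(char_counts),
    "max_length_characters": max(char_counts),
}
print(json.dumps(stats, indent=2))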
data/ai-aktindsigt/ai-aktindsigt.md
DELETED
@@ -1,85 +0,0 @@
|
|
1 |
-
---
|
2 |
-
pretty_name: AI Aktindsigt
|
3 |
-
language:
|
4 |
-
- da
|
5 |
-
license: apache-2.0
|
6 |
-
license_name: Apache 2.0
|
7 |
-
task_categories:
|
8 |
-
- text-generation
|
9 |
-
- fill-mask
|
10 |
-
task_ids:
|
11 |
-
- language-modeling
|
12 |
-
domains:
|
13 |
-
- Web
|
14 |
-
source_datasets:
|
15 |
-
- AI-aktindsigt/Skrabet_kommunale_hjemmesider
|
16 |
-
---
|
17 |
-
|
18 |
-
# Dataset Card for AI Aktindsigt
|
19 |
-
|
20 |
-
<!-- START-SHORT DESCRIPTION -->
|
21 |
-
Multiple web scrapes from municipality websites collected as a part of the [AI-aktindsigt](https://ai-aktindsigt.dk) project.
|
22 |
-
<!-- END-SHORT DESCRIPTION -->
|
23 |
-
|
24 |
-
The dataset consists of multiple scrapes of municipal websites compiled in connection with the work on the [AI-aktindsigt](https://ai-aktindsigt.dk) project. The scrape is made across different domains from several different municipalities.
|
25 |
-
|
26 |
-
## Dataset Description
|
27 |
-
|
28 |
-
|
29 |
-
<!-- START-DESC-STATS -->
|
30 |
-
- **Number of samples**: 200.91K
|
31 |
-
- **Number of tokens (Llama 3)**: 139.23M
|
32 |
-
- **Average document length in tokens (min, max)**: 693.0064405666105 (9, 152.60K)
|
33 |
-
<!-- END-DESC-STATS -->
|
34 |
-
|
35 |
-
|
36 |
-
## Dataset Structure
|
37 |
-
An example from the dataset looks as follows.
|
38 |
-
|
39 |
-
|
40 |
-
<!-- START-SAMPLE -->
|
41 |
-
```py
|
42 |
-
{
|
43 |
-
"id": "ai-aktindsigt_0",
|
44 |
-
"text": "Vallensbæk Stationstorv 100 2665 Vallensbæk Strand Telefon: +45 4797 4000",
|
45 |
-
"source": "ai-aktindsigt",
|
46 |
-
"added": "2025-03-24",
|
47 |
-
"created": "2010-01-01, 2024-03-18",
|
48 |
-
"token_count": 29
|
49 |
-
}
|
50 |
-
```
|
51 |
-
|
52 |
-
### Data Fields
|
53 |
-
|
54 |
-
An entry in the dataset consists of the following fields:
|
55 |
-
|
56 |
-
- `id` (`str`): A unique identifier for each document.
|
57 |
-
- `text` (`str`): The content of the document.
|
58 |
-
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
|
59 |
-
- `added` (`str`): A date for when the document was added to this collection.
|
60 |
-
- `created` (`str`): A date range for when the document was originally created.
|
61 |
-
- `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
|
62 |
-
<!-- END-SAMPLE -->
|
63 |
-
|
64 |
-
|
65 |
-
### Dataset Statistics
|
66 |
-
|
67 |
-
<!-- START-DATASET PLOTS -->
|
68 |
-
<p align="center">
|
69 |
-
<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
|
70 |
-
</p>
|
71 |
-
<!-- END-DATASET PLOTS -->
|
72 |
-
|
73 |
-
|
74 |
-
|
75 |
-
## Additional Information
|
76 |
-
|
77 |
-
|
78 |
-
|
79 |
-
### Source Data
|
80 |
-
This dataset is derived from [`AI-aktindsigt/Skrabet_kommunale_hjemmesider`](https://huggingface.co/datasets/AI-aktindsigt/Skrabet_kommunale_hjemmesider/tree/main
|
81 |
-
)
|
82 |
-
|
83 |
-
### Citation Information
|
84 |
-
|
85 |
-
No citation is applicable for this work. We recommend citing the huggingface repository.
|
|
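The card above points to AI-aktindsigt/Skrabet_kommunale_hjemmesider as the upstream source. A minimal sketch of pulling that source from the Hub for comparison is given below; the repository id comes from the card, while the split name and the presence of a 'text' column are assumptions:
```python
# Minimal sketch: load the upstream source referenced by the ai-aktindsigt card.
# The dataset id comes from the card; split and column names are assumptions.
from datasets import load_dataset

ds = load_dataset("AI-aktindsigt/Skrabet_kommunale_hjemmesider", split="train")
print(ds)              # column names and number of rows
print(ds[0]["text"])   # the card's example record shows a 'text' field
```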
data/ai-aktindsigt/create.py
DELETED
@@ -1,64 +0,0 @@
|
|
1 |
-
# /// script
|
2 |
-
# requires-python = ">=3.12"
|
3 |
-
# dependencies = [
|
4 |
-
# "datasets>=3.2.0",
|
5 |
-
# ]
|
6 |
-
# ///
|
7 |
-
"""
|
8 |
-
This script is used to create the data for the AI-aktindsigt project.
|
9 |
-
|
10 |
-
This derives the data from a .json.gz file.
|
11 |
-
"""
|
12 |
-
|
13 |
-
from pathlib import Path
|
14 |
-
from typing import cast
|
15 |
-
|
16 |
-
from datasets import Dataset, load_dataset
|
17 |
-
|
18 |
-
source = "ai-aktindsigt"
|
19 |
-
|
20 |
-
|
21 |
-
def convert_sample(example):
|
22 |
-
# {'text': 'Vallensbæk Stationstorv 100 2665 Vallensbæk Strand Telefon: +45 4797 4000',
|
23 |
-
# 'id': '0_03fe7662f6d37df0ffbf5013907414f935350db9931043891a95ed830965a507a7bcb4df93741429bdfa4958cf25f6c273aa73146f2be80948f767eb5fa04645',
|
24 |
-
# 'source': 'AI-aktindsigt',
|
25 |
-
# 'added': '2024-04-16T12:35:52.000Z',
|
26 |
-
# 'metadata': {'url': 'https://vallensbaek.dk/', 'kommune': 'vallensbaek', 'sentence': 1,
|
27 |
-
# 'ppl_score': [634.6341],
|
28 |
-
# 'sha512': '03fe7662f6d37df0ffbf5013907414f935350db9931043891a95ed830965a507a7bcb4df93741429bdfa4958cf25f6c273aa73146f2be80948f767eb5fa04645'}
|
29 |
-
# }
|
30 |
-
|
31 |
-
new_example = dict(
|
32 |
-
text_new=example["text"],
|
33 |
-
source=source,
|
34 |
-
domain="Web",
|
35 |
-
license="Apache-2.0",
|
36 |
-
added="2025-03-24",
|
37 |
-
created="2010-01-01, 2024-03-18", # Start date is approximate guess end date is the date of the last update
|
38 |
-
metadata={"source-pretty": "AI Aktindsigt"},
|
39 |
-
)
|
40 |
-
|
41 |
-
return new_example
|
42 |
-
|
43 |
-
|
44 |
-
def main():
|
45 |
-
data_path = Path(
|
46 |
-
"/work/dfm-data/pre-training/ai_aktindsigt/documents/ai_aktindsigt.jsonl.gz"
|
47 |
-
)
|
48 |
-
ds = load_dataset("json", data_files=data_path.as_posix(), split="train")
|
49 |
-
|
50 |
-
ds = cast(Dataset, ds)
|
51 |
-
|
52 |
-
ds = ds.map(convert_sample, remove_columns=ds.column_names)
|
53 |
-
ds = ds.rename_columns({"text_new": "text"})
|
54 |
-
ds = ds.add_column("id", [f"{source}_{i}" for i in range(len(ds))]) # type: ignore
|
55 |
-
ds = ds.select_columns(
|
56 |
-
["text", "source", "id", "added", "created", "license", "domain", "metadata"]
|
57 |
-
)
|
58 |
-
|
59 |
-
save_path = Path(__file__).parent / f"{source}.parquet"
|
60 |
-
ds.to_parquet(save_path)
|
61 |
-
|
62 |
-
|
63 |
-
if __name__ == "__main__":
|
64 |
-
main()
|
|
|
|
|
|
data/ai-aktindsigt/descriptive_stats.json
DELETED
@@ -1,9 +0,0 @@
-{
-  "number_of_samples": 200914,
-  "number_of_tokens": 139234696,
-  "min_length_tokens": 9,
-  "max_length_tokens": 152599,
-  "number_of_characters": 408005923,
-  "min_length_characters": 29,
-  "max_length_characters": 406832
-}
data/ai-aktindsigt/images/dist_document_length.png
DELETED
Git LFS Details
|
data/botxt/botxt.md
CHANGED
@@ -1,94 +1,57 @@
|
|
1 |
---
|
2 |
-
pretty_name: Bornholmsk
|
3 |
language:
|
4 |
-
- da
|
5 |
license: cc0-1.0
|
6 |
-
license_name:
|
7 |
size_categories:
|
8 |
-
- 1-10k
|
9 |
task_categories:
|
10 |
-
- text-generation
|
11 |
-
- fill-mask
|
12 |
task_ids:
|
13 |
-
- language-modeling
|
14 |
-
domains:
|
15 |
-
- Dialect
|
16 |
-
- Web
|
17 |
-
source_datasets:
|
18 |
-
- danish-foundation-models/danish-gigaword
|
19 |
---
|
20 |
-
|
21 |
-
# Dataset Card for Bornholmsk
|
22 |
-
|
23 |
## Dataset Description
|
24 |
-
|
25 |
-
|
26 |
-
|
27 |
-
<!-- END-SHORT DESCRIPTION -->
|
28 |
-
|
29 |
-
Fictional texts of various kinds written in Bornholmsk, the dialect spoken on the Danish island of Bornholm (The language code for Bornholmsk under IETF BCP-47 is da-bornholm), have been digitized (OCR’ed and proofread) by volunteers working within the recently resumed Bornholmsk Ordbog dictionary project (Kjeldsen, 2019). Most of the material included is written by Otto J. Lund in the period 1930-48 (novels, short stories, and poems). The Bornholmsk subcorpus, which in its present state amounts to circa 400 K words, also includes folk stories published by J. P. Kuhre in 1938, and by K. M. Kofoed in 1935, fictional letters by various authors published in the 1930s, as well as poems by Alfred Jensen published in 1948 and various other texts from the same period. The non-standardized orthography varies considerably from source to source. The Bornholmsk part of the Danish Gigaword is a significantly extended dataset, well beyond that studied in earlier NLP work on the dialect [(Derczynski and Kjeldsen, 2019)](https://aclanthology.org/W19-6138/).
|
30 |
-
|
31 |
-
|
32 |
-
<!-- START-DESC-STATS -->
|
33 |
-
- **Number of samples**: 106
|
34 |
-
- **Number of tokens (Llama 3)**: 847.97K
|
35 |
-
- **Average document length in tokens (min, max)**: 8.00K (407, 83.79K)
|
36 |
-
<!-- END-DESC-STATS -->
|
37 |
-
|
38 |
-
|
39 |
-
|
40 |
-
## Dataset Structure
|
41 |
An example from the dataset looks as follows.
|
42 |
-
|
43 |
-
|
44 |
-
<!-- START-SAMPLE -->
|
45 |
-
```py
|
46 |
{
|
47 |
-
|
48 |
-
|
49 |
-
|
50 |
-
|
51 |
-
|
52 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
53 |
}
|
54 |
```
|
55 |
|
56 |
-
|
57 |
|
58 |
-
|
|
|
|
|
|
|
|
|
|
|
59 |
|
60 |
-
|
61 |
-
|
62 |
-
|
63 |
-
|
64 |
-
|
65 |
-
- `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
|
66 |
-
<!-- END-SAMPLE -->
|
67 |
|
68 |
-
|
69 |
-
|
70 |
-
<!-- START-DATASET PLOTS -->
|
71 |
-
<p align="center">
|
72 |
-
<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
|
73 |
</p>
|
74 |
-
|
75 |
-
|
76 |
-
|
77 |
-
## Additional Information
|
78 |
-
|
79 |
-
|
80 |
-
### Citation Information
|
81 |
-
|
82 |
-
This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
|
83 |
-
|
84 |
-
> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
|
85 |
-
|
86 |
-
```bash
|
87 |
-
@inproceedings{dagw,
|
88 |
-
title = {{The Danish Gigaword Corpus}},
|
89 |
-
author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
|
90 |
-
year = 2021,
|
91 |
-
booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
|
92 |
-
publisher = {NEALT}
|
93 |
-
}
|
94 |
-
```
|
|
|
1 |
---
|
2 |
+
pretty_name: Bornholmsk (Danish dialect)
|
3 |
language:
|
4 |
+
- da
|
5 |
license: cc0-1.0
|
6 |
+
license_name: Creative Commons Zero v1.0 Universal
|
7 |
size_categories:
|
8 |
+
- 1-10k
|
9 |
task_categories:
|
10 |
+
- text-generation
|
11 |
+
- fill-mask
|
12 |
task_ids:
|
13 |
+
- language-modeling
|
|
|
|
|
|
|
|
|
|
|
14 |
---
|
15 |
+
# Dataset Card for Bornholmsk (Danish dialect)
|
|
|
|
|
16 |
## Dataset Description
|
17 |
+
- **Number of records:** 106
|
18 |
+
- **Languages:** Danish
|
19 |
+
## Dataset Structure
|
|
|
|
|
|
20 |
An example from the dataset looks as follows.
|
21 |
+
```yaml
|
|
|
|
|
|
|
22 |
{
|
23 |
+
'text': 'Ræua-Lârs
|
24 |
+
|
25 |
+
Ræua-Lârs å hans Konna, Stina, bode uda',
|
26 |
+
'source': 'botxt',
|
27 |
+
'id': 'botxt_0000040',
|
28 |
+
'added': '2024-05-16',
|
29 |
+
'created': '2000-01-01, 2022-01-01',
|
30 |
+
'metadata': {
|
31 |
+
'domain': 'Other',
|
32 |
+
'license': 'Creative Commons Legal Code
|
33 |
+
|
34 |
+
CC0 1.0 Universal',
|
35 |
+
'source-pretty': 'Bornholmsk (Danish dialect)'
|
36 |
+
}
|
37 |
}
|
38 |
```
|
39 |
|
40 |
+
## Data Fields
|
41 |
|
42 |
+
- **id**: source-specific identifier.
|
43 |
+
- **text**: textual content of the document.
|
44 |
+
- **source**: source of the data.
|
45 |
+
- **added**: timestamp for when ai2 acquired this data.
|
46 |
+
- **created**": timestamp when original document was created (best-guess if not available)
|
47 |
+
- **metadata**: source-specific metadata.
|
48 |
|
49 |
+
## License Information
|
50 |
+
<details>
|
51 |
+
<summary>Creative Commons Zero v1.0 Universal</summary>
|
52 |
+
<p>
|
53 |
+
Creative Commons Legal Code
|
|
|
|
|
54 |
|
55 |
+
CC0 1.0 Universal
|
|
|
|
|
|
|
|
|
56 |
</p>
|
57 |
+
</details>
|
|
|
|
data/botxt/botxt.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:ec89c1dd57f1987dc6fe059a33a1d16b41b8c87439673a381f9671497f65b017
+size 1344033
data/botxt/descriptive_stats.json
DELETED
@@ -1,9 +0,0 @@
-{
-  "number_of_samples": 106,
-  "number_of_tokens": 847973,
-  "min_length_tokens": 407,
-  "max_length_tokens": 83792,
-  "number_of_characters": 2011076,
-  "min_length_characters": 845,
-  "max_length_characters": 202015
-}
data/botxt/images/dist_document_length.png
DELETED
Git LFS Details
|
data/cellar/cellar.md
DELETED
@@ -1,77 +0,0 @@
|
|
1 |
-
---
|
2 |
-
pretty_name: Cellar
|
3 |
-
language:
|
4 |
-
- da
|
5 |
-
license: cc-by-sa-4.0
|
6 |
-
license_name: CC-BY-SA 4.0
|
7 |
-
task_categories:
|
8 |
-
- text-generation
|
9 |
-
- fill-mask
|
10 |
-
task_ids:
|
11 |
-
- language-modeling
|
12 |
-
domains:
|
13 |
-
- Legal
|
14 |
-
---
|
15 |
-
|
16 |
-
# Dataset Card for Cellar
|
17 |
-
|
18 |
-
<!-- START-SHORT DESCRIPTION -->
|
19 |
-
The official digital repository for European Union legal documents and open data.
|
20 |
-
<!-- END-SHORT DESCRIPTION -->
|
21 |
-
|
22 |
-
The EU Dataset Cellar serves as the central access point for all official EU publications, legislation, and open data resources. Maintained by the Publications Office of the European Union, this comprehensive digital archive contains millions of documents in multiple languages, including regulations, directives, decisions, treaties, case law, and preparatory acts dating back decades. The repository employs standardized metadata and unique identifiers to organize its vast collection, making it an essential resource for researchers, legal professionals, policymakers, and citizens seeking authoritative information on EU law and policy. The Cellar's linked data architecture also enables sophisticated search capabilities and integration with other information systems across the European Union's digital landscape.
|
23 |
-
|
24 |
-
|
25 |
-
## Dataset Description
|
26 |
-
|
27 |
-
<!-- START-DESC-STATS -->
|
28 |
-
- **Number of samples**: 63.40K
|
29 |
-
- **Number of tokens (Llama 3)**: 1.15B
|
30 |
-
- **Average document length in tokens (min, max)**: 18.17K (7, 2.60M)
|
31 |
-
<!-- END-DESC-STATS -->
|
32 |
-
|
33 |
-
|
34 |
-
## Dataset Structure
|
35 |
-
An example from the dataset looks as follows.
|
36 |
-
|
37 |
-
|
38 |
-
<!-- START-SAMPLE -->
|
39 |
-
```py
|
40 |
-
{
|
41 |
-
"id": "cellar_0",
|
42 |
-
"text": "\n\n\n\n© Европейски съюз, 2017 г.\n\nВъзпроизвеждането е разрешено при позоваване на оригинала.\n\n© Unión [...]",
|
43 |
-
"source": "cellar",
|
44 |
-
"added": "2025-03-25",
|
45 |
-
"created": "2024-01-01, 2026-01-01",
|
46 |
-
"token_count": 87018
|
47 |
-
}
|
48 |
-
```
|
49 |
-
|
50 |
-
### Data Fields
|
51 |
-
|
52 |
-
An entry in the dataset consists of the following fields:
|
53 |
-
|
54 |
-
- `id` (`str`): A unique identifier for each document.
|
55 |
-
- `text` (`str`): The content of the document.
|
56 |
-
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
|
57 |
-
- `added` (`str`): A date for when the document was added to this collection.
|
58 |
-
- `created` (`str`): A date range for when the document was originally created.
|
59 |
-
- `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
|
60 |
-
<!-- END-SAMPLE -->
|
61 |
-
|
62 |
-
|
63 |
-
### Dataset Statistics
|
64 |
-
|
65 |
-
<!-- START-DATASET PLOTS -->
|
66 |
-
<p align="center">
|
67 |
-
<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
|
68 |
-
</p>
|
69 |
-
<!-- END-DATASET PLOTS -->
|
70 |
-
|
71 |
-
|
72 |
-
|
73 |
-
## Additional Information
|
74 |
-
|
75 |
-
### Citation Information
|
76 |
-
|
77 |
-
No citation is applicable for this work.
|
|
|
|
|
data/cellar/cellar.parquet
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6162a90362e286ebc66a8344f39c3fbc835dec85f3e1d51318b7b39181ef4709
-size 1426079196
data/cellar/create.py
DELETED
@@ -1,60 +0,0 @@
|
|
1 |
-
# /// script
|
2 |
-
# requires-python = ">=3.12"
|
3 |
-
# dependencies = [
|
4 |
-
# "datasets>=3.2.0",
|
5 |
-
# ]
|
6 |
-
# ///
|
7 |
-
|
8 |
-
from pathlib import Path
|
9 |
-
from typing import cast
|
10 |
-
from datasets import Dataset, load_dataset, concatenate_datasets
|
11 |
-
|
12 |
-
source = "cellar"
|
13 |
-
|
14 |
-
|
15 |
-
def convert_sample(example):
|
16 |
-
new_example = dict(
|
17 |
-
text_new=example["text"],
|
18 |
-
source=source,
|
19 |
-
domain="Legal",
|
20 |
-
license="cc-by-sa-4.0",
|
21 |
-
added="2025-03-25",
|
22 |
-
created="2024-01-01, 2026-01-01", # Scrape happened within these years - data likely written earlier
|
23 |
-
metadata={"source-pretty": "Cellar"},
|
24 |
-
)
|
25 |
-
|
26 |
-
return new_example
|
27 |
-
|
28 |
-
|
29 |
-
def main():
|
30 |
-
data_path = Path("/work/dfm-data/pre-training/cellar/documents")
|
31 |
-
data_paths = [p.as_posix() for p in data_path.glob("DAN*.jsonl.gz")]
|
32 |
-
dfs = []
|
33 |
-
for i, path in enumerate(data_paths):
|
34 |
-
print(i, path.split("/")[-1])
|
35 |
-
try:
|
36 |
-
ds = load_dataset(
|
37 |
-
"json", data_files=path, split="train"
|
38 |
-
) # a few datasets fail to load
|
39 |
-
dfs.append(ds)
|
40 |
-
print("\tSuccess")
|
41 |
-
except Exception:
|
42 |
-
print("\tFail")
|
43 |
-
|
44 |
-
ds = concatenate_datasets(dsets=dfs)
|
45 |
-
|
46 |
-
ds = cast(Dataset, ds)
|
47 |
-
|
48 |
-
ds = ds.map(convert_sample, remove_columns=ds.column_names)
|
49 |
-
ds = ds.rename_columns({"text_new": "text"})
|
50 |
-
ds = ds.add_column("id", [f"{source}_{i}" for i in range(len(ds))]) # type: ignore
|
51 |
-
ds = ds.select_columns(
|
52 |
-
["text", "source", "id", "added", "created", "license", "domain", "metadata"]
|
53 |
-
)
|
54 |
-
|
55 |
-
save_path = Path(__file__).parent / f"{source}.parquet"
|
56 |
-
ds.to_parquet(save_path)
|
57 |
-
|
58 |
-
|
59 |
-
if __name__ == "__main__":
|
60 |
-
main()
|
|
|
|
|
|
data/cellar/descriptive_stats.json
DELETED
@@ -1,9 +0,0 @@
-{
-  "number_of_samples": 63399,
-  "number_of_tokens": 1152074881,
-  "min_length_tokens": 7,
-  "max_length_tokens": 2599840,
-  "number_of_characters": 3866568270,
-  "min_length_characters": 14,
-  "max_length_characters": 37287484
-}
data/cellar/images/dist_document_length.png
DELETED
Git LFS Details
|
data/dannet/dannet.md
CHANGED
@@ -1,81 +1,84 @@
|
|
1 |
---
|
2 |
-
pretty_name: DanNet
|
3 |
language:
|
4 |
-
- da
|
5 |
-
license:
|
6 |
-
license_name: DanNet 1.0
|
7 |
size_categories:
|
8 |
-
- 10k-100k
|
9 |
task_categories:
|
10 |
-
- text-generation
|
11 |
-
- fill-mask
|
12 |
task_ids:
|
13 |
-
- language-modeling
|
14 |
-
source_datasets:
|
15 |
-
- danish-foundation-models/danish-gigaword
|
16 |
-
domains:
|
17 |
-
- Other
|
18 |
---
|
19 |
-
|
20 |
-
# Dataset Card for DanNet
|
21 |
-
|
22 |
-
<!-- START-SHORT DESCRIPTION -->
|
23 |
-
[DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet.
|
24 |
-
<!-- END-SHORT DESCRIPTION -->
|
25 |
-
|
26 |
-
|
27 |
-
A WordNet is a lexico-semantic network which shows the meaning of, and the relations between, words through named connections. It can be considered a machine-readable dictionary.
|
28 |
-
|
29 |
-
|
30 |
## Dataset Description
|
31 |
-
|
32 |
-
|
33 |
-
|
34 |
-
- **Number of samples**: 47.60K
|
35 |
-
- **Number of tokens (Llama 3)**: 1.48M
|
36 |
-
- **Average document length in tokens (min, max)**: 31.079364745919374 (2, 106)
|
37 |
-
<!-- END-DESC-STATS -->
|
38 |
-
|
39 |
-
|
40 |
-
|
41 |
-
## Dataset Structure
|
42 |
An example from the dataset looks as follows.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
43 |
|
|
|
|
|
|
|
44 |
|
45 |
-
|
46 |
-
```py
|
47 |
-
{
|
48 |
-
"id": "dannet_46506",
|
49 |
-
"text": "Når fodboldholdet fra 1. division i Ikast spiller hjemmekampe, lyder råbet ud over Ikast Stadion: We[...]",
|
50 |
-
"source": "dannet",
|
51 |
-
"added": "2020-09-24",
|
52 |
-
"created": "2000-01-01, 2022-01-01",
|
53 |
-
"token_count": 50
|
54 |
-
}
|
55 |
-
```
|
56 |
|
57 |
-
|
58 |
|
59 |
-
|
|
|
|
|
|
|
|
|
60 |
|
61 |
-
|
62 |
-
|
63 |
-
|
64 |
-
|
65 |
-
|
66 |
-
|
67 |
-
<!-- END-SAMPLE -->
|
68 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
69 |
|
70 |
-
|
|
|
|
|
|
|
|
|
|
|
71 |
|
72 |
-
|
73 |
-
|
74 |
-
|
75 |
-
|
76 |
-
|
77 |
|
|
|
78 |
|
|
|
|
|
|
|
|
|
|
|
|
|
79 |
|
80 |
## License Information
|
81 |
<details>
|
@@ -122,32 +125,3 @@ LICENSEE agrees to preserve same.
|
|
122 |
DanNet 2.1 Copyright 2009-12 by University of Copenhagen and Society for Danish
|
123 |
</p>
|
124 |
</details>
|
125 |
-
|
126 |
-
|
127 |
-
|
128 |
-
## Additional Information
|
129 |
-
|
130 |
-
<!-- TODO:
|
131 |
-
Add issue on:
|
132 |
-
|
133 |
-
Potential improvements for dannet
|
134 |
-
|
135 |
-
I imagine that there is a lot of information in DanNet
|
136 |
-
that could be used to create training datasets for LLMs (more than what is already present)
|
137 |
-
-->
|
138 |
-
|
139 |
-
### Citation Information
|
140 |
-
|
141 |
-
This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
|
142 |
-
|
143 |
-
> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
|
144 |
-
|
145 |
-
```bash
|
146 |
-
@inproceedings{dagw,
|
147 |
-
title = {{The Danish Gigaword Corpus}},
|
148 |
-
author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
|
149 |
-
year = 2021,
|
150 |
-
booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
|
151 |
-
publisher = {NEALT}
|
152 |
-
}
|
153 |
-
```
|
|
|
1 |
---
|
2 |
+
pretty_name: DanNet (Danish WordNet)
|
3 |
language:
|
4 |
+
- da
|
5 |
+
license: DanNet 1.0 License
|
6 |
+
license_name: DanNet 1.0 License
|
7 |
size_categories:
|
8 |
+
- 10k-100k
|
9 |
task_categories:
|
10 |
+
- text-generation
|
11 |
+
- fill-mask
|
12 |
task_ids:
|
13 |
+
- language-modeling
|
|
|
|
|
|
|
|
|
14 |
---
|
15 |
+
# Dataset Card for DanNet (Danish WordNet)
|
|
|
|
|
16 |
## Dataset Description
|
17 |
+
- **Number of records:** 49040
|
18 |
+
- **Languages:** Danish
|
19 |
+
## Dataset Structure
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
20 |
An example from the dataset looks as follows.
|
21 |
+
```yaml
|
22 |
+
{
|
23 |
+
'text': 'Når fodboldholdet fra 1. division i Ikast spiller ',
|
24 |
+
'source': 'dannet',
|
25 |
+
'id': 'dannet_46506',
|
26 |
+
'added': '2020-09-24',
|
27 |
+
'created': '2000-01-01, 2022-01-01',
|
28 |
+
'metadata': {
|
29 |
+
'domain': 'dannet',
|
30 |
+
'license': 'Commercial Use of DanNet
|
31 |
|
32 |
+
DanNet may be used in commercial applications in accordance with the following
|
33 |
+
license agreement. An attorney representing the commercial interest should
|
34 |
+
review this DanNet license with respect to the intended use.
|
35 |
|
36 |
+
DanNet 1.0 License
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
37 |
|
38 |
+
DanNet Release 2.1
|
39 |
|
40 |
+
This software and database is being provided to you, the LICENSEE, by University
|
41 |
+
of Copenhagen and Society for Danish Language and Literature under the following
|
42 |
+
license. By obtaining, using and/or copying this software and database, you
|
43 |
+
agree that you have read, understood, and will comply with these terms and
|
44 |
+
conditions.
|
45 |
|
46 |
+
Permission to use, copy, modify and distribute this software and database and
|
47 |
+
its documentation for any purpose and without fee or royalty is hereby granted,
|
48 |
+
provided that you agree to comply with the following copyright notice and
|
49 |
+
statements, including the disclaimer, and that the same appear on ALL copies of
|
50 |
+
the software, database and documentation, including modifications that you make
|
51 |
+
for internal use or for distribution.
|
|
|
52 |
|
53 |
+
THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND UNIVERSITY OF COPENHAGEN and
|
54 |
+
SOCIETY FOR DANISH LANGUAGE AND LITERATURE MAKE NO REPRESENTATIONS OR
|
55 |
+
WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION,
|
56 |
+
UNIVERSITY OF COPENHAGEN AND SOCIETY FOR DANISH LANGUAGE AND LITERATURE MAKE NO
|
57 |
+
REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR
|
58 |
+
PURPOSE OR THAT THE USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL
|
59 |
+
NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.
|
60 |
|
61 |
+
The names of University of Copenhagen and Society for Danish Language and
|
62 |
+
Literature may not be used in advertising or publicity pertaining to
|
63 |
+
distribution of the software and/or database. Title to copyright in this
|
64 |
+
software, database and any associated documentation shall at all times remain
|
65 |
+
with University of Copenhagen and Society for Danish Language and Literature and
|
66 |
+
LICENSEE agrees to preserve same.
|
67 |
|
68 |
+
DanNet 2.1 Copyright 2009-12 by University of Copenhagen and Society for Danish',
|
69 |
+
'source-pretty': 'DanNet (Danish WordNet)'
|
70 |
+
}
|
71 |
+
}
|
72 |
+
```
|
73 |
|
74 |
+
## Data Fields
|
75 |
|
76 |
+
- **id**: source-specific identifier.
|
77 |
+
- **text**: textual content of the document.
|
78 |
+
- **source**: source of the data.
|
79 |
+
- **added**: timestamp ai2 acquired this data.
|
80 |
+
- **created**": timestamp when original document was created (best-guess if not available)
|
81 |
+
- **metadata**: source-specific metadata.
|
82 |
|
83 |
## License Information
|
84 |
<details>
|
|
|
125 |
DanNet 2.1 Copyright 2009-12 by University of Copenhagen and Society for Danish
|
126 |
</p>
|
127 |
</details>
|
|
|
|
|
data/dannet/dannet.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:b9006617e35f568e7b7e4dacc87c4a490cf0a9170bd4e91488de77e00d3fb38c
+size 4487008
data/dannet/descriptive_stats.json
DELETED
@@ -1,9 +0,0 @@
-{
-  "number_of_samples": 47603,
-  "number_of_tokens": 1479471,
-  "min_length_tokens": 2,
-  "max_length_tokens": 106,
-  "number_of_characters": 4326120,
-  "min_length_characters": 2,
-  "max_length_characters": 340
-}
data/dannet/images/dist_document_length.png
DELETED
Git LFS Details
|
data/danske-taler/create.py
DELETED
@@ -1,314 +0,0 @@
|
|
1 |
-
# /// script
|
2 |
-
# requires-python = ">=3.12"
|
3 |
-
# dependencies = [
|
4 |
-
# "beautifulsoup4==4.13.3",
|
5 |
-
# "datasets>=3.0.0",
|
6 |
-
# "transformers",
|
7 |
-
# "dynaword"
|
8 |
-
# ]
|
9 |
-
# [tool.uv.sources]
|
10 |
-
# dynaword = { git = "https://huggingface.co/datasets/danish-foundation-models/danish-dynaword", rev = "00e7f2aee7f7ad2da423419f77ecbb9c0536de0d" }
|
11 |
-
# ///
|
12 |
-
"""
|
13 |
-
Danske Taler API Downloader
|
14 |
-
This script downloads speeches/articles from the Danske Taler API: https://www.dansketaler.dk/api/v1
|
15 |
-
|
16 |
-
It saves it into the following structure:
|
17 |
-
|
18 |
-
```
|
19 |
-
{
|
20 |
-
"text": "Lav et referat af nedenstående tekst:\n\nTekst:\nOpdatering: Manden er nu fundet af Nordjyllands Politi[...]",
|
21 |
-
"source": "nordjyllandnews",
|
22 |
-
"id": "nordjyllandnews_0",
|
23 |
-
"added": "2024-12-16",
|
24 |
-
"created": "2000-01-01, 2024-01-01",
|
25 |
-
"license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
|
26 |
-
"domain": "News",
|
27 |
-
"metadata": {
|
28 |
-
"source-pretty": "Nordjylland News"
|
29 |
-
}
|
30 |
-
}
|
31 |
-
```
|
32 |
-
|
33 |
-
Note: To run this script, you need to set `GIT_LFS_SKIP_SMUDGE=1` to be able to install dynaword:
|
34 |
-
|
35 |
-
```bash
|
36 |
-
GIT_LFS_SKIP_SMUDGE=1 uv run data/memo/create.py
|
37 |
-
```
|
38 |
-
|
39 |
-
This second version fixed previous issues with the download and processing of the Danish Memo repository:
|
40 |
-
https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/67
|
41 |
-
"""
|
42 |
-
|
43 |
-
import logging
|
44 |
-
import time
|
45 |
-
from datetime import date
|
46 |
-
from pathlib import Path
|
47 |
-
from typing import Any
|
48 |
-
|
49 |
-
from datasets import Dataset
|
50 |
-
import pandas as pd
|
51 |
-
import requests
|
52 |
-
from bs4 import BeautifulSoup, NavigableString
|
53 |
-
from tqdm import tqdm
|
54 |
-
|
55 |
-
from dynaword.process_dataset import (
|
56 |
-
add_token_count,
|
57 |
-
ensure_column_order,
|
58 |
-
remove_duplicate_text,
|
59 |
-
remove_empty_texts,
|
60 |
-
)
|
61 |
-
|
62 |
-
logger = logging.getLogger(__name__)
|
63 |
-
|
64 |
-
# Configuration
|
65 |
-
API_BASE_URL = "https://www.dansketaler.dk/api/v1"
|
66 |
-
|
67 |
-
KNOWN_HTML_TAGS = {
|
68 |
-
"html",
|
69 |
-
"head",
|
70 |
-
"body",
|
71 |
-
"title",
|
72 |
-
"meta",
|
73 |
-
"link",
|
74 |
-
"script",
|
75 |
-
"style",
|
76 |
-
"div",
|
77 |
-
"span",
|
78 |
-
"p",
|
79 |
-
"a",
|
80 |
-
"ul",
|
81 |
-
"ol",
|
82 |
-
"li",
|
83 |
-
"table",
|
84 |
-
"tr",
|
85 |
-
"td",
|
86 |
-
"th",
|
87 |
-
"img",
|
88 |
-
"h1",
|
89 |
-
"h2",
|
90 |
-
"h3",
|
91 |
-
"h4",
|
92 |
-
"h5",
|
93 |
-
"h6",
|
94 |
-
"strong",
|
95 |
-
"em",
|
96 |
-
"br",
|
97 |
-
"hr",
|
98 |
-
"form",
|
99 |
-
"input",
|
100 |
-
"button",
|
101 |
-
"label",
|
102 |
-
"select",
|
103 |
-
"option",
|
104 |
-
"textarea",
|
105 |
-
"iframe",
|
106 |
-
"nav",
|
107 |
-
"footer",
|
108 |
-
"header",
|
109 |
-
"main",
|
110 |
-
"section",
|
111 |
-
"article",
|
112 |
-
}
|
113 |
-
|
114 |
-
|
115 |
-
def contains_html_tags(text):
|
116 |
-
soup = BeautifulSoup(str(text), "html.parser")
|
117 |
-
return any(tag.name in KNOWN_HTML_TAGS for tag in soup.find_all())
|
118 |
-
|
119 |
-
|
120 |
-
def get_all_speeches() -> list[dict[str, Any]]:
|
121 |
-
# fetch first page, notably the total number of pages
|
122 |
-
url = f"{API_BASE_URL}/speeches?per_page=50"
|
123 |
-
response = requests.get(url)
|
124 |
-
response.raise_for_status()
|
125 |
-
speeches = response.json()
|
126 |
-
meta = speeches["meta"]
|
127 |
-
total_pages = meta["total_pages"]
|
128 |
-
|
129 |
-
# fetch all pages
|
130 |
-
all_speeches = []
|
131 |
-
for page in range(1, total_pages + 1):
|
132 |
-
url = f"{API_BASE_URL}/speeches?per_page=50&page={page}"
|
133 |
-
response = requests.get(url)
|
134 |
-
response.raise_for_status()
|
135 |
-
speeches = response.json()
|
136 |
-
all_speeches.extend(speeches["speeches"])
|
137 |
-
|
138 |
-
return all_speeches
|
139 |
-
|
140 |
-
|
141 |
-
def fetch_speech_content(
|
142 |
-
url: str, max_retries: int = 3, backoff_factor: float = 0.5
|
143 |
-
) -> tuple[str | None, str]:
|
144 |
-
"""
|
145 |
-
Fetches the license div from the page with retry logic.
|
146 |
-
|
147 |
-
Args:
|
148 |
-
url: The URL to fetch the license div from
|
149 |
-
max_retries: Maximum number of retry attempts
|
150 |
-
backoff_factor: Factor to determine exponential backoff time between retries
|
151 |
-
|
152 |
-
Returns:
|
153 |
-
The text content of the license div if found, None otherwise
|
154 |
-
"""
|
155 |
-
retries = 0
|
156 |
-
|
157 |
-
while retries <= max_retries:
|
158 |
-
try:
|
159 |
-
response = requests.get(url, timeout=10)
|
160 |
-
response.raise_for_status()
|
161 |
-
|
162 |
-
soup = BeautifulSoup(response.text, "html.parser")
|
163 |
-
license_div = soup.find("div", class_="speech-copyright")
|
164 |
-
speech_div = soup.find("div", class_="speech-article-content")
|
165 |
-
speech = ""
|
166 |
-
if speech_div:
|
167 |
-
# Iterate over the children of the found div
|
168 |
-
for child_div in speech_div.children: # type: ignore
|
169 |
-
if child_div.name == "div": # type: ignore
|
170 |
-
current_paragraph = []
|
171 |
-
for content in child_div.contents: # type: ignore
|
172 |
-
if isinstance(content, NavigableString):
|
173 |
-
# Append text content
|
174 |
-
current_paragraph.append(str(content).strip())
|
175 |
-
elif content.name == "br":
|
176 |
-
# If a <br> is encountered, join and print the current paragraph, then reset
|
177 |
-
if current_paragraph:
|
178 |
-
speech += "".join(current_paragraph)
|
179 |
-
speech += "\n" # Add a newline for paragraph break
|
180 |
-
current_paragraph = []
|
181 |
-
# Print any remaining text in the current_paragraph list
|
182 |
-
if current_paragraph:
|
183 |
-
speech += "".join(current_paragraph)
|
184 |
-
speech += "\n" # Add a newline for paragraph break
|
185 |
-
|
186 |
-
return (license_div.text if license_div else None, speech)
|
187 |
-
|
188 |
-
except (requests.RequestException, AttributeError) as e:
|
189 |
-
retries += 1
|
190 |
-
|
191 |
-
if retries > max_retries:
|
192 |
-
logger.info(
|
193 |
-
f"Failed to fetch license after {max_retries} attempts: {str(e)}"
|
194 |
-
)
|
195 |
-
return (None, "")
|
196 |
-
|
197 |
-
# Calculate backoff time using exponential backoff
|
198 |
-
wait_time = backoff_factor * (2 ** (retries - 1))
|
199 |
-
logger.info(
|
200 |
-
f"Attempt {retries} failed. Retrying in {wait_time:.2f} seconds..."
|
201 |
-
)
|
202 |
-
time.sleep(wait_time)
|
203 |
-
|
204 |
-
return (None, "")
|
205 |
-
|
206 |
-
|
207 |
-
def convert_to_license(license_information: str | None) -> str | None:
|
208 |
-
"""checks if "Materialet er fri af ophavsret" is in the page"""
|
209 |
-
|
210 |
-
if license_information and (
|
211 |
-
("Materialet er fri af ophavsret" in license_information)
|
212 |
-
or ("Materialet er fri af ophvasret" in license_information)
|
213 |
-
or ("Ophavsretten er bortfaldet" in license_information)
|
214 |
-
or ("Manuskriptet er fri af ophavsret" in license_information)
|
215 |
-
or ("Offentlig " == license_information)
|
216 |
-
):
|
217 |
-
return "cc0"
|
218 |
-
|
219 |
-
return license_information
|
220 |
-
|
221 |
-
|
222 |
-
def convert_to_row(speech_meta: dict[str, Any]) -> dict[str, Any]:
|
223 |
-
speech_id = speech_meta["id"]
|
224 |
-
|
225 |
-
date_of_speech = speech_meta["date"]["iso_date"]
|
226 |
-
date_of_speech_start = f"{date_of_speech}"
|
227 |
-
date_of_speech_end = f"{date_of_speech}"
|
228 |
-
|
229 |
-
(license_information, speech) = fetch_speech_content(speech_meta["url"])
|
230 |
-
|
231 |
-
row = {
|
232 |
-
"id": f"danske-taler_{speech_id}",
|
233 |
-
"text": speech,
|
234 |
-
"source": "danske-taler",
|
235 |
-
# current date
|
236 |
-
"added": date.today().isoformat(),
|
237 |
-
"created": f"{date_of_speech_start}, {date_of_speech_end}",
|
238 |
-
"license_information": license_information,
|
239 |
-
"domain": "Spoken",
|
240 |
-
"metadata": {"source-pretty": "Danske Taler"},
|
241 |
-
}
|
242 |
-
|
243 |
-
return row
|
244 |
-
|
245 |
-
|
246 |
-
def download_speeches() -> pd.DataFrame:
|
247 |
-
logger.info("Fetching all speeches from Danske Taler API")
|
248 |
-
speeches = get_all_speeches()
|
249 |
-
logger.info(f"Found {len(speeches)} speeches")
|
250 |
-
|
251 |
-
rows = []
|
252 |
-
for speech in tqdm(speeches):
|
253 |
-
row = convert_to_row(speech)
|
254 |
-
rows.append(row)
|
255 |
-
|
256 |
-
logger.info(f"Saving {len(rows)} speeches to dataset")
|
257 |
-
df = pd.DataFrame(rows)
|
258 |
-
return df
|
259 |
-
|
260 |
-
|
261 |
-
def main():
|
262 |
-
save_path = Path(__file__).parent / "danske-taler.parquet"
|
263 |
-
save_path_all = Path(__file__).parent / "tmp" / "danske-taler-all.parquet"
|
264 |
-
save_path_all.parent.mkdir(parents=False, exist_ok=True)
|
265 |
-
|
266 |
-
if save_path_all.exists():
|
267 |
-
logger.info(f"Loading dataset from {save_path_all}")
|
268 |
-
df = pd.read_parquet(save_path_all)
|
269 |
-
else:
|
270 |
-
logger.info(f"Downloading speeches and saving to {save_path_all}")
|
271 |
-
df = download_speeches()
|
272 |
-
df.to_parquet(save_path_all)
|
273 |
-
|
274 |
-
licenses = [convert_to_license(license) for license in df["license_information"]]
|
275 |
-
df["license"] = licenses
|
276 |
-
|
277 |
-
uniques_licenses = set(df["license"].tolist())
|
278 |
-
logger.info("Unique licenses:")
|
279 |
-
for license in uniques_licenses:
|
280 |
-
logger.info(f"\t{license}")
|
281 |
-
|
282 |
-
# remove documents without a cc0 license
|
283 |
-
len_df = len(df)
|
284 |
-
df = df[df["license"] == "cc0"]
|
285 |
-
logger.info(f"Removed {len_df - len(df)} documents without a cc0 license")
|
286 |
-
|
287 |
-
dataset = Dataset.from_pandas(df, preserve_index=False)
|
288 |
-
|
289 |
-
dataset = remove_empty_texts(dataset) # remove rows with empty text
|
290 |
-
dataset = remove_duplicate_text(dataset) # remove rows with duplicate text
|
291 |
-
dataset = add_token_count(dataset)
|
292 |
-
dataset = ensure_column_order(dataset)
|
293 |
-
|
294 |
-
assert len(set(dataset["id"])) == len(dataset), "IDs are not unique"
|
295 |
-
assert len(set(dataset["text"])) == len(dataset), "Texts are not unique"
|
296 |
-
assert len(set(df["license"])) == 1, "Multiple licenses found"
|
297 |
-
|
298 |
-
# check for html tags in text
|
299 |
-
assert not df["text"].apply(contains_html_tags).any(), "HTML tags found in text"
|
300 |
-
|
301 |
-
dataset.to_parquet(save_path)
|
302 |
-
|
303 |
-
|
304 |
-
if __name__ == "__main__":
|
305 |
-
log_path = Path(__file__).parent / "danske-taler.log"
|
306 |
-
logging.basicConfig(
|
307 |
-
level=logging.INFO,
|
308 |
-
format="%(asctime)s - %(levelname)s - %(message)s",
|
309 |
-
handlers=[
|
310 |
-
logging.StreamHandler(),
|
311 |
-
logging.FileHandler(log_path),
|
312 |
-
],
|
313 |
-
)
|
314 |
-
main()
|
|
|
|
|
|
|
|
|
data/danske-taler/danske-taler.log
DELETED
@@ -1,167 +0,0 @@
|
|
1 |
-
2025-03-29 14:14:08,846 - INFO - Downloading speeches and saving to /work/githubs/tmp/danish-dynaword/data/danske-taler/tmp/danske-taler-all.parquet
|
2 |
-
2025-03-29 14:14:08,847 - INFO - Fetching all speeches from Danske Taler API
|
3 |
-
2025-03-29 14:15:19,326 - INFO - Found 4725 speeches
|
4 |
-
13%|██████████▏ | 597/4725 [01:22<11:15, 6.11it/s]Attempt 1 failed. Retrying in 0.50 seconds...
|
5 |
-
Attempt 2 failed. Retrying in 1.00 seconds...
|
6 |
-
Attempt 3 failed. Retrying in 2.00 seconds...
|
7 |
-
Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/niels-hoejlund-pedersens-translokationstale-2020
|
8 |
-
17%|██████████████ | 818/4725 [01:57<09:00, 7.23it/s]Attempt 1 failed. Retrying in 0.50 seconds...
|
9 |
-
Attempt 2 failed. Retrying in 1.00 seconds...
|
10 |
-
Attempt 3 failed. Retrying in 2.00 seconds...
|
11 |
-
Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/katrine-lykke-pedersens-tale-til-unge-om-haab-i-en-coronatid
|
12 |
-
17%|█████████████▋ | 820/4725 [02:01<1:05:16, 1.00s/it]Attempt 1 failed. Retrying in 0.50 seconds...
|
13 |
-
Attempt 2 failed. Retrying in 1.00 seconds...
|
14 |
-
Attempt 3 failed. Retrying in 2.00 seconds...
|
15 |
-
Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/anastacia-halkens-tale-til-unge-om-haab-i-en-coronatid
|
16 |
-
18%|██████████████▏ | 828/4725 [02:07<17:53, 3.63it/s]Attempt 1 failed. Retrying in 0.50 seconds...
|
17 |
-
Attempt 2 failed. Retrying in 1.00 seconds...
|
18 |
-
Attempt 3 failed. Retrying in 2.00 seconds...
|
19 |
-
Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/thomas-vinterbergs-tale-ved-modtagelsen-af-oscar-prisen
|
20 |
-
22%|█████████████████▋ | 1042/4725 [02:41<10:04, 6.09it/s]Attempt 1 failed. Retrying in 0.50 seconds...
|
21 |
-
Attempt 2 failed. Retrying in 1.00 seconds...
|
22 |
-
Attempt 3 failed. Retrying in 2.00 seconds...
|
23 |
-
Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/pernille-vermunds-tale-ved-folketingets-aabningsdebat-2021
|
24 |
-
22%|█████████████████▉ | 1059/4725 [02:48<08:22, 7.30it/s]Attempt 1 failed. Retrying in 0.50 seconds...
|
25 |
-
Attempt 2 failed. Retrying in 1.00 seconds...
|
26 |
-
Attempt 3 failed. Retrying in 2.00 seconds...
|
27 |
-
Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/pernille-vermunds-tale-ved-nye-borgerliges-aarsmoede-2021
|
28 |
-
22%|█████████████████▌ | 1061/4725 [02:52<1:01:08, 1.00s/it]Attempt 1 failed. Retrying in 0.50 seconds...
|
29 |
-
Attempt 2 failed. Retrying in 1.00 seconds...
|
30 |
-
Attempt 3 failed. Retrying in 2.00 seconds...
|
31 |
-
Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/mette-thiesens-tale-ved-nye-borgerliges-aarsmoede-2021
|
32 |
-
22%|█████████████████▌ | 1062/4725 [02:57<2:00:22, 1.97s/it]Attempt 1 failed. Retrying in 0.50 seconds...
|
33 |
-
Attempt 2 failed. Retrying in 1.00 seconds...
|
34 |
-
Attempt 3 failed. Retrying in 2.00 seconds...
|
35 |
-
Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/peter-seier-christensens-tale-ved-nye-borgerliges-aarsmoede-2021
|
36 |
-
34%|███████████████████████████▍ | 1617/4725 [04:25<07:09, 7.24it/s]Attempt 1 failed. Retrying in 0.50 seconds...
|
37 |
-
Attempt 2 failed. Retrying in 1.00 seconds...
|
38 |
-
Attempt 3 failed. Retrying in 2.00 seconds...
|
39 |
-
Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/silke-ena-svares-tale-ved-demonstrationen-for-born-og-unge
|
40 |
-
100%|████████████████████████████████████████████████████████████████████████████████| 4725/4725 [12:43<00:00, 6.19it/s]
|
41 |
-
2025-03-29 14:28:02,454 - INFO - Saving 4725 speeches to dataset
|
42 |
-
2025-03-29 14:28:03,330 - INFO - Unique licenses:
|
43 |
-
2025-03-29 14:28:03,331 - INFO - None
|
44 |
-
2025-03-29 14:28:03,331 - INFO - Materialet er beskyttet af ophavsret
|
45 |
-
2025-03-29 14:28:03,331 - INFO - cc0
|
46 |
-
2025-03-29 14:28:03,331 - INFO - Materialet er beskyttet af ophavsret, da talen ikke er holdt i offentligheden.
|
47 |
-
2025-03-29 14:28:03,331 - INFO - Materialet er omfattet af ophavsret
|
48 |
-
2025-03-29 14:28:03,331 - INFO - Manuskript taget fra ft.dk. med tilladelse fra udgiver.
|
49 |
-
2025-03-29 14:28:03,331 - INFO - Materialet et beskyttet af ophavsret
|
50 |
-
2025-03-29 14:28:03,331 - INFO - Manuskript taget fra ft.dk med tilladelse fra udgiver.
|
51 |
-
2025-03-29 14:28:03,331 - INFO - Materialet er beskyttet af ophavsret
|
52 |
-
2025-03-29 14:28:03,331 - INFO - Materialet er beskyttet af ophavsret
|
53 |
-
2025-03-29 14:28:03,461 - INFO - Removed 2063 documents without a cc0 license
|
54 |
-
2025-03-29 14:28:03,541 - INFO - Removed 0 duplicate ids
|
55 |
-
2025-03-29 14:28:03,549 - INFO - Removed 2 rows with empty text
|
56 |
-
2025-03-29 14:28:03,631 - INFO - Removed 2 rows with duplicate text
|
57 |
-
Creating parquet from Arrow format: 100%|██████████████████████████████████████████████████| 3/3 [00:00<00:00, 11.33ba/s]
|
58 |
-
2025-06-24 13:03:05,424 - INFO - Found 5103 speeches
|
59 |
-
2025-06-24 13:04:19,375 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
60 |
-
2025-06-24 13:04:29,734 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
61 |
-
2025-06-24 13:04:30,613 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
62 |
-
2025-06-24 13:04:31,856 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
63 |
-
2025-06-24 13:04:34,098 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/niels-hoejlund-pedersens-translokationstale-2020
|
64 |
-
2025-06-24 13:05:10,223 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
65 |
-
2025-06-24 13:05:11,113 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
66 |
-
2025-06-24 13:05:12,575 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
67 |
-
2025-06-24 13:05:14,814 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/katrine-lykke-pedersens-tale-til-unge-om-haab-i-en-coronatid
|
68 |
-
2025-06-24 13:05:15,208 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
69 |
-
2025-06-24 13:05:15,922 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
70 |
-
2025-06-24 13:05:17,117 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
71 |
-
2025-06-24 13:05:19,583 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/anastacia-halkens-tale-til-unge-om-haab-i-en-coronatid
|
72 |
-
2025-06-24 13:05:20,875 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
73 |
-
2025-06-24 13:05:21,619 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
74 |
-
2025-06-24 13:05:22,844 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
75 |
-
2025-06-24 13:05:25,074 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/thomas-vinterbergs-tale-ved-modtagelsen-af-oscar-prisen
|
76 |
-
2025-06-24 13:06:01,599 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
77 |
-
2025-06-24 13:06:02,313 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
78 |
-
2025-06-24 13:06:03,588 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
79 |
-
2025-06-24 13:06:05,817 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/pernille-vermunds-tale-ved-folketingets-aabningsdebat-2021
|
80 |
-
2025-06-24 13:06:08,990 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
81 |
-
2025-06-24 13:06:09,675 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
82 |
-
2025-06-24 13:06:10,912 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
83 |
-
2025-06-24 13:06:13,120 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/pernille-vermunds-tale-ved-nye-borgerliges-aarsmoede-2021
|
84 |
-
2025-06-24 13:06:13,512 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
85 |
-
2025-06-24 13:06:14,230 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
86 |
-
2025-06-24 13:06:15,462 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
87 |
-
2025-06-24 13:06:17,720 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/mette-thiesens-tale-ved-nye-borgerliges-aarsmoede-2021
|
88 |
-
2025-06-24 13:06:17,920 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
89 |
-
2025-06-24 13:06:18,656 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
90 |
-
2025-06-24 13:06:19,902 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
91 |
-
2025-06-24 13:06:22,132 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/peter-seier-christensens-tale-ved-nye-borgerliges-aarsmoede-2021
|
92 |
-
2025-06-24 13:07:56,628 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
93 |
-
2025-06-24 13:07:57,353 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
94 |
-
2025-06-24 13:07:58,586 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
95 |
-
2025-06-24 13:08:00,850 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/silke-ena-svares-tale-ved-demonstrationen-for-born-og-unge
|
96 |
-
2025-06-24 13:19:38,142 - INFO - Saving 5103 speeches to dataset
|
97 |
-
2025-06-24 13:19:38,322 - INFO - Unique licenses:
|
98 |
-
2025-06-24 13:19:38,322 - INFO - None
|
99 |
-
2025-06-24 13:19:38,322 - INFO - cc0
|
100 |
-
2025-06-24 13:19:38,322 - INFO - Manuskript taget fra ft.dk. med tilladelse fra udgiver.
|
101 |
-
2025-06-24 13:19:38,322 - INFO - Manuskript tilsendt af taler og udgivet af Danske Taler med tilladelse fra taler.
|
102 |
-
2025-06-24 13:19:38,322 - INFO - Materialet er beskyttet af ophavsret, da talen ikke er holdt i offentligheden.
|
103 |
-
2025-06-24 13:19:38,322 - INFO - Materialet er beskyttet af ophavsret
|
104 |
-
2025-06-24 13:19:38,322 - INFO - Materialet er beskyttet af ophavsret
|
105 |
-
2025-06-24 13:19:38,322 - INFO - Materialet et beskyttet af ophavsret
|
106 |
-
2025-06-24 13:19:38,322 - INFO - Manuskript taget fra ft.dk med tilladelse fra udgiver.
|
107 |
-
2025-06-24 13:19:38,322 - INFO - Materialet er beskyttet af ophavsret
|
108 |
-
2025-06-24 13:19:38,322 - INFO - Materialet er omfattet af ophavsret
|
109 |
-
2025-06-24 13:19:38,325 - INFO - Removed 2188 documents without a cc0 license
|
110 |
-
2025-06-24 13:19:38,326 - INFO - Removed 0 duplicate ids
|
111 |
-
2025-06-24 13:19:38,332 - INFO - Removed 1 rows with empty text
|
112 |
-
2025-06-24 13:19:38,345 - INFO - Removed 2 rows with duplicate text
2025-06-24 14:44:36,089 - INFO - Downloading speeches and saving to /Users/kristianjensen/Documents/danish-dynaword/data/danske-taler/tmp/danske-taler-all.parquet
|
113 |
-
2025-06-24 14:44:36,089 - INFO - Fetching all speeches from Danske Taler API
|
114 |
-
2025-06-24 14:45:43,887 - INFO - Found 5107 speeches
|
115 |
-
2025-06-24 14:46:53,929 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
116 |
-
2025-06-24 14:46:54,627 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
117 |
-
2025-06-24 14:46:55,824 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
118 |
-
2025-06-24 14:46:58,015 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/niels-hoejlund-pedersens-translokationstale-2020
|
119 |
-
2025-06-24 14:47:34,505 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
120 |
-
2025-06-24 14:47:35,215 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
121 |
-
2025-06-24 14:47:36,514 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
122 |
-
2025-06-24 14:47:38,725 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/katrine-lykke-pedersens-tale-til-unge-om-haab-i-en-coronatid
|
123 |
-
2025-06-24 14:47:39,093 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
124 |
-
2025-06-24 14:47:39,798 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
125 |
-
2025-06-24 14:47:41,013 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
126 |
-
2025-06-24 14:47:43,253 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/anastacia-halkens-tale-til-unge-om-haab-i-en-coronatid
|
127 |
-
2025-06-24 14:47:44,528 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
128 |
-
2025-06-24 14:47:45,272 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
129 |
-
2025-06-24 14:47:46,492 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
130 |
-
2025-06-24 14:47:48,691 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/thomas-vinterbergs-tale-ved-modtagelsen-af-oscar-prisen
|
131 |
-
2025-06-24 14:48:26,340 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
132 |
-
2025-06-24 14:48:27,037 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
133 |
-
2025-06-24 14:48:28,248 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
134 |
-
2025-06-24 14:48:30,496 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/pernille-vermunds-tale-ved-folketingets-aabningsdebat-2021
|
135 |
-
2025-06-24 14:48:33,382 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
136 |
-
2025-06-24 14:48:34,125 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
137 |
-
2025-06-24 14:48:35,339 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
138 |
-
2025-06-24 14:48:37,570 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/pernille-vermunds-tale-ved-nye-borgerliges-aarsmoede-2021
|
139 |
-
2025-06-24 14:48:37,940 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
140 |
-
2025-06-24 14:48:38,663 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
141 |
-
2025-06-24 14:48:39,884 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
142 |
-
2025-06-24 14:48:42,101 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/mette-thiesens-tale-ved-nye-borgerliges-aarsmoede-2021
|
143 |
-
2025-06-24 14:48:42,357 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
144 |
-
2025-06-24 14:48:43,097 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
145 |
-
2025-06-24 14:48:44,340 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
146 |
-
2025-06-24 14:48:46,560 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/peter-seier-christensens-tale-ved-nye-borgerliges-aarsmoede-2021
|
147 |
-
2025-06-24 14:50:22,691 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
|
148 |
-
2025-06-24 14:50:23,446 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
|
149 |
-
2025-06-24 14:50:24,662 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
|
150 |
-
2025-06-24 14:50:26,911 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/silke-ena-svares-tale-ved-demonstrationen-for-born-og-unge
|
151 |
-
2025-06-24 15:02:20,338 - INFO - Saving 5107 speeches to dataset
|
152 |
-
2025-06-24 15:02:20,503 - INFO - Unique licenses:
|
153 |
-
2025-06-24 15:02:20,503 - INFO - None
|
154 |
-
2025-06-24 15:02:20,503 - INFO - cc0
|
155 |
-
2025-06-24 15:02:20,503 - INFO - Materialet et beskyttet af ophavsret
|
156 |
-
2025-06-24 15:02:20,503 - INFO - Materialet er beskyttet af ophavsret
|
157 |
-
2025-06-24 15:02:20,503 - INFO - Materialet er omfattet af ophavsret
|
158 |
-
2025-06-24 15:02:20,503 - INFO - Manuskript taget fra ft.dk. med tilladelse fra udgiver.
|
159 |
-
2025-06-24 15:02:20,503 - INFO - Materialet er beskyttet af ophavsret
|
160 |
-
2025-06-24 15:02:20,503 - INFO - Manuskript taget fra ft.dk med tilladelse fra udgiver.
|
161 |
-
2025-06-24 15:02:20,503 - INFO - Materialet er beskyttet af ophavsret
|
162 |
-
2025-06-24 15:02:20,503 - INFO - Materialet er beskyttet af ophavsret, da talen ikke er holdt i offentligheden.
|
163 |
-
2025-06-24 15:02:20,503 - INFO - Manuskript tilsendt af taler og udgivet af Danske Taler med tilladelse fra taler.
|
164 |
-
2025-06-24 15:02:20,506 - INFO - Removed 2191 documents without a cc0 license
|
165 |
-
2025-06-24 15:02:20,508 - INFO - Removed 0 duplicate ids
|
166 |
-
2025-06-24 15:02:20,516 - INFO - Removed 2 rows with empty text
|
167 |
-
2025-06-24 15:02:20,529 - INFO - Removed 2 rows with duplicate text
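The retries visible in the log above (delays of 0.50, 1.00 and 2.00 seconds before giving up) follow a simple exponential backoff. A minimal sketch of such a helper is shown below; the function name, parameters and use of `requests` are illustrative assumptions, not the repository's actual implementation.

```python
import logging
import time

import requests

logger = logging.getLogger(__name__)


def fetch_with_retry(url: str, retries: int = 3, base_delay: float = 0.5) -> requests.Response:
    """Fetch a URL, doubling the wait after every failed attempt (0.5 s, 1.0 s, 2.0 s, ...)."""
    delay = base_delay
    last_error: Exception | None = None
    for attempt in range(1, retries + 1):
        try:
            response = requests.get(url, timeout=30)
            response.raise_for_status()
            return response
        except requests.RequestException as error:
            last_error = error
            logger.info("Attempt %d failed. Retrying in %.2f seconds...", attempt, delay)
            time.sleep(delay)
            delay *= 2
    raise RuntimeError(f"Failed to fetch license after {retries} attempts: {last_error}")
```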
data/danske-taler/danske-taler.md
DELETED
@@ -1,135 +0,0 @@
---
pretty_name: Danske Taler
language:
- da
license: cc0-1.0
license_name: CC-0
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
domains:
- Conversation
- Speeches
- Spoken
---

# Dataset Card for Danske Taler

<!-- START-SHORT DESCRIPTION -->
Danish Speeches from [dansketaler.dk](https://www.dansketaler.dk).
<!-- END-SHORT DESCRIPTION -->

The database dansketaler.dk is managed by Danske Taler, an independent institution that, in addition to managing the database, carries out cultural and democratic projects based on speeches.
Danske Taler states that it seeks to preserve our cultural heritage and to promote active citizenship and democratic confidence through its work.
Additionally, Danske Taler provides data to a number of online resources, including lex.dk, sprogteknologi.dk, and ordnet.dk.

The goal of the dataset is to collect historical and timely speeches and make them available to the public.

Learn more about Danske Taler by reading their [about us](https://www.dansketaler.dk/om-os) page.

> NOTE: Danske Taler is also collecting [sermons](https://www.dansketaler.dk/praedikener), but these are not included in this dataset.

## Dataset Description

<!-- START-DESC-STATS -->
- **Number of samples**: 2.91K
- **Number of tokens (Llama 3)**: 8.72M
- **Average document length in tokens (min, max)**: 3.00K (129, 53.40K)
<!-- END-DESC-STATS -->

## Dataset Structure
An example from the dataset looks as follows.

<!-- START-SAMPLE -->
```py
{
  "id": "danske-taler_281",
  "text": "Tyske landsmænd og -kvinder !\nSyv år er kort tid, en brøkdel af en enkel menneskelig normaltilværels[...]",
  "source": "danske-taler",
  "added": "2025-06-24",
  "created": "1940-01-30, 1940-01-30",
  "token_count": 3020
}
```

### Data Fields

An entry in the dataset consists of the following fields:

- `id` (`str`): A unique identifier for each document.
- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `added` (`str`): A date for when the document was added to this collection.
- `created` (`str`): A date range for when the document was originally created.
- `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
<!-- END-SAMPLE -->

### Dataset Statistics

<!-- START-DATASET PLOTS -->
<p align="center">
<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
</p>
<!-- END-DATASET PLOTS -->

## Additional Information

### Dataset Collection Process

This dataset was collected using the publicly available [API](https://www.dansketaler.dk/api/v1).

### Quality Assurance
We check for and remove exact duplicates, empty texts, and duplicate ids after the initial download. We additionally check whether the articles contain any HTML.

## Opportunities for Improvement

This dataset can be updated to include the latest available speeches.

We consider the quality of the current collection high, with a low chance of incorrect formatting, spelling errors, empty documents, or misformatted segments. This assessment is based on the quality assurance process, the source of the documents, and subjective inspection.

### License Information
Since the license information isn't available through the API, we collect it directly from the webpage of each speech under the header "Ophavsret".

For speeches where it is noted that *"Materialet er fri af ophavsret"* (The material is in the public domain) or similar, we assign a `cc0` license.

Such an example can be seen here:

> **Ophavsret**
>
> Materialet er fri af ophavsret. Taler, som er holdt i offentligheden, er ikke omfattet af ophavsret (Jf. ophavsretslovens § 26 og 32).
> Det betyder, at når en tale er indgået i Danske Talers database, kan den bruges af tredjeparter, fx til undervisning eller forskning.
>
> *source: [Ursula von der Leyens tale om europæisk forsvar og sikkerhed på Hærens Officersskole](https://www.dansketaler.dk/tale/tale-om-europaeisk-forsvar-og-sikkerhed-pa-haerens-officersskole)*

Speeches without this mention are removed. One such example:

> **Ophavsret**
>
> Materialet er beskyttet af ophavsret
>
> *Source: [Christina Egelunds tale ved Aarhus Universitets årsfest](https://www.dansketaler.dk/tale/christina-egelunds-tale-ved-aarhus-universitets-arsfest)*

We manually checked the unique set of license descriptions to see if any were open licenses that weren't covered by the current criteria.

For specific filtering criteria see the `create.py` script.

### Citation Information

No citation is applicable for this work. We recommend citing the Hugging Face repository.
data/danske-taler/danske-taler.parquet
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d007e606854f868febcf61a513302f7299ff35222fe9de487d17b9baaaedf248
size 16089529
data/danske-taler/descriptive_stats.json
DELETED
@@ -1,9 +0,0 @@
{
  "number_of_samples": 2912,
  "number_of_tokens": 8723951,
  "min_length_tokens": 129,
  "max_length_tokens": 53401,
  "number_of_characters": 26616908,
  "min_length_characters": 388,
  "max_length_characters": 155429
}
data/danske-taler/images/dist_document_length.png
DELETED
data/depbank/depbank.md
CHANGED
@@ -1,115 +1,51 @@
Old version (115 lines; lines removed by this change are prefixed with "-", unchanged context lines with two spaces; a few removed lines were not rendered and appear empty):

  ---
  pretty_name: Danish Dependency Treebank
  language:
- - da
  license: cc-by-sa-4.0
- license_name:
  size_categories:
- - 1-10k
  task_categories:
- - text-generation
- - fill-mask
  task_ids:
- - language-modeling
- source_datasets:
- - danish-foundation-models/danish-gigaword
- domains:
- - Other
  ---
-
  # Dataset Card for Danish Dependency Treebank
-
- <!-- START-SHORT DESCRIPTION -->
- The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT).
- <!-- END-SHORT DESCRIPTION -->
-
-
- The Danish UD treebank has been converted from the Danish Dependency Treebank (Buch-Kromman, 2003) into Universal Dependencies (UD). It consists of 5,512 sentences (100k words). The Danish source texts and the Danish part-of-speech tags were created by the PAROLE-DK project (Keson 1998) by the Danish Society for Language and Literature.
-
- While the dataset was initially intended as a rich annotation, this corpora only uses the raw text.
-
  ## Dataset Description
-
-
-
- - **Number of samples**: 536
- - **Number of tokens (Llama 3)**: 185.45K
- - **Average document length in tokens (min, max)**: 345.99626865671644 (261, 517)
- <!-- END-DESC-STATS -->
-
-
-
- ## Dataset Structure
  An example from the dataset looks as follows.
-
-
- <!-- START-SAMPLE -->
- ```py
  {
-
-
-
-
-
-
  }
  ```

-
-
- An entry in the dataset consists of the following fields:

- -
- -
- -
- -
- -
- -
- <!-- END-SAMPLE -->

-
-
-
-
-
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
  </p>
-
-
-
-
- ## Additional Information
-
- <!-- TODO:
- Add issue on:
-
- Potential improvements for depbank:
- 1) Pull directly from depbank
- 2) Compute texts into documents (seems like that is already done)
- 3) Add synthetic data instruction dataset
- - NER: What are the following names in this sentence
- - json output, html annotation, list at the end
- - POS:
- - Extract all POS-tags from the following sentence
- - Find all NOUNS in the following text
- - What POS tag does the ..
- - Tokenization:
- - split the following text into tokens
- - ...
- -->
-
- ### Citation Information
-
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
-
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
-
- ```bash
- @inproceedings{dagw,
- title = {{The Danish Gigaword Corpus}},
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
- year = 2021,
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
- publisher = {NEALT}
- }
- ```

New version (51 lines; lines added by this change are prefixed with "+"):

  ---
  pretty_name: Danish Dependency Treebank
  language:
+ - da
  license: cc-by-sa-4.0
+ license_name: Creative Commons Attribution Share Alike 4.0
  size_categories:
+ - 1-10k
  task_categories:
+ - text-generation
+ - fill-mask
  task_ids:
+ - language-modeling
  ---
  # Dataset Card for Danish Dependency Treebank
  ## Dataset Description
+ - **Number of records:** 536
+ - **Languages:** Danish
+ ## Dataset Sturcture
  An example from the dataset looks as follows.
+ ```yaml
  {
+ 'text': 'H.L. Hansen var en usædvanmlig og frodig personlig',
+ 'source': 'depbank',
+ 'id': 'depbank_0375',
+ 'added': '2024-05-16',
+ 'created': '2000-01-01, 2022-01-01',
+ 'metadata': {
+ 'domain': 'Other',
+ 'license': 'Attribution-ShareAlike 4.0 International',
+ 'source-pretty': 'Danish Dependency Treebank'
+ }
  }
  ```

+ ## Data Fields

+ - **id**: source-specific identifier.
+ - **text**: textual content of the document.
+ - **source**: source of the data.
+ - **added**: timestamp ai2 acquired this data.
+ - **created**": timestamp when original document was created (best-guess if not available)
+ - **metadata**: source-specific metadata.

+ ## License Information
+ <details>
+ <summary>Creative Commons Attribution Share Alike 4.0</summary>
+ <p>
+ Attribution-ShareAlike 4.0 International
  </p>
+ </details>
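The field layout documented in the new version of the card above can be inspected directly from the parquet file tracked below. A small sketch using pandas; the relative path assumes the Git LFS object has been pulled locally:

```python
import pandas as pd

# Assumes `git lfs pull` has materialised the parquet file locally.
df = pd.read_parquet("data/depbank/depbank.parquet")
print(df.columns.tolist())          # inspect the available fields
print(df.iloc[0]["text"][:100])     # peek at the first document
```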
data/depbank/depbank.parquet
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
  version https://git-lfs.github.com/spec/v1
+ oid sha256:3d4172e2ab4d7256ca5b76ad45b4d7326616e6679642056fdef20c5e3a8b1c62
+ size 392216
data/depbank/descriptive_stats.json
DELETED
@@ -1,9 +0,0 @@
{
  "number_of_samples": 536,
  "number_of_tokens": 185454,
  "min_length_tokens": 261,
  "max_length_tokens": 517,
  "number_of_characters": 546130,
  "min_length_characters": 773,
  "max_length_characters": 1398
}
data/depbank/images/dist_document_length.png
DELETED
data/domsdatabasen/create.py
DELETED
@@ -1,344 +0,0 @@
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "datasets",
#     "dynaword",
#     "marker-pdf",
#     "requests",
#     "torch",
# ]
#
# [tool.uv.sources]
# dynaword = { git = "https://huggingface.co/datasets/danish-foundation-models/danish-dynaword" }
# ///

"""
Script for downloading and processing the Domsdatabasen.dk site.

Note: To run this script, you need to set `GIT_LFS_SKIP_SMUDGE=1` to be able to install dynaword:

```bash
GIT_LFS_SKIP_SMUDGE=1 uv run data/domsdatabasen/create.py
```

Note: This script is designed to be run using a GPU.
"""

import atexit
import logging
import os
import csv
import time
from typing import cast

import torch

import gc
import requests
import torch.multiprocessing as mp
from pathlib import Path
from datetime import date, datetime

from datasets import Dataset, concatenate_datasets
from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered

from dynaword.process_dataset import (
    add_token_count,
    ensure_column_order,
    remove_duplicate_text,
    remove_empty_texts,
)

logger = logging.getLogger(__name__)

# ----------------- Config ------------------

PDF_DIR = Path(__file__).parent / "pdfs"
LOG_FILE = Path(__file__).parent / "progress_log.csv"
PARQUET_FILE = Path(__file__).parent / "domsdatabasen.parquet"
MAX_WORKERS = 10
RETRY_COUNT = 3
RETRY_DELAY = 2

# ----------------- Headers ------------------

HEADERS = {
    "Accept": "application/json, text/plain, */*",
    "Accept-Encoding": "gzip, deflate, br, zstd",
    "Accept-Language": "en-GB,en-US;q=0.9,en;q=0.8",
    "Connection": "keep-alive",
    "Content-Type": "application/json",
}


def init_csv():
    if not LOG_FILE.exists():
        with open(LOG_FILE, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(
                f,
                fieldnames=["document_id", "pdf_downloaded", "text_extracted", "error"],
            )
            writer.writeheader()


def append_log(document_id: str, pdf: bool, text: bool, error: str = ""):
    with open(LOG_FILE, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(
            f, fieldnames=["document_id", "pdf_downloaded", "text_extracted", "error"]
        )
        writer.writerow(
            {
                "document_id": document_id,
                "pdf_downloaded": int(pdf),
                "text_extracted": int(text),
                "error": error,
            }
        )


def load_existing_ids() -> set:
    if not PARQUET_FILE.exists():
        return set()
    ds = Dataset.from_parquet(str(PARQUET_FILE))
    ds = cast(Dataset, ds)
    return set(ds["id"])


# ----------------- Retry Helpers ------------------


def retry(func, *args, retries=RETRY_COUNT, delay=RETRY_DELAY, **kwargs):
    for attempt in range(retries):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            logger.warning(f"⚠️ Retry {attempt + 1}/{retries} failed: {e}")
            time.sleep(delay)
    raise RuntimeError(f"❌ All retries failed for {func.__name__}({args})")


# ----------------- PDF Download ------------------


def download_pdf(document: dict) -> Path | None:
    document_id = document["id"]
    out_path = PDF_DIR / f"document_{document_id}.pdf"
    if out_path.exists():
        logger.info(f"⏭️ Skipped PDF (exists): {document_id}")
        return out_path

    url = f"https://domsdatabasen.dk/webapi/api/Case/document/download/{document_id}"
    try:
        response = retry(requests.get, url, headers=HEADERS)
        if response.status_code == 200:
            with open(out_path, "wb") as f:
                f.write(response.content)
            logger.info(f"✅ Downloaded PDF: {document_id}")
            append_log(document_id, pdf=True, text=False)
            return out_path
        else:
            raise RuntimeError(f"Download failed: {response.status_code}")
    except Exception as e:
        append_log(document_id, pdf=False, text=False, error=str(e))
        return None


# ----------------- Parallel Extract Text ------------------


def worker_init():
    model_dict = create_model_dict()

    global model_refs
    model_refs = model_dict

    # Ensure we clean up the model references on exit
    atexit.register(worker_exit)


def worker_exit():
    global model_refs
    try:
        del model_refs
    except Exception:
        pass


def process_document(document: dict) -> dict | None:
    # from marker.output import text_from_rendered
    # from marker.converters.pdf import PdfConverter

    torch.set_num_threads(2)

    document_id = document["id"]
    verdict_date = document.get("verdictDateTime")
    pdf_path = PDF_DIR / f"document_{document_id}.pdf"

    if not pdf_path.exists():
        url = (
            f"https://domsdatabasen.dk/webapi/api/Case/document/download/{document_id}"
        )
        try:
            response = retry(requests.get, url, headers=HEADERS)
            if response.status_code == 200:
                with open(pdf_path, "wb") as f:
                    f.write(response.content)
                logger.info(f"✅ Downloaded PDF: {document_id}")
            else:
                raise RuntimeError(f"Download failed: {response.status_code}")
        except Exception as e:
            append_log(document_id, pdf=False, text=False, error=str(e))
            return None

    config = {"pdftext_workers": 1, "extract_images": False, "disable_tqdm": True}

    try:
        converter = PdfConverter(artifact_dict=model_refs, config=config)
        rendered = retry(converter, str(pdf_path))
        text, _, _ = text_from_rendered(rendered)
        logger.info(f"🖍️ Extracted text: {document_id}")
        append_log(document_id, pdf=True, text=True)

        del rendered
        del converter

        return {
            "id": document_id,
            "text": text,
            "source": "Domsdatabasen",
            "created": format_created(verdict_date),
            "added": date.today().isoformat(),
            "metadata": {},
        }
    except Exception as e:
        append_log(document_id, pdf=True, text=False, error=str(e))
        return None
    finally:
        gc.collect()


# ----------------- Page Fetching ------------------


def fetch_case_page(page_num: int) -> tuple[list[dict], int]:
    url = f"https://domsdatabasen.dk/webapi/api/Case/advanced?sorting=VerdictDateDesc&page={page_num}&pageSize=100"
    response = retry(requests.post, url, headers=HEADERS, json={})
    data = response.json()

    document_entries = []
    for case in data.get("cases", []):
        for doc in case.get("documents", []):
            document_entries.append(
                {
                    "id": doc["id"],
                    "verdictDateTime": doc.get("verdictDateTime"),
                }
            )

    return document_entries, data.get("pageCount", 1)


# ----------------- Utilities ------------------


def format_created(verdict_date: str | None) -> str:
    if verdict_date:
        try:
            dt = datetime.fromisoformat(verdict_date)
            formatted = dt.date().isoformat()
            return f"{formatted}, {formatted}"
        except Exception:
            pass
    today = date.today().isoformat()
    return f"{today}, {today}"


# ----------------- Main Loop ------------------


def main():
    PDF_DIR.mkdir(exist_ok=True)
    init_csv()

    all_records = []
    page_num = 1
    _, total_pages = fetch_case_page(1)
    logger.info(f"📄 Total pages: {total_pages}")

    existing_ids = load_existing_ids()
    logger.info(f"🔄 Resuming with {len(existing_ids)} already processed IDs")

    while page_num <= total_pages:
        logger.info(f"\n🔎 Fetching page {page_num}/{total_pages}")

        try:
            doc_infos, _ = fetch_case_page(page_num)
        except Exception as e:
            logger.warning(f"❌ Failed to fetch page {page_num}: {e}")
            page_num += 1
            continue

        doc_infos = [doc for doc in doc_infos if doc["id"] not in existing_ids]

        # Extract text in parallel using multiprocessing
        with mp.Pool(
            processes=MAX_WORKERS, initializer=worker_init, maxtasksperchild=10
        ) as pool:
            results = pool.map(process_document, doc_infos)

        all_records.extend([r for r in results if r])

        if all_records:
            ds_new = Dataset.from_list(all_records)

            if PARQUET_FILE.exists():
                ds_old = Dataset.from_parquet(str(PARQUET_FILE))
                ds_old = cast(Dataset, ds_old)
                ds_combined = concatenate_datasets([ds_old, ds_new])
            else:
                ds_combined = ds_new

            ds_combined.to_parquet(str(PARQUET_FILE))
            logger.info(f"📦 Appended {len(all_records)} records to {PARQUET_FILE}")
            existing_ids.update([r["id"] for r in all_records])
            all_records.clear()

        page_num += 1

    ds = Dataset.from_parquet(str(PARQUET_FILE))
    ds = cast(Dataset, ds)
    ds = remove_empty_texts(ds)
    ds = remove_duplicate_text(ds)
    ds = add_token_count(ds)
    ds = ensure_column_order(ds)

    ds.to_parquet(str(PARQUET_FILE))


if __name__ == "__main__":
    # Ensure threads don't contend
    os.environ["MKL_DYNAMIC"] = "FALSE"
    os.environ["OMP_DYNAMIC"] = "FALSE"
    os.environ["OMP_NUM_THREADS"] = "2"  # Avoid OpenMP issues with multiprocessing
    os.environ["OPENBLAS_NUM_THREADS"] = "2"
    os.environ["MKL_NUM_THREADS"] = "2"
    os.environ["GRPC_VERBOSITY"] = "ERROR"
    os.environ["GLOG_minloglevel"] = "2"
    os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = (
        "1"  # Transformers uses .isin for a simple op, which is not supported on MPS
    )
    os.environ["IN_STREAMLIT"] = "true"  # Avoid multiprocessing inside surya

    mp.set_start_method("spawn", force=True)
    log_path = Path(__file__).parent / "domsdatabasen.log"
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s - %(levelname)s - %(message)s",
        handlers=[
            logging.StreamHandler(),
            logging.FileHandler(log_path),
        ],
    )
    main()
data/domsdatabasen/descriptive_stats.json
DELETED
@@ -1,9 +0,0 @@
{
  "number_of_samples": 8468,
  "number_of_tokens": 86353024,
  "min_length_tokens": 15,
  "max_length_tokens": 1008826,
  "number_of_characters": 256036077,
  "min_length_characters": 35,
  "max_length_characters": 3021437
}
data/domsdatabasen/domsdatabasen.md
DELETED
@@ -1,119 +0,0 @@
---
pretty_name: Domsdatabasen.dk
language:
- da
license: other
license_name: Danish Copyright Law
size_categories:
- 10k-100k
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
source_datasets:
- danish-foundation-models/danish-gigaword
domains:
- Legal
---

# Dataset Card for Domsdatabasen.dk

<!-- START-SHORT DESCRIPTION -->
[Domsdatabasen.dk](https://domsdatabasen.dk/) is a public database containing selected judgments from the Danish courts.
<!-- END-SHORT DESCRIPTION -->

Launched in early 2022, the platform aims to increase transparency and public insight into the workings of the judiciary in Denmark. It is accessible to everyone – legal professionals, citizens, companies, and public authorities interested in Danish case law.

## Dataset Description

### Purpose and Scope
The main goal of the database is to support the principle of openness in the administration of justice. It offers users access to selected civil and criminal decisions, with an initial focus on rulings from the higher courts, such as:

- The Supreme Court (Højesteret)
- The High Courts (Landsretterne)
- The Maritime and Commercial Court (Sø- og Handelsretten)

Some rulings from the district courts (byretterne) are also included, particularly when they are part of a case string that has been appealed.
Over time, the database will expand in coverage and volume, especially as the court system transitions to new digital case management systems.

### Pseudonymization and Data Protection
All published rulings are pseudonymized to protect the privacy of individuals involved, in accordance with the EU General Data Protection Regulation (GDPR), the Danish Data Protection Act, and rules from the Danish Data Protection Agency.

Pseudonymization involves replacing personally identifiable information (e.g., names, CPR numbers) with general terms such as "the accused", "witness 1", etc. Additional data such as addresses or health-related details may be redacted or pseudonymized based on a case-specific evaluation.

Some roles and names are not pseudonymized, including:

- Judges from higher courts
- Legal representatives (lawyers)
- Author names in cited legal literature (unless directly involved in the case)
- Names in EU court decisions

Businesses involved in cases are typically not pseudonymized unless their name reveals personal information or constitutes a trade secret.

### Access and Development
Domsdatabasen is continuously being developed. As digitization progresses and technical workflows improve, the number of published decisions is expected to grow. The judgments are published as full case strings, including decisions at multiple judicial levels, providing context and legal reasoning throughout the appeal process.

<!-- START-DESC-STATS -->
- **Number of samples**: 8.47K
- **Number of tokens (Llama 3)**: 86.35M
- **Average document length in tokens (min, max)**: 10.20K (15, 1.01M)
<!-- END-DESC-STATS -->

## Dataset Structure
An example from the dataset looks as follows.

<!-- START-SAMPLE -->
```py
{
  "id": "11389",
  "text": "## **Ikke grundlag for varetægtsfængsling af hensyn til retshåndhævelsen**\n\nDer var ikke særligt bes[...]",
  "source": "Domsdatabasen",
  "added": "2025-07-04",
  "created": "2025-07-04, 2025-07-04",
  "token_count": 796
}
```

### Data Fields

An entry in the dataset consists of the following fields:

- `id` (`str`): A unique identifier for each document.
- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `added` (`str`): A date for when the document was added to this collection.
- `created` (`str`): A date range for when the document was originally created.
- `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
<!-- END-SAMPLE -->

## License Information
<details>
<summary>Danish Copyright Law</summary>
<p>
Danish Copyright law at https://www.retsinformation.dk/forms/r0710.aspx?id=164796 states

§ 9. Love, administrative forskrifter, retsafgørelser og lignende offentlige aktstykker er ikke genstand for ophavsret.

Stk. 2. Bestemmelsen i stk. 1 gælder ikke for værker, der fremtræder som selvstændige bidrag i de i stk. 1 nævnte aktstykker. Sådanne værker må dog gengives i forbindelse med aktstykket. Retten til videre udnyttelse afhænger af de i øvrigt gældende regler.

</p>
</details>

### Dataset Statistics

<!-- START-DATASET PLOTS -->
<p align="center">
<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
</p>
<!-- END-DATASET PLOTS -->

## Additional Information

**Extraction of text:** The documents downloaded from [domsdatabasen.dk](https://www.domsdatabasen.dk/) are PDFs. To extract the text from these, the `create.py` script uses the [marker-pdf](https://github.com/datalab-to/marker/tree/master) package.
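As a usage note on the extraction step mentioned above: the marker-pdf calls used in the removed `create.py` can be exercised on a single PDF roughly as follows (the file name is a placeholder):

```python
from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered

# Same API as in the removed create.py: build the models, convert one PDF, extract text.
converter = PdfConverter(artifact_dict=create_model_dict())
rendered = converter("document_11389.pdf")  # placeholder file name
text, _, _ = text_from_rendered(rendered)
print(text[:500])
```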
data/domsdatabasen/domsdatabasen.parquet
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:132f593c951564e56c262520116bd02eea193f10443b9d12305e130dde16ee99
size 123195077
data/domsdatabasen/images/dist_document_length.png
DELETED
data/enevaeldens_nyheder/create.py
DELETED
@@ -1,96 +0,0 @@
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "datasets",
#     "dynaword",
# ]
#
# [tool.uv.sources]
# dynaword = { git = "https://huggingface.co/datasets/danish-foundation-models/danish-dynaword" }
# ///

"""
Script for downloading and processing the dataset

Note: To run this script, you need to set `GIT_LFS_SKIP_SMUDGE=1` to be able to install dynaword:

```bash
GIT_LFS_SKIP_SMUDGE=1 uv run data/enevaeldens_nyheder/create.py
```
"""

import logging
from datetime import date
from pathlib import Path
from typing import Any, cast

from datasets import Dataset, load_dataset

from dynaword.process_dataset import (
    add_token_count,
    ensure_column_order,
    remove_duplicate_text,
    remove_empty_texts,
)

logger = logging.getLogger(__name__)

SOURCE = "enevaeldens_nyheder"


def reformat_samples(example: dict[str, Any]) -> dict[str, Any]:
    creation_date = example["date"]
    # Reformatting the date to YYYY-MM-DD format
    start = creation_date
    end = creation_date
    return {
        "id": f"{SOURCE}_{example['id']}",
        "text": example["text"],
        "source": SOURCE,
        "added": date.today().strftime("%Y-%m-%d"),
        "created": f"{start}, {end}",
    }


def main():
    dataset = load_dataset(
        "JohanHeinsen/ENO",
        split="train",
        revision="009f45ef63a1a41705781840807eb620f380d17d",
    )
    dataset = cast(Dataset, dataset)

    logger.info("Removing 1 word texts")
    len_ds = len(dataset)
    dataset = dataset.filter(
        lambda x: len(x["text"].split()) >= 2
    )  # require at least 2 word in the text
    logger.info(f"Filtered {len_ds - len(dataset)} 1 word examples")

    logger.info("Filtering out texts with predicted word acuracy < 0.7")
    dataset = dataset.filter(lambda x: x["pwa"] >= 0.7)
    logger.info(f"Filtered {len_ds - len(dataset)} low accuracy examples")

    dataset = dataset.map(reformat_samples)

    dataset = remove_empty_texts(dataset)  # remove rows with empty text
    dataset = remove_duplicate_text(dataset)  # remove rows with duplicate text
    dataset = add_token_count(dataset)
    dataset = ensure_column_order(dataset)

    dataset.to_parquet(
        Path(__file__).parent / f"{SOURCE}.parquet",
    )


if __name__ == "__main__":
    log_path = Path(__file__).parent / f"{SOURCE}.log"
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s - %(levelname)s - %(message)s",
        handlers=[
            logging.StreamHandler(),
            logging.FileHandler(log_path),
        ],
    )
    main()
data/enevaeldens_nyheder/descriptive_stats.json
DELETED
@@ -1,9 +0,0 @@
{
  "number_of_samples": 4593228,
  "number_of_tokens": 1034308344,
  "min_length_tokens": 3,
  "max_length_tokens": 37294,
  "number_of_characters": 2889445364,
  "min_length_characters": 4,
  "max_length_characters": 111182
}
data/enevaeldens_nyheder/enevaeldens_nyheder.log
DELETED
@@ -1,9 +0,0 @@
2025-08-05 13:09:29,533 - INFO - Removing 1 word texts
2025-08-05 13:10:14,475 - INFO - Filtered 42635 1 word examples
2025-08-05 13:10:14,475 - INFO - Filtering out texts with predicted word acuracy < 0.7
2025-08-05 13:11:24,300 - INFO - Filtered 76655 low accuracy examples
2025-08-05 13:15:33,389 - INFO - Removing empty texts
2025-08-05 13:15:50,876 - INFO - Filtered 0 empty examples
2025-08-05 13:15:50,876 - INFO - Removing duplicate texts
2025-08-05 13:19:48,194 - INFO - Filtered 161196 duplicate examples
2025-08-05 13:32:46,967 - INFO - Ensuring columns are in the correct order and are present
data/enevaeldens_nyheder/enevaeldens_nyheder.md
DELETED
@@ -1,172 +0,0 @@
---
pretty_name: "Enev\xE6ldens Nyheder Online"
language:
- da
license: cc-by-sa-4.0
license_name: CC-BY-SA 4.0
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
domains:
- News
source_datasets:
- JohanHeinsen/ENO
---

# Dataset Card for Enevældens Nyheder Online

![](images/coverage-of-newspapers.jpeg)
<!-- START-SHORT DESCRIPTION -->
High quality OCR'd texts from Danish and Norwegian newspapers during the period of constitutional absolutism in Denmark (1660–1849).
<!-- END-SHORT DESCRIPTION -->

During the eighteenth century, newspapers became a ubiquitous medium. They informed a relatively large reading public about everything from high politics to the mundanities of local markets.
The dataset was created by re-processing over 550.000 digital images scanned from microfilm and held in the Danish Royal Library's collection. They had initially been OCR-processed, but the results were generally unreadable. ENO reprocessed the images using tailored pylaia models in Transkribus. The OCR-quality is generally high, despite the difficult state of the original images.
The newspaper editions have been segmented into individual texts using a model designed by the project team. Such texts are the base entity of the dataset. They include mainly two genres: news items and advertisements.

## Dataset Description

<!-- START-DESC-STATS -->
- **Number of samples**: 4.59M
- **Number of tokens (Llama 3)**: 1.03B
- **Average document length in tokens (min, max)**: 225.1811458085686 (3, 37.29K)
<!-- END-DESC-STATS -->

* **Curated by**: Johan Heinsen and Camilla Bøgeskov, Historisk Datalaboratorium, Aalborg University. With assistance from Sofus Landor Dam, Anders Birkemose, Kamilla Matthiassen and Louise Karoline Sort.
* **Funded by**: MASSHINE, Aalborg University.

The dataset contains a wide range of newspapers. The total distribution can be studied here. They cover most of Denmark as well as the three oldest newspapers of Norway, running until the separation of the Danish-Norwegian conglomerate in 1814. This dataset represents version 0.9 (updated 5th of August 2025).

### Dataset Sources

The sources of the dataset can be studied in more detail at the [project website](https://hislab.quarto.pub/eno/).
Most of the original image material is available in [LOAR](https://loar.kb.dk/handle/1902/7803) – a data repository of the Danish Royal Library. The Norwegian material was downloaded via the API of Nettbiblioteket. The scans of Nyeste Skilderie af Kjøbenhavn were taken from the Internet Archive repository of [Niels Jensen](https://archive.org/details/@uforbederlig). The scans for Politivennen stem from [Københavns Biblioteker](https://bibliotek.kk.dk/din/bag-om-kobenhavn/politivennen). Some early newspapers come from recent scans made available to the project by the Danish Royal Library. These are not yet available online.

## Uses

This dataset represents an effort to enable analysis of Denmark-Norway in the seventeenth, eighteenth, and nineteenth centuries. The data can be used to study and model sentiments, political and cultural currents, and the minutiae of urban life.

In addition, this dataset is part of Danish Dynaword, a collection of datasets intended for training language models, thereby integrating Danish cultural heritage into the next generation of digital technologies.

## Dataset Structure
An example from the dataset looks as follows.

<!-- START-SAMPLE -->
```py
{
  "id": "enevaeldens_nyheder_aalborg1767_1767-01-02_1000001",
  "text": "Et Menneske er skabt ey for sig selv allene: Hvert Lem paa Legemet det heele tiene maae, En Stolpes [...]",
  "source": "enevaeldens_nyheder",
  "added": "2025-08-05",
  "created": "1767-01-02, 1767-01-02",
  "token_count": 2377
}
```

### Data Fields

An entry in the dataset consists of the following fields:

- `id` (`str`): A unique identifier for each document.
- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `added` (`str`): A date for when the document was added to this collection.
- `created` (`str`): A date range for when the document was originally created.
- `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
<!-- END-SAMPLE -->

## Dataset Creation

### Curation Rationale

The newspapers in the dataset generally represent the longest-running newspaper series in the Danish and Norwegian libraries. We prioritised long-running newspapers to enable historical analysis of changes over time. As historians, this was our initial ambition: to allow us to get quality serial text data.
We also prioritised geographical diversity, representing different regions of Denmark-Norway. Of course, this varies over time, as newspapers were most common in Copenhagen until the late eighteenth century.
Since the newspapers of Denmark's Caribbean colonies were primarily in English, they are not included. The text recognition model designed for the project struggles with English text.
Besides long-running series, we also included a few smaller newspaper series, mainly with an eye towards diversity of subject matter. These include Politivennen, which was concerned with very local news from Copenhagen and carried a lot of reader contributions, offering a unique insight into urban sentiments at the time. A similar inclusion was made with Jyllandsposten (of 1838), which was defined by a somewhat radical rural horizon.

As a rule of thumb, publications have been digitised in total, as they exist in their respective collections.
This means that they sometimes include appendices and sometimes do not, depending on whether these exist. Holes in the dataset mirror holes in the archival collections.
The one exception to this rule is the newspaper Københavns Adresseavis. This advertisement paper has survived continuously from its inception in 1759, but from 1804 onwards, it is only included here with samples every fifth year.
The reason for sampling is a combination of the massive extent of this advertisement paper and the poor condition of the digital images available for this specific period. Combined, this meant that the results of the text recognition process were not entirely satisfying relative to the resources necessary for the effort. Therefore, we decided to prioritize other publications that would yield better quality text.

Most publications contain title page marginalia (date, title, etc.). Because these were set with large ornamental types, they are typically recognised with much less accuracy than the regular text. We are currently working on implementing a step in the workflow to identify and filter out these elements.

### Data Collection and Processing

The text recognition model used to create the dataset is available via [Transkribus](https://app.transkribus.org/models/public/text/danish-newspapers-1750-1850). A description of the text segmentation process can be found [here](https://hislab.quarto.pub/eno/dokumentation.html). Besides segmentation into separate news items / advertisements, no further processing of the text has taken place. We are currently experimenting with automated error correction using decoder-models.

For Danish Dynaword we apply additional filtering including:

- 1) Removing 1 word documents (using a whitespace split)
- 2) Removing documents with a PWA < 0.7

PWA is defined as:

> A predicted word accuracy [PWA] based on a dictionary consisting of words from literary works, personal names and place names from the census of 1787, and a manually curated list of common words that are present in the material, but not represented in canonical literature. This is an estimate. In general we advise that you filter the dataset on this variable in case of using the material for language modelling. This will also filter out texts in other languages than Danish.
>
> source: [JohanHeinsen/ENO](https://huggingface.co/datasets/JohanHeinsen/ENO#dataset-structure)

Below you see 10 examples of documents (truncated to 200 characters) filtered out due to the PWA filtering:

```
['Under Staders Segl. nespil.',
'Frisk Selter=, Permonter=, Bitter, og FachingerVand bekommes paa Løveapotheket.',
'Søesyglinsk, Christoph. Auf Anordning der Liquidations=Commission, den ten August 1834. (Ges.) Mitglied der Commission, Regierungsrath: Pestof. Stellvertretender Secretair. Gabriel Ostrowski.',
'J de Reussiske Koge: Bordelummer Seil.',
'Scriptores historiae Byzantinae vird bei uns un entgeltlich ansgegeben. Anch sind bei und fortige Bogen dieses Werkes in den verschiedenen Ansgeden auf Druck., Schreibe und Velinpapier niedergelegt, z',
'Gammel Conjac. Potten.',
'NOTIFICATION. Von der 5ten Classe, der 7ten Königl. allen privilegitten Kopenhagner Lotteren, deren Ziehung den 17ten Passati geendiget worden, werden die Gewinne den 8ten hujus und følgende Werkeltag',
'Jm Verlag des Unterzeichneten har die Presse verlassen: Uever dis religiøse Bestimmung der Jugend, in einigen Predigten von K. C. von Gehren. Jn dieser Samlung sind følgende Gegenstande behandelt: 1) ',
"ditoyens fortund, ) vous qui, loin des combats, Pouves jouir en pair dans vos heureur ClimatsDes trefors annuel d'unne moisson fertileDont il plait aux saisons de couronner votre ile, Vous, diseje, a ",
'AVERTISSEMENTS. Ausser der am Seelandischen Langericht geschehene Proclamation, wird auch hiedurch zu dreien mahlen kund gethan, das die Theilungs Berichtigung nach dem menland Johann Georg Kanneworff']
```

### Dataset Statistics

<!-- START-DATASET PLOTS -->
<p align="center">
<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
</p>
<!-- END-DATASET PLOTS -->

The coverage of the newspapers included can be seen here:

![](images/coverage-of-newspapers.jpeg)

The distribution of texts pr. year is as follows:

![](images/distribution-pr-year.jpeg)

## Personal and Sensitive Information

Due to the historical nature of the data, ENO contains no personal or sensitive information.

## Bias, Risks, and Limitations

The data reflects the time of its initial creation. This means that it mirrors and describes a deeply hierarchical society that was structured by deep-seated biases and forms of discrimination that are alien even to some of the worst among the living today. For example, the material contains racist language in describing contemporary phenomena such as the Transatlantic slave trade and the persecution of Jewish diasporas. Use cases which might relay or perpetuate such sentiments should be aware of these risks. It is a historical text corpus, warts and all.

Please also note that, although the newspapers are all in Danish, they do contain intermittent passages in German and Latin.

Some advertisements were reprinted verbatim. The dataset, therefore, includes occasional duplicate texts.

### License Information

The dataset is licensed under CC BY-SA 4.0. Please note that this license only pertains to the digitised text and dataset curation, not the original images. The original images of all material stemming from The Danish Royal Library, Nettbiblioteket, Københavns Biblioteker as well as the scans of Nyeste Skilderie af Kiøbenhavn made available by Niels Jensen are all in the public domain.

## More Information

For questions related to the dataset, curation, and annotation please contact Johan Heinsen, Aalborg University <[email protected]>
data/enevaeldens_nyheder/enevaeldens_nyheder.parquet
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f0ccbf865189f37c9735e001219ef85da11ea3b5849621993a995f138c7f51d
size 1856788258
data/enevaeldens_nyheder/images/coverage-of-newspapers.jpeg
DELETED
data/enevaeldens_nyheder/images/dist_document_length.png
DELETED
data/enevaeldens_nyheder/images/distribution-pr-year.jpeg
DELETED