update-retsinformationdk

#72 opened by kris927b
CHANGELOG.md CHANGED
@@ -5,6 +5,24 @@ All notable changes to this project will be documented in this file.
 
  The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
+ ## [v1.2.3] - 2025-06-30
+
+ ### Added
+
+ - Added a `create.py` script for the `retsinformationdk` dataset.
+   - Resulted in a boost in tokens and documents.
+
+ ### Changed
+
+ - Did a full stats update on the datasets, resulting in minor changes to a few datasheets.
+
+ ## [v1.2.2] - 2025-06-26
+
+ ### Added
+
+ - Added the new `scrape_hovedstaden` dataset.
+ - Added a new domain type, `Medical`.
+
  ## [v1.2.1] - 2025-06-24
 
  ### Fixed
README.md CHANGED
@@ -141,6 +141,10 @@ configs:
    data_files:
    - split: train
      path: data/nota/*.parquet
+ - config_name: scrape_hovedstaden
+   data_files:
+   - split: train
+     path: data/scrape_hovedstaden/*.parquet
  annotations_creators:
  - no-annotation
  language_creators:
@@ -174,7 +178,7 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
  <!-- START README TABLE -->
  | | |
  | ------------ | ------------------------------------------------------------------------------------------------------------- |
- | **Version** | 1.2.1 ([Changelog](/CHANGELOG.md)) |
+ | **Version** | 1.2.3 ([Changelog](/CHANGELOG.md)) |
  | **Language** | dan, dansk, Danish |
  | **License** | Openly licensed, see the respective dataset |
  | **Models** | For models trained on this data, see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
@@ -211,9 +215,9 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
 
  <!-- START-DESC-STATS -->
  - **Language**: dan, dansk, Danish
- - **Number of samples**: 891.35K
- - **Number of tokens (Llama 3)**: 4.37B
- - **Average document length (characters)**: 15083.99
+ - **Number of samples**: 951.89K
+ - **Number of tokens (Llama 3)**: 4.70B
+ - **Average document length (characters)**: 15168.87
  <!-- END-DESC-STATS -->
 
 
@@ -311,43 +315,44 @@ This data generally contains no annotation besides the metadata attached to each
  Below follows a brief overview of the sources in the corpus along with their individual license.
 
  <!-- START-MAIN TABLE -->
- | Source | Description | Domain | N. Tokens | License |
- |:-------|:------------|:-------|:----------|:--------|
- | [cellar] | The official digital repository for European Union legal documents and open data | Legal | 1.15B | [CC-BY-SA 4.0] |
- | [ncc_books] | Danish books extracted from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC) derived from OCR | Books | 531.97M | [CC-0] |
- | [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk), the official legal information system of Denmark | Legal | 516.35M | [Danish Copyright Law] |
- | [hest] | Samples from the Danish debate forum www.heste-nettet.dk | Social Media | 389.32M | [CC-0] |
- | [ncc_parliament] | Collections from the Norwegian parliament in Danish, extracted from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC) derived from OCR | Other | 338.87M | [NLOD 2.0] |
- | [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | Conversation | 271.60M | [CC-0] |
- | [ai-aktindsigt] | Multiple web scrapes from municipality websites collected as a part of the [AI-aktindsigt](https://ai-aktindsigt.dk) project | Web | 139.23M | [Apache 2.0] |
- | [miljoeportalen] | Data from [Danmarks Miljøportalen](https://www.miljoeportal.dk/om-danmarks-miljoeportal/) (Denmark's Environment Portal) | Web | 127.38M | [CC-0] |
- | [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | Legal | 122.11M | [CC-0] |
- | [wiki] | The Danish subsection of [Wikipedia](https://en.wikipedia.org/wiki/Main_Page) | Encyclopedic | 122.00M | [CC-0] |
- | [ft] | Records from all meetings of the Danish parliament (Folketinget) in the parliament hall | Conversation | 114.09M | [CC-0] |
- | [memo] | The MeMo corpus comprising almost all Danish novels from the period 1870-1899, known as the Modern Breakthrough | Books | 113.74M | [CC-BY-SA 4.0] |
- | [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | Conversation | 100.84M | [CC-0] |
- | [adl] | Danish literature from 1700-2023 from the [Archive for Danish Literature](https://tekster.kb.dk/text?editorial=no&f%5Bsubcollection_ssi%5D%5B%5D=adl&match=one&search_field=Alt) (ADL) | Books | 58.49M | [CC-0] |
- | [retspraksis] | Case law or judicial practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | Legal | 56.26M | [CC-0] |
- | [fm-udgivelser] | The official publication series of the Danish Ministry of Finance containing economic analyses, budget proposals, and fiscal policy documents | Legal | 50.34M | [CC-BY-SA 4.0] |
- | [nordjyllandnews] | Articles from the Danish newspaper [TV2 Nord](https://www.tv2nord.dk) | News | 37.90M | [CC-0] |
- | [eur-lex-sum-da] | The Danish subsection of EUR-lex SUM consisting of EU legislation paired with professionally written summaries | Legal | 31.37M | [CC-BY-SA 4.0] |
- | [ncc_maalfrid] | Danish content from Norwegian institutions' websites | Web | 29.26M | [NLOD 2.0] |
- | [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | News | 21.67M | [CC-BY-SA 4.0] |
- | [danske-taler] | Danish speeches from [dansketaler.dk](https://www.dansketaler.dk) | Conversation | 8.81M | [CC-0] |
- | [nota] | The text-only part of the [Nota lyd- og tekstdata](https://sprogteknologi.dk/dataset/nota-lyd-og-tekstdata) dataset | Readaloud | 7.30M | [CC-0] |
- | [gutenberg] | The Danish subsection of Project [Gutenberg](https://www.gutenberg.org) | Books | 6.76M | [Gutenberg] |
- | [wikibooks] | The Danish subsection of [Wikibooks](https://www.wikibooks.org) | Books | 6.24M | [CC-0] |
- | [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | Encyclopedic | 5.34M | [CC-0] |
- | [jvj] | The works of the Danish author and poet [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | Books | 3.55M | [CC-BY-SA 4.0] |
- | [spont] | Conversational samples collected as a part of research projects at Aarhus University | Conversation | 1.56M | [CC-0] |
- | [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | Other | 1.48M | [DanNet 1.0] |
- | [relig] | Danish religious texts from 1700-2022 | Books | 1.24M | [CC-0] |
- | [ncc_newspaper] | OCR'd newspapers derived from [NCC](https://huggingface.co/datasets/NbAiLab/NCC) | News | 1.05M | [CC-0] |
- | [botxt] | The Bornholmsk Ordbog dictionary project | Dialect | 847.97K | [CC-0] |
- | [naat] | Danish speeches from 1930-2022 | Conversation | 286.68K | [CC-0] |
- | [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | Other | 185.45K | [CC-BY-SA 4.0] |
- | [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | Other | 52.02K | [CC-0] |
- | **Total** | | | 4.37B | |
+ | Source | Description | Domain | N. Tokens | License |
+ |:-------|:------------|:-------|:----------|:--------|
+ | [cellar] | The official digital repository for European Union legal documents and open data | Legal | 1.15B | [CC-BY-SA 4.0] |
+ | [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk), the official legal information system of Denmark | Legal | 818.25M | [Danish Copyright Law] |
+ | [ncc_books] | Danish books extracted from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC) derived from OCR | Books | 531.97M | [CC-0] |
+ | [hest] | Samples from the Danish debate forum www.heste-nettet.dk | Social Media | 389.32M | [CC-0] |
+ | [ncc_parliament] | Collections from the Norwegian parliament in Danish, extracted from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC) derived from OCR | Other | 338.87M | [NLOD 2.0] |
+ | [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | Conversation | 271.60M | [CC-0] |
+ | [ai-aktindsigt] | Multiple web scrapes from municipality websites collected as a part of the [AI-aktindsigt](https://ai-aktindsigt.dk) project | Web | 139.23M | [Apache 2.0] |
+ | [miljoeportalen] | Data from [Danmarks Miljøportalen](https://www.miljoeportal.dk/om-danmarks-miljoeportal/) (Denmark's Environment Portal) | Web | 127.38M | [CC-0] |
+ | [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | Legal | 122.11M | [CC-0] |
+ | [wiki] | The Danish subsection of [Wikipedia](https://en.wikipedia.org/wiki/Main_Page) | Encyclopedic | 122.00M | [CC-0] |
+ | [ft] | Records from all meetings of the Danish parliament (Folketinget) in the parliament hall | Conversation | 114.09M | [CC-0] |
+ | [memo] | The MeMo corpus comprising almost all Danish novels from the period 1870-1899, known as the Modern Breakthrough | Books | 113.74M | [CC-BY-SA 4.0] |
+ | [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | Conversation | 100.84M | [CC-0] |
+ | [adl] | Danish literature from 1700-2023 from the [Archive for Danish Literature](https://tekster.kb.dk/text?editorial=no&f%5Bsubcollection_ssi%5D%5B%5D=adl&match=one&search_field=Alt) (ADL) | Books | 58.49M | [CC-0] |
+ | [retspraksis] | Case law or judicial practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | Legal | 56.26M | [CC-0] |
+ | [fm-udgivelser] | The official publication series of the Danish Ministry of Finance containing economic analyses, budget proposals, and fiscal policy documents | Legal | 50.34M | [CC-BY-SA 4.0] |
+ | [nordjyllandnews] | Articles from the Danish newspaper [TV2 Nord](https://www.tv2nord.dk) | News | 37.90M | [CC-0] |
+ | [eur-lex-sum-da] | The Danish subsection of EUR-lex SUM consisting of EU legislation paired with professionally written summaries | Legal | 31.37M | [CC-BY-SA 4.0] |
+ | [ncc_maalfrid] | Danish content from Norwegian institutions' websites | Web | 29.26M | [NLOD 2.0] |
+ | [scrape_hovedstaden] | Guidelines and informational documents for healthcare professionals from the Capital Region | Medical | 27.07M | [CC-0] |
+ | [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | News | 21.67M | [CC-BY-SA 4.0] |
+ | [danske-taler] | Danish speeches from [dansketaler.dk](https://www.dansketaler.dk) | Conversation | 8.72M | [CC-0] |
+ | [nota] | The text-only part of the [Nota lyd- og tekstdata](https://sprogteknologi.dk/dataset/nota-lyd-og-tekstdata) dataset | Readaloud | 7.30M | [CC-0] |
+ | [gutenberg] | The Danish subsection of Project [Gutenberg](https://www.gutenberg.org) | Books | 6.76M | [Gutenberg] |
+ | [wikibooks] | The Danish subsection of [Wikibooks](https://www.wikibooks.org) | Books | 6.24M | [CC-0] |
+ | [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | Encyclopedic | 5.34M | [CC-0] |
+ | [jvj] | The works of the Danish author and poet [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | Books | 3.55M | [CC-BY-SA 4.0] |
+ | [spont] | Conversational samples collected as a part of research projects at Aarhus University | Conversation | 1.56M | [CC-0] |
+ | [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | Other | 1.48M | [DanNet 1.0] |
+ | [relig] | Danish religious texts from 1700-2022 | Books | 1.24M | [CC-0] |
+ | [ncc_newspaper] | OCR'd newspapers derived from [NCC](https://huggingface.co/datasets/NbAiLab/NCC) | News | 1.05M | [CC-0] |
+ | [botxt] | The Bornholmsk Ordbog dictionary project | Dialect | 847.97K | [CC-0] |
+ | [naat] | Danish speeches from 1930-2022 | Conversation | 286.68K | [CC-0] |
+ | [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | Other | 185.45K | [CC-BY-SA 4.0] |
+ | [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | Other | 52.02K | [CC-0] |
+ | **Total** | | | 4.70B | |
 
  [ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
  [cellar]: data/cellar/cellar.md
@@ -383,6 +388,7 @@ Below follows a brief overview of the sources in the corpus along with their individual license.
  [nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
  [relig]: data/relig/relig.md
  [nota]: data/nota/nota.md
+ [scrape_hovedstaden]: data/scrape_hovedstaden/scrape_hovedstaden.md
 
 
  [CC-0]: https://creativecommons.org/publicdomain/zero/1.0/legalcode.en
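With the `scrape_hovedstaden` config registered in the YAML above, the new subset can be pulled on its own. A minimal sketch, assuming the `danish-foundation-models/danish-dynaword` repo id that the scripts in this PR pin via `tool.uv.sources`:

```py
from datasets import load_dataset

# Load only the new config; it maps to data/scrape_hovedstaden/*.parquet.
ds = load_dataset(
    "danish-foundation-models/danish-dynaword",
    name="scrape_hovedstaden",
    split="train",
)
print(ds[0]["id"], ds[0]["token_count"])
```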
data/danske-taler/danske-taler.md CHANGED
@@ -40,8 +40,8 @@ Learn more about danske taler by reading their [about us](https://www.dansketale
  - **Language**: dan, dansk, Danish
  - **Domains**: Conversation, Speeches, Spoken
  - **Number of samples**: 2.91K
- - **Number of tokens (Llama 3)**: 8.81M
- - **Average document length (characters)**: 9228.65
+ - **Number of tokens (Llama 3)**: 8.72M
+ - **Average document length (characters)**: 9140.42
  <!-- END-DESC-STATS -->
 
 
@@ -53,7 +53,7 @@ An example from the dataset looks as follows.
  ```py
  {
    "id": "danske-taler_281",
-   "text": "Tyske landsmænd og -kvinder !\n\nSyv år er kort tid, en brøkdel af en enkel menneskelig normaltilværel[...]",
+   "text": "Tyske landsmænd og -kvinder !\nSyv år er kort tid, en brøkdel af en enkel menneskelig normaltilværels[...]",
    "source": "danske-taler",
    "added": "2025-06-24",
    "created": "1940-01-30, 1940-01-30",
data/danske-taler/descriptive_stats.json CHANGED
@@ -1,6 +1,6 @@
  {
    "number_of_samples": 2912,
-   "average_document_length": 9228.645260989011,
-   "number_of_tokens": 8809004,
-   "revision": "8d056aba9953ef0cf4c402ccb9deff745d8307af"
+   "average_document_length": 9140.421703296703,
+   "number_of_tokens": 8723951,
+   "revision": "bcb374cb39c593a23e26534d5f2f182dee3edceb"
  }
data/danske-taler/images/dist_document_length.png CHANGED

Git LFS Details (before)

  • SHA256: eea9882f1b9cc4bc36a728144f1e95c55337ac67b6c4f3c67a36d34ba0a8fd64
  • Pointer size: 131 Bytes
  • Size of remote file: 560 kB

Git LFS Details (after)

  • SHA256: 8a6cc3946783f2d8e4725e50acc17b4ffbc84c38bb521253a5c2dca9087aa34d
  • Pointer size: 131 Bytes
  • Size of remote file: 553 kB
data/ncc_newspaper/ncc_newspaper.md CHANGED
@@ -25,6 +25,7 @@ The Norwegian Colossal Corpus is a collection of multiple smaller Norwegian corp
 
  <!-- START-DESC-STATS -->
  - **Language**: dan, dansk, Danish
+ - **Domains**: News
  - **Number of samples**: 5.37K
  - **Number of tokens (Llama 3)**: 1.05M
  - **Average document length (characters)**: 571.69
data/retsinformationdk/create.py ADDED
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "datasets==3.2.0",
#     "pandas",
#     "requests",
#     "trafilatura",
#     "dynaword"
# ]
# [tool.uv.sources]
# dynaword = { git = "https://huggingface.co/datasets/danish-foundation-models/danish-dynaword", rev = "00e7f2aee7f7ad2da423419f77ecbb9c0536de0d" }
# ///

import logging
from datetime import date, datetime
from io import StringIO
from pathlib import Path

import pandas as pd
import requests
from datasets import Dataset
from requests.adapters import HTTPAdapter
from tqdm import tqdm
from trafilatura import extract
from urllib3 import Retry

from dynaword.process_dataset import (
    add_token_count,
    ensure_column_order,
    remove_duplicate_text,
    remove_empty_texts,
)

TMP_DIR = Path(__file__).parent / "tmp"

BASE_URL = "https://www.retsinformation.dk/api/document/eli"

logger = logging.getLogger(__name__)
today = date.today()


def create_session_with_retries(retries: int = 2, backoff_factor: float = 0.5) -> requests.Session:
    """Create a session that retries transient server errors with backoff."""
    session = requests.Session()
    retry_strategy = Retry(
        total=retries,
        backoff_factor=backoff_factor,
        status_forcelist=[500, 502, 503, 504],
        allowed_methods=["GET"],
        respect_retry_after_header=True,
    )
    adapter = HTTPAdapter(max_retries=retry_strategy)
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session


def fetch_document_list() -> pd.DataFrame:
    """Fetch the document index, reusing a cached CSV if it is younger than ~6 months."""
    download = True
    df = pd.DataFrame()

    # Reuse the newest cached CSV (file names are ISO dates) if it is recent enough.
    files = sorted(TMP_DIR.glob("*.csv"), reverse=True) if TMP_DIR.exists() else []
    if files:
        file_date = datetime.strptime(files[0].stem, "%Y-%m-%d").date()
        if (today - file_date).days < 180:
            download = False
            df = pd.read_csv(files[0])

    if download:
        logger.info("Downloading list of files from Retsinformation.dk")
        response = requests.get(
            "https://www.retsinformation.dk/api/documentsearch/csv?dt=10&dt=1480&dt=20&dt=30&dt=40&dt=50&dt=90&dt=120&dt=270&dt=60&dt=100&dt=80&dt=110&dt=130&dt=140&dt=150&dt=160&dt=170&dt=180&dt=200&dt=210&dt=220&dt=1510&dt=1490&dt=-10&dt=230&dt=240&dt=250&dt=260&dt=980&dt=360&dt=400&dt=380&dt=420&dt=1530&dt=440&dt=450&dt=430&dt=1540&dt=460&dt=410&dt=370&dt=480&dt=390&dt=500&dt=510&dt=520&dt=490&dt=300&dt=310&dt=320&dt=330&dt=340&dt=350&o=40"
        )
        response.raise_for_status()  # raise for bad responses

        # The endpoint returns a UTF-16 encoded, semicolon-separated CSV.
        csv_content = response.content.decode("utf-16", errors="replace")
        logger.info("Downloaded list of documents")

        df = pd.read_csv(StringIO(csv_content), sep=";")

        TMP_DIR.mkdir(parents=True, exist_ok=True)
        df.to_csv(TMP_DIR / (today.strftime("%Y-%m-%d") + ".csv"), index=False)

    return df[
        [
            "DokumentType",
            "DokumentId",
            "Titel",
            "Ressort",
            "Historisk",
            "PubliceretTidspunkt",
            "EliUrl",
        ]
    ]


def fetch_document(doc_info: pd.Series, session: requests.Session) -> dict:
    """Fetch a single document from the ELI API as JSON."""
    url = BASE_URL + doc_info["EliUrl"].strip().split("eli")[1]

    response = session.post(
        url,
        headers={
            "Accept": "application/json",
            "Content-Type": "application/json",
        },
        json={},
    )
    response.raise_for_status()

    return response.json()[0]


def main():
    save_path = Path(__file__).parent / "retsinformationdk.parquet"
    documents = fetch_document_list()

    logger.info(f"Found {len(documents)} documents from retsinformationdk")

    session = create_session_with_retries()
    docs = []
    for _, doc_info in tqdm(documents.iterrows(), total=len(documents)):
        if doc_info["Historisk"]:  # skip historical (superseded) documents
            continue
        try:
            doc = fetch_document(doc_info, session)
            text = extract(doc["documentHtml"], output_format="markdown")
            published = date.fromisoformat(str(doc_info["PubliceretTidspunkt"])).strftime("%Y-%m-%d")
            docs.append(
                {
                    "id": doc_info["DokumentId"],
                    "text": text if text else "",
                    "source": "retsinformationdk",
                    "added": today.strftime("%Y-%m-%d"),
                    "created": f"{published}, {published}",
                }
            )
        except Exception as e:
            logger.error(f"Ran into error: {e}")
            logger.error(f"Skipping doc {doc_info['DokumentId']}")

    ds = Dataset.from_list(docs)

    # quality checks and processing
    ds = remove_empty_texts(ds)
    ds = remove_duplicate_text(ds)
    ds = add_token_count(ds)
    ds = ensure_column_order(ds)

    ds.to_parquet(save_path)


if __name__ == "__main__":
    log_path = Path(__file__).parent / "retsinformationdk.log"
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s - %(levelname)s - %(message)s",
        handlers=[
            logging.StreamHandler(),
            logging.FileHandler(log_path),
        ],
    )
    main()
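After a run, the freshly written parquet can be spot-checked against the datasheet numbers. A minimal sketch, with the path taken from `save_path` above and the expected figures from the updated `descriptive_stats.json` in this PR:

```py
from datasets import Dataset

# Load the parquet written by create.py and compare with descriptive_stats.json.
ds = Dataset.from_parquet("data/retsinformationdk/retsinformationdk.parquet")
print(len(ds))                 # expected: 100524 samples
print(sum(ds["token_count"]))  # expected: 818252220 tokens (Llama 3)
```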
data/retsinformationdk/descriptive_stats.json CHANGED
@@ -1,6 +1,6 @@
  {
-   "number_of_samples": 63979,
-   "average_document_length": 22262.417246283938,
-   "number_of_tokens": 516352722,
-   "revision": "1546256ca9562ecef403e433276c36770859089e"
+   "number_of_samples": 100524,
+   "average_document_length": 23265.030191794995,
+   "number_of_tokens": 818252220,
+   "revision": "2c91001b440e33497c34fbfa9b40dfffffa25620"
  }
data/retsinformationdk/images/dist_document_length.png CHANGED

Git LFS Details (before)

  • SHA256: 310522bbd483fb50794c9bca4975c2cad2ad3c3ef5be7a73252222213bcd9a3b
  • Pointer size: 131 Bytes
  • Size of remote file: 573 kB

Git LFS Details (after)

  • SHA256: 0c7be68f9042207eae602d768b6cb499a6322e8681e925271df205ba27865a2a
  • Pointer size: 131 Bytes
  • Size of remote file: 584 kB
data/retsinformationdk/retsinformationdk.md CHANGED
@@ -39,9 +39,9 @@ It serves as a central repository for Danish legislation, administrative regulat
  <!-- START-DESC-STATS -->
  - **Language**: dan, dansk, Danish
  - **Domains**: Legal
- - **Number of samples**: 63.98K
- - **Number of tokens (Llama 3)**: 516.35M
- - **Average document length (characters)**: 22262.42
+ - **Number of samples**: 100.52K
+ - **Number of tokens (Llama 3)**: 818.25M
+ - **Average document length (characters)**: 23265.03
  <!-- END-DESC-STATS -->
 
 
@@ -52,12 +52,12 @@ An example from the dataset looks as follows.
  <!-- START-SAMPLE -->
  ```py
  {
-   "id": "retsinformationdk_173889",
-   "text": "Den fulde tekst Pressenævnets kendelse i sag nr. 15-70-00822\nResumé\nForeningen for Skånsomt Kystfisk[...]",
+   "id": "AA014851",
+   "text": "Indsamlingsnævnets afgørelse i sag nr. 22-730-00015\n\nIndsamlingsnævnet fandt det kritisabelt, at Gad[...]",
    "source": "retsinformationdk",
-   "added": "2019-11-22",
-   "created": "2000-01-01, 2022-01-01",
-   "token_count": 900
+   "added": "2025-06-26",
+   "created": "2025-06-25, 2025-06-25",
+   "token_count": 4062
  }
  ```
 
data/retsinformationdk/retsinformationdk.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b5949575b39426d8084c9001213d838d94b71b739fa63725445402ef73c4c1f2
- size 648707629
+ oid sha256:191bab8a3e7ae419394a622b74ae0fe64e9b5033066eeab4a3b3d2960153d48a
+ size 1017748370
data/scrape_hovedstaden/create.py ADDED
# /// script
# requires-python = "==3.12"
# dependencies = [
#     "datasets==3.2.0",
#     "dynaword"
# ]
# [tool.uv.sources]
# dynaword = { git = "https://huggingface.co/datasets/danish-foundation-models/danish-dynaword", rev = "00e7f2aee7f7ad2da423419f77ecbb9c0536de0d" }
# ///
"""
Script for downloading and processing the Scrape Hovedstaden texts.

Note: To run this script, you need to set `GIT_LFS_SKIP_SMUDGE=1` to be able to install dynaword:

```bash
GIT_LFS_SKIP_SMUDGE=1 uv run data/scrape_hovedstaden/create.py
```
"""

import logging
from datetime import datetime
from pathlib import Path
from typing import cast

from datasets import Dataset, load_dataset

from dynaword.process_dataset import (
    add_token_count,
    ensure_column_order,
    remove_duplicate_text,
    remove_empty_texts,
)

logger = logging.getLogger(__name__)


def main():
    save_path = Path(__file__).parent / "scrape_hovedstaden.parquet"
    # Download data from repo: Den-Intelligente-Patientjournal/region_hovedstaden_text
    ds = load_dataset(
        "Den-Intelligente-Patientjournal/region_hovedstaden_text", split="train"
    )
    dataset: Dataset = cast(Dataset, ds)

    # The "cleaned" column holds the post-processed text
    dataset = dataset.rename_column("cleaned", "text")

    # Add created column: the documents date from between 2015 and 2020
    dataset = dataset.add_column("created", ["2015-01-01, 2020-12-31"] * len(dataset))  # type: ignore
    # Add added column: today
    dataset = dataset.add_column(
        "added", [datetime.today().date().strftime("%Y-%m-%d")] * len(dataset)
    )  # type: ignore
    # Add source column: scrape_hovedstaden
    dataset = dataset.add_column("source", ["scrape_hovedstaden"] * len(dataset))  # type: ignore
    # Add id column: scrape_hovedstaden_{idx}
    dataset = dataset.add_column(
        "id", [f"scrape_hovedstaden_{i}" for i in range(len(dataset))]
    )  # type: ignore

    # quality checks and processing
    dataset = remove_empty_texts(dataset)
    dataset = remove_duplicate_text(dataset)
    dataset = add_token_count(dataset)
    dataset = ensure_column_order(dataset)

    # save to parquet
    dataset.to_parquet(save_path)


if __name__ == "__main__":
    main()
data/scrape_hovedstaden/descriptive_stats.json ADDED
{
  "number_of_samples": 23996,
  "average_document_length": 3329.0515919319887,
  "number_of_tokens": 27066716,
  "revision": "78cc135f92c8c12ee8ba131d1a03befc5c78477d"
}
data/scrape_hovedstaden/images/dist_document_length.png ADDED

Git LFS Details

  • SHA256: dbfbc421ebef7ef85478cac74dfb04326596f78b1094d168737360663d776ea0
  • Pointer size: 131 Bytes
  • Size of remote file: 572 kB
data/scrape_hovedstaden/scrape_hovedstaden.md ADDED
---
pretty_name: Health Hovedstaden
language:
- da
license: cc0-1.0
license_name: CC-0
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
source_datasets:
- Den-Intelligente-Patientjournal/region_hovedstaden_text
domains:
- Medical
- Encyclopedic
---

# Dataset Card for Health Hovedstaden

<!-- START-SHORT DESCRIPTION -->
Guidelines and informational documents for healthcare professionals from the Capital Region
<!-- END-SHORT DESCRIPTION -->

The document collection consists of guidelines and informational documents for healthcare professionals in the Capital Region of Denmark. The documents therefore contain a number of specialized terms and concepts that are frequently used within the healthcare sector.

The corpus was created from the texts in the document collection and has been post-processed so that the texts can be used for the development of language technology.

Martin Sundahl Laursen and Thiusius R. Savarimuthu from the University of Southern Denmark assisted the Danish Agency for Digital Government with the post-processing of the data. Read their joint paper, "Automatic Annotation of Training Data for Deep Learning Based De-identification of Narrative Clinical Text."


## Dataset Description

<!-- START-DESC-STATS -->
- **Language**: dan, dansk, Danish
- **Domains**: Medical, Encyclopedic
- **Number of samples**: 24.00K
- **Number of tokens (Llama 3)**: 27.07M
- **Average document length (characters)**: 3329.05
<!-- END-DESC-STATS -->


## Dataset Structure
An example from the dataset looks as follows.

<!-- START-SAMPLE -->
```py
{
  "id": "scrape_hovedstaden_0",
  "text": "Acetylsalicylsyre - Aspirin, Akutlægebil\n\nMålgrupper og anvendelsesområde\nDefinitioner\nFremgangsmåde[...]",
  "source": "scrape_hovedstaden",
  "added": "2025-06-25",
  "created": "2015-01-01, 2020-12-31",
  "token_count": 766
}
```

### Data Fields

An entry in the dataset consists of the following fields:

- `id` (`str`): A unique identifier for each document.
- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `added` (`str`): The date when the document was added to this collection.
- `created` (`str`): A date range for when the document was originally created.
- `token_count` (`int`): The number of tokens in the sample, computed using the Llama 3 8B tokenizer.
<!-- END-SAMPLE -->


### Additional Processing


### Unintended Uses

Please note that the corpus has been developed for the purpose of language technology development and should not be used as a source of healthcare information. The documents were scraped at a specific time and will therefore not be updated with changes. In this regard, please refer to the Capital Region of Denmark's document collection.


### Dataset Statistics

<!-- START-DATASET PLOTS -->
<p align="center">
<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
</p>
<!-- END-DATASET PLOTS -->


# Additional Information

## License Information
The dataset has been released under a CC-0 license.

### Citation Information

If you use this data, please cite the following paper: [Automatic Annotation of Training Data for Deep Learning Based De-identification of Narrative Clinical Text](https://ceur-ws.org/Vol-3416/paper_5.pdf)
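One practical note on the schema above: `created` holds a plain "start, end" string rather than structured dates, so consumers have to parse the range themselves. A small sketch with a hypothetical `parse_created` helper (illustrative, not part of this PR):

```py
from datetime import date

def parse_created(created: str) -> tuple[date, date]:
    """Split the 'YYYY-MM-DD, YYYY-MM-DD' range stored in the `created` field."""
    start, end = (date.fromisoformat(part.strip()) for part in created.split(","))
    return start, end

print(parse_created("2015-01-01, 2020-12-31"))  # (datetime.date(2015, 1, 1), datetime.date(2020, 12, 31))
```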
data/scrape_hovedstaden/scrape_hovedstaden.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:830fcdc9a16310bf1d165db79ba8b49bdee33cd7b3849ca0564b010d9f3df318
size 41434842
descriptive_stats.json CHANGED
@@ -1,6 +1,6 @@
  {
-   "number_of_samples": 891348,
-   "average_document_length": 15083.994534121353,
-   "number_of_tokens": 4369589385,
-   "revision": "8d056aba9953ef0cf4c402ccb9deff745d8307af"
+   "number_of_samples": 951889,
+   "average_document_length": 15168.871661506751,
+   "number_of_tokens": 4698470546,
+   "revision": "bcb374cb39c593a23e26534d5f2f182dee3edceb"
  }
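The top-level deltas reconcile exactly with the per-dataset changes elsewhere in this diff; a quick check, with the numbers copied from the stats files above:

```py
# Token delta in the root stats equals the sum of the per-dataset deltas.
token_delta = 4_698_470_546 - 4_369_589_385  # root descriptive_stats.json
per_dataset = (
    (818_252_220 - 516_352_722)  # retsinformationdk (rescrape)
    + 27_066_716                 # scrape_hovedstaden (new dataset)
    + (8_723_951 - 8_809_004)    # danske-taler (small drop)
)
assert token_delta == per_dataset == 328_881_161

# Sample delta likewise: retsinformationdk grew and scrape_hovedstaden was added.
assert 951_889 - 891_348 == (100_524 - 63_979) + 23_996  # == 60541
```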
images/dist_document_length.png CHANGED

Git LFS Details (before)

  • SHA256: a061e831f01059e36d3a75145e656ee33c8e9fb6cafaa94dacb057460f6936fd
  • Pointer size: 132 Bytes
  • Size of remote file: 1.89 MB

Git LFS Details (after)

  • SHA256: 2da47bd3c2e1f46fdc567e5dcc8e9ecea05dc5ad6c4abc6f9f525ccbe7ac8363
  • Pointer size: 132 Bytes
  • Size of remote file: 1.94 MB
images/domain_distribution.png CHANGED

Git LFS Details (before)

  • SHA256: f0d13382a5aeeb05a4ee15e59b0dd7e4c8f89a07d63bb639f759069325884923
  • Pointer size: 131 Bytes
  • Size of remote file: 338 kB

Git LFS Details (after)

  • SHA256: 9567ebaa5fe2b1b5df155a22287c4b67ee34b280723a9c9e477db9a89377c3d2
  • Pointer size: 131 Bytes
  • Size of remote file: 332 kB
pyproject.toml CHANGED
@@ -1,6 +1,6 @@
  [project]
  name = "dynaword"
- version = "1.2.1"
+ version = "1.2.3"
  description = "project code for the danish dynaword project"
  readme = "README.md"
  requires-python = ">=3.12,<3.13" # 3.13 has issues with spacy and pytorch
src/dynaword/typings.py CHANGED
@@ -6,6 +6,7 @@ DOMAIN = Literal[
      "Dialect",
      "Encyclopedic",
      "Legal",
+     "Medical",
      "News",
      "Other",
      "Readaloud",
test_results.log CHANGED
@@ -1,24 +1,24 @@
  ============================= test session starts ==============================
- platform darwin -- Python 3.12.9, pytest-8.3.4, pluggy-1.5.0
+ platform darwin -- Python 3.12.0, pytest-8.3.4, pluggy-1.5.0
  rootdir: /Users/kristianjensen/Documents/danish-dynaword
  configfile: pyproject.toml
- collected 310 items
+ collected 319 items
 
  src/tests/test_dataset_schema.py ....................................... [ 12%]
- ............................. [ 21%]
+ ............................... [ 21%]
  src/tests/test_datasheets.py ........................................... [ 35%]
- ........................................................................ [ 59%]
- ....................................................... [ 76%]
+ ........................................................................ [ 57%]
+ ............................................................ [ 76%]
  src/tests/test_load.py .. [ 77%]
- src/tests/test_quality/test_duplicates.py .............................. [ 87%]
- ....s [ 88%]
+ src/tests/test_quality/test_duplicates.py .............................. [ 86%]
+ .....s [ 88%]
- src/tests/test_quality/test_short_texts.py ............................. [ 98%]
- ..... [ 99%]
+ src/tests/test_quality/test_short_texts.py ............................. [ 97%]
+ ...... [ 99%]
  src/tests/test_unique_ids.py . [100%]
 
  =============================== warnings summary ===============================
- src/tests/test_quality/test_short_texts.py: 34 warnings
+ src/tests/test_quality/test_short_texts.py: 35 warnings
  /Users/kristianjensen/Documents/danish-dynaword/.venv/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
 
  -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
- ================= 309 passed, 1 skipped, 34 warnings in 46.75s =================
+ ================= 318 passed, 1 skipped, 35 warnings in 27.88s =================
uv.lock CHANGED
@@ -257,7 +257,7 @@ wheels = [
 
  [[package]]
  name = "dynaword"
- version = "1.2.1"
+ version = "1.2.2"
  source = { editable = "." }
  dependencies = [
      { name = "datasets" },