Tasks: Text Generation
Formats: parquet
Sub-tasks: language-modeling
Languages: Danish
Size: 1M - 10M
Updated changelog

Files changed:
- CHANGELOG.md +22 -1
- README.md +38 -42
- data/danske-taler/danske-taler.md +1 -1
- data/wikisource/descriptive_stats.json +1 -1
- data/wikisource/wikisource.md +1 -1
- descriptive_stats.json +1 -1
- src/dynaword/datasheet.py +9 -3
- src/dynaword/plots.py +6 -0
- src/dynaword/tables.py +3 -1
- src/dynaword/update_descriptive_statistics.py +1 -1
CHANGELOG.md
CHANGED
@@ -5,6 +5,27 @@ All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
+## [v1.1.0] - 2025-04-29
+
+### Added
+
+- Added multiple quality controls
+  - Removed all empty strings
+  - Removed duplicates within datasets
+  - Removed
+- Restructured datasets
+  - Removed columns from the dataset to make the structure more lightweight; these include domain, metadata, and license. These have been moved to the individual datasheets. It is still possible to filter for license by using the dataset name.
+  - Added column for number of tokens
+- For developers
+  - Restructured CI codebase substantially
+  - Added `DataSheet` to make CI more convenient
+  - Factored out plots and tables
+
+### Docs
+
+- Sorted overview table
+- Minor changes to dataset documentation
+
 ## [v1.0.11] - 2025-03-29
 
 ### Added
@@ -21,4 +42,4 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 - Sorted main table in readme
 - Added Changelog
-- Minor
+- Minor changes to dataset documentation
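The v1.1.0 entry notes that the shared `license` and `domain` columns were dropped, and that filtering by license is still possible via the dataset (source) name. Below is a minimal sketch of that workflow using the per-source configs declared in the README `configs:` block; the repository id is an assumption, and the example CC-0 sources are read off the overview table further down.

```python
# Sketch: rebuild a license-filtered subset now that `license` is no longer a column.
# The repo id is assumed; config names come from the README, licenses from the overview table.
from datasets import concatenate_datasets, load_dataset

REPO_ID = "danish-foundation-models/danish-dynaword"  # assumed repo id
CC0_SOURCES = ["hest", "opensubtitles", "wiki", "adl"]  # a subset of the CC-0 rows in the table

subsets = [load_dataset(REPO_ID, name, split="train") for name in CC0_SOURCES]
cc0_corpus = concatenate_datasets(subsets)
print(cc0_corpus.num_rows, cc0_corpus.column_names)
```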
README.md
CHANGED
@@ -4,7 +4,7 @@ configs:
 - config_name: default
   data_files:
   - split: train
-    path:
+    path: data/*/*.parquet
 - config_name: ai-aktindsigt
   data_files:
   - split: train
@@ -268,15 +268,12 @@ Each entry in the dataset consists of a single text with associated metadata
 
 An entry in the dataset consists of the following fields:
 
+- `id` (`str`): A unique identifier for each document.
 - `text` (`str`): The content of the document.
 - `source` (`str`): The source of the document (see [Source Data](#source-data)).
-- `id` (`str`): A unique identifier for each document.
 - `added` (`str`): The date when the document was added to this collection.
 - `created` (`str`): The date range for when the document was originally created.
-- `domain` (`str`): The domain of the source
-- `metadata/source-pretty` (`str`): The long-form version of the short-form source name
-- `metadata/*`: Potentially additional metadata
+- `token_count` (`int`): The number of tokens in the sample, computed using the Llama 3 tokenizer
 <!-- END-SAMPLE -->
 
 ### Data Splits
@@ -298,39 +295,39 @@ This data generally contains no annotation besides the metadata attached to each
 Below follows a brief overview of the sources in the corpus along with their individual license.
 
 <!-- START-MAIN TABLE -->
-| Source | Description | N. Tokens | License |
-| [eur-lex-sum-da] | The Danish subsection of EUR-lex SUM consisting of EU legislation paired with professionally written summaries | 31.37M | [CC-BY-SA 4.0] |
+| Source | Description | Domain | N. Tokens | License |
+|:--------------------|:------------------------------------------------------------------------------------------------------------------------------------|:--------------|:------------|:-----------------------|
+| **Total** | | | 3.36B | |
+| [cellar] | The official digital repository for European Union legal documents and open data | Legal | 1.15B | [CC-BY-SA 4.0] |
+| [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk), the official legal information system of Denmark | Legal | 516.35M | [Danish Copyright Law] |
+| [hest] | Samples from the Danish debate forum www.heste-nettet.dk | Social Media | 389.32M | [CC-0] |
+| [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | Conversations | 271.60M | [CC-0] |
+| [ai-aktindsigt] | Multiple web scrapes from municipality websites collected as a part of the [AI-aktindsigt](https://ai-aktindsigt.dk) project | Web | 139.23M | [Apache 2.0] |
+| [miljoeportalen] | Data from [Danmarks Miljøportalen](https://www.miljoeportal.dk/om-danmarks-miljoeportal/) (Denmark's Environment Portal) | Web | 127.38M | [CC-0] |
+| [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | Legal | 122.11M | [CC-0] |
+| [wiki] | The Danish subsection of [wikipedia](https://en.wikipedia.org/wiki/Main_Page) | Encyclopedic | 122.00M | [CC-0] |
+| [ft] | Records from all meetings of The Danish parliament (Folketinget) in the parliament hall | Conversation | 114.09M | [CC-0] |
+| [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | Conversation | 100.84M | [CC-0] |
+| [adl] | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL) | Books | 58.49M | [CC-0] |
+| [retspraksis] | Case law or judicial practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | Legal | 56.26M | [CC-0] |
+| [fm-udgivelser] | The official publication series of the Danish Ministry of Finance containing economic analyses, budget proposals, and fiscal policy documents | Legal | 50.34M | [CC-BY-SA 4.0] |
+| [nordjyllandnews] | Articles from the Danish newspaper [TV2 Nord](https://www.tv2nord.dk) | News | 37.90M | [CC-0] |
+| [eur-lex-sum-da] | The Danish subsection of EUR-lex SUM consisting of EU legislation paired with professionally written summaries | Legal | 31.37M | [CC-BY-SA 4.0] |
+| [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | News | 21.67M | [CC-BY-SA 4.0] |
+| [memo] | The MeMo corpus comprising almost all Danish novels from the period 1870-1899, known as the Modern Breakthrough | Books | 9.28M | [CC-BY-SA 4.0] |
+| [danske-taler] | Danish speeches from [dansketaler.dk](https://www.dansketaler.dk) | Speeches | 8.23M | [CC-0] |
+| [nota] | The text-only part of the [Nota lyd- og tekstdata](https://sprogteknologi.dk/dataset/nota-lyd-og-tekstdata) dataset | Readaloud | 7.30M | [CC-0] |
+| [gutenberg] | The Danish subsection from Project [Gutenberg](https://www.gutenberg.org) | Books | 6.76M | [Gutenberg] |
+| [wikibooks] | The Danish subsection of [Wikibooks](https://www.wikibooks.org) | Encyclopedic | 6.24M | [CC-0] |
+| [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | Encyclopedic | 5.34M | [CC-0] |
+| [jvj] | The works of the Danish author and poet [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | Books | 3.55M | [CC-BY-SA 4.0] |
+| [spont] | Conversational samples collected as a part of research projects at Aarhus University | Conversations | 1.56M | [CC-0] |
+| [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | Other | 1.48M | [DanNet 1.0] |
+| [relig] | Danish religious texts from 1700-2022 | Books | 1.24M | [CC-0] |
+| [botxt] | The Bornholmsk Ordbog Dictionary Project | Dialect | 847.97K | [CC-0] |
+| [naat] | Danish speeches from 1930-2022 | Conversations | 286.68K | [CC-0] |
+| [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | Other | 185.45K | [CC-BY-SA 4.0] |
+| [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | Other | 52.02K | [CC-0] |
 
 [ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
 [cellar]: data/cellar/cellar.md
@@ -389,8 +386,7 @@ TODO:
 
 <!-- START-DATASET PLOTS -->
 <p align="center">
-<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
-<img>
+<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
 </p>
 <!-- END-DATASET PLOTS -->
 
@@ -423,4 +419,4 @@ We will comply with legitimate requests by removing the affected sources from th
   <img src="./docs/icon.png" width="30" style="margin-right: 10px;" />
 </a>
 A <a href=https://www.foundationmodels.dk>Danish Foundation Models</a> dataset
-</h3>
+</h3>
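The new `token_count` column stores a precomputed token count per document. A minimal sketch of how a comparable count could be reproduced for a single entry; the exact tokenizer checkpoint is an assumption (the card only says Llama 3), and the repository's own counting code may differ.

```python
# Sketch: recompute a token count comparable to the new `token_count` column.
# Assumes a Llama 3 tokenizer loaded via transformers (checkpoint id assumed, access may be gated).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

def count_tokens(text: str) -> int:
    # Count content tokens only; leave out special tokens such as <|begin_of_text|>.
    return len(tokenizer(text, add_special_tokens=False)["input_ids"])

entry = {"id": "wiki_0", "source": "wiki", "text": "København er Danmarks hovedstad."}
print(count_tokens(entry["text"]))
```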
data/danske-taler/danske-taler.md
CHANGED
@@ -14,7 +14,7 @@ domains:
 - Spoken
 ---
 
-# Dataset Card for
+# Dataset Card for Danske Taler
 
 <!-- START-SHORT DESCRIPTION -->
 Danish Speeches from [dansketaler.dk](https://www.dansketaler.dk).
data/wikisource/descriptive_stats.json
CHANGED
@@ -2,5 +2,5 @@
   "number_of_samples": 2419,
   "average_document_length": 6398.767259198015,
   "number_of_tokens": 5344862,
-  "revision": "
+  "revision": "43d839aa9331a74eaa55c04d23d77b7dc19a20d8"
 }
data/wikisource/wikisource.md
CHANGED
@@ -29,7 +29,7 @@ The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page).
 
 <!-- START-DESC-STATS -->
 - **Language**: dan, dansk, Danish
-- **Domains**:
+- **Domains**: Encyclopedic
 - **Number of samples**: 2.42K
 - **Number of tokens (Llama 3)**: 5.34M
 - **Average document length (characters)**: 6398.77
descriptive_stats.json
CHANGED
@@ -2,5 +2,5 @@
   "number_of_samples": 846387,
   "average_document_length": 12453.449737531413,
   "number_of_tokens": 3363395483,
-  "revision": "
+  "revision": "43d839aa9331a74eaa55c04d23d77b7dc19a20d8"
 }
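Both `descriptive_stats.json` files share the same schema: `number_of_samples`, `average_document_length` (characters), `number_of_tokens`, and the git `revision` they were computed at. A minimal sketch of how such a record could be derived from a loaded split; the repository's actual logic lives in `update_descriptive_statistics.py` and may differ.

```python
# Sketch: derive a descriptive_stats.json-style record from a dataset split.
# Uses the new `token_count` column for the token total; `revision` is the commit hash.
import json
from datasets import Dataset

def compute_descriptive_stats(ds: Dataset, revision: str) -> dict:
    doc_lengths = [len(text) for text in ds["text"]]
    return {
        "number_of_samples": len(ds),
        "average_document_length": sum(doc_lengths) / len(ds),
        "number_of_tokens": int(sum(ds["token_count"])),
        "revision": revision,
    }

toy = Dataset.from_dict({"text": ["en kort tekst", "endnu en tekst"], "token_count": [4, 5]})
print(json.dumps(compute_descriptive_stats(toy, revision="43d839aa"), indent=2))
```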
src/dynaword/datasheet.py
CHANGED
@@ -168,6 +168,8 @@ class DataSheet(BaseModel):
         raise ValueError(f"tag ({tag}) not found in readme")
 
     def get_tag_content(self, tag: str | DEFAULT_SECTION_TAGS) -> str:
+        if isinstance(tag, Enum):
+            tag = tag.value
         s, e = self.get_tag_idx(tag=tag)
         tag_start = f"<!-- START-{tag} -->"
         return self.body[s + len(tag_start) : e].strip()
@@ -191,12 +193,16 @@
 
         if self.domains:
             domains = ", ".join(self.domains)
-            package += f"- **Domains**: {domains}"
+            package += f"- **Domains**: {domains}\n"
 
-        package
+        package += (
+            dedent(f"""
+        - **Number of samples**: {human_readable_large_int(d_stats.number_of_samples)}
         - **Number of tokens (Llama 3)**: {human_readable_large_int(d_stats.number_of_tokens)}
         - **Average document length (characters)**: {d_stats.average_document_length:.2f}
-        """)
+        """).strip()
+            + "\n"
+        )
 
         return self.replace_tag(
             package=package,
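`DataSheet` edits the markdown body between `<!-- START-{TAG} -->` / `<!-- END-{TAG} -->` markers, as `get_tag_content` above shows for the read path. A standalone sketch of the corresponding write path; the real `replace_tag`/`get_tag_idx` methods in `datasheet.py` may be implemented differently.

```python
# Sketch: swap the content between START/END markers, mirroring what DataSheet.replace_tag
# is used for in this commit (e.g. tag="DESC-STATS" or tag="MAIN TABLE").
def replace_tag(body: str, package: str, tag: str) -> str:
    start, end = f"<!-- START-{tag} -->", f"<!-- END-{tag} -->"
    s, e = body.index(start), body.index(end)
    return body[: s + len(start)] + "\n" + package.strip() + "\n" + body[e:]

readme = "intro\n<!-- START-DESC-STATS -->\nold stats\n<!-- END-DESC-STATS -->\noutro"
print(replace_tag(readme, "- **Domains**: Encyclopedic\n- **Number of samples**: 2.42K", "DESC-STATS"))
```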
src/dynaword/plots.py
CHANGED
@@ -8,6 +8,12 @@ from datasets import Dataset
 logger = logging.getLogger(__name__)
 
 
+# TODO:
+# make double pie chart
+# inner layer: Domains
+# Outer layer: source
+
+
 def create_descriptive_statistics_plots(
     dataset: Dataset,
     save_dir: Path,
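The new TODO describes a "double" pie chart with domains on the inner ring and their sources on the outer ring. A minimal matplotlib sketch of that layout using a few token counts from the overview table; the dataframe columns and the aggregation are assumptions, not the repository's plotting code.

```python
# Sketch: nested pie chart -- inner ring = domains, outer ring = sources grouped by domain.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame(  # a handful of rows from the overview table, tokens in millions
    {
        "Source": ["cellar", "retsinformationdk", "hest", "opensubtitles", "wiki"],
        "Domain": ["Legal", "Legal", "Social Media", "Conversations", "Encyclopedic"],
        "Tokens": [1150, 516, 389, 272, 122],
    }
).sort_values(["Domain", "Tokens"], ascending=[True, False])

inner = df.groupby("Domain", sort=False)["Tokens"].sum()  # domain totals, in outer-ring order

fig, ax = plt.subplots()
ax.pie(df["Tokens"], radius=1.0, labels=df["Source"],
       wedgeprops=dict(width=0.3, edgecolor="w"))
ax.pie(inner, radius=0.7, labels=inner.index, labeldistance=0.4,
       wedgeprops=dict(width=0.3, edgecolor="w"))
ax.set(aspect="equal", title="Tokens by domain (inner) and source (outer)")
plt.show()
```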
src/dynaword/tables.py
CHANGED
@@ -60,9 +60,11 @@ def create_overview_table(repo_path: Path = repo_path) -> pd.DataFrame:
 
         sheet = DataSheet.load_from_path(readme_path)
         desc_stats = sheet.get_descritive_stats()
+        main_domain = sheet.domains[0] if sheet.domains else ""
 
         table["Source"] += [f"[{dataset_path.name}]"]
         table["License"] += [f"[{sheet.license_name}]"]
+        table["Domain"] += [main_domain]
        table["Description"] += [sheet.short_description]
         table["N. Tokens"] += [desc_stats.number_of_tokens]
 
@@ -74,7 +76,7 @@ def create_overview_table(repo_path: Path = repo_path) -> pd.DataFrame:
     table["N. Tokens"] += [sum(table["N. Tokens"])]
 
     df = pd.DataFrame.from_dict(table)
-    df = df.sort_values("N. Tokens")
+    df = df.sort_values("N. Tokens", ascending=False)
     df["N. Tokens"] = df["N. Tokens"].apply(human_readable_large_int)
 
     return df
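`create_overview_table` formats the raw counts with `human_readable_large_int` (3363395483 is rendered as 3.36B in the main table). That helper is not part of this diff; below is a plausible stand-in with the same observable behaviour, for illustration only.

```python
# Sketch: format large integers the way the overview table displays them,
# e.g. 3_363_395_483 -> "3.36B", 52_020 -> "52.02K". The repo's actual helper may differ.
def human_readable_large_int(value: int) -> str:
    for threshold, suffix in [(1_000_000_000, "B"), (1_000_000, "M"), (1_000, "K")]:
        if value >= threshold:
            return f"{value / threshold:.2f}{suffix}"
    return str(value)

assert human_readable_large_int(3_363_395_483) == "3.36B"
assert human_readable_large_int(52_020) == "52.02K"
```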
src/dynaword/update_descriptive_statistics.py
CHANGED
@@ -75,8 +75,8 @@ def update_dataset(
     sheet.body = sheet.add_sample_and_description(ds)
     sheet.body = sheet.add_dataset_plots(ds, create_plot=True)
 
-    logger.info("Updating Overview table")
     if dataset_name == "default":
+        logger.info("Updating Overview table")
         package = create_overview_table_str()
         sheet.body = sheet.replace_tag(package=package, tag="MAIN TABLE")
 