---
pretty_name: European Parliament
language:
- da
license: cc0-1.0
license_name: CC-0
size_categories:
- 1-10k
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
source_datasets:
- danish-foundation-models/danish-gigaword
domains:
- Conversation
- Spoken
---

# Dataset Card for European Parliament

The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/). Europarl is a corpus of parallel text in 11 languages drawn from the proceedings of the European Parliament, which are published on the web. The corpus has found widespread use in the NLP community; it was initially intended as training data for statistical machine translation.

## Dataset Description

- **Language**: dan, dansk, Danish
- **Domains**: Conversation, Spoken
- **Number of samples**: 3.93K
- **Number of tokens (Llama 3)**: 100.84M
- **Average document length (characters)**: 79360.12

## Dataset Structure

An example from the dataset looks as follows.

```py
{
  "id": "ep_07-02-01-008",
  "text": "TALER 6703: Jeg har stemt for henstillingen om godkendelse af opdelingsanordninger til beskyttelse a[...]",
  "source": "ep",
  "added": "2019-11-20",
  "created": "2004-01-01, 2009-01-01",
  "token_count": 16237
}
```

### Data Fields

An entry in the dataset consists of the following fields:

- `id` (`str`): A unique identifier for each document.
- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `added` (`str`): The date the document was added to this collection.
- `created` (`str`): The date range in which the document was originally created.
- `token_count` (`int`): The number of tokens in the sample, computed using the Llama 3 tokenizer.

### Dataset Statistics
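As a sanity check of the schema described under Data Fields, the sketch below validates the example record from this card and parses the `created` field into a pair of dates. The record is copied from the card; the `parse_created` helper is a hypothetical name introduced here for illustration.

```python
from datetime import date

# Example record copied from this dataset card (text truncated as in the card).
sample = {
    "id": "ep_07-02-01-008",
    "text": "TALER 6703: Jeg har stemt for henstillingen om godkendelse af opdelingsanordninger til beskyttelse a[...]",
    "source": "ep",
    "added": "2019-11-20",
    "created": "2004-01-01, 2009-01-01",
    "token_count": 16237,
}

def parse_created(created: str) -> tuple[date, date]:
    """Split the `created` field ("start, end") into a (start, end) date pair."""
    start, end = (date.fromisoformat(part.strip()) for part in created.split(","))
    return start, end

start, end = parse_created(sample["created"])
assert start <= end                              # the range is ordered
assert isinstance(sample["token_count"], int)    # token_count is an integer
assert date.fromisoformat(sample["added"])       # added is a single ISO date
```

The same checks can be mapped over the full dataset once it is loaded, since every entry shares this schema.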