Dataset Card for the LnNor Corpus
IMPORTANT: This is the raw version of the LnNor corpus. If you intend to use the dataset for training or evaluation, you may be more interested in MultiBridge/LnNor, which contains the same audio recordings, but segmented into smaller samples and converted to mono at 16 kHz.
A multilingual dataset of high-quality speech recordings in Norwegian, English, and Polish, designed for research into cross-linguistic influence, multilingual language acquisition, and applications in NLP and speech processing such as ASR, TTS, and linguistic variability modeling. The dataset includes 2,783 recordings, totaling 101 hours, with a size of 50.1 GB. These recordings capture phonological, syntactic, and semantic variability through structured tasks like reading, picture description, and spontaneous conversation.
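For orientation, here is a minimal loading sketch using the Hugging Face `datasets` library. The repository id `MultiBridge/LnNor` refers to the segmented version mentioned above; the split name and the `audio`/`label` column names are assumptions based on the viewer preview schema.

```python
# Minimal loading sketch (assumptions: repo id of the segmented version,
# a "train" split, and "audio"/"label" columns as in the viewer preview).
from datasets import load_dataset

# Stream to avoid downloading the full corpus up front (the raw set is ~50 GB).
ds = load_dataset("MultiBridge/LnNor", split="train", streaming=True)

for example in ds.take(3):
    audio = example["audio"]  # dict with "array", "sampling_rate", and "path"
    print(audio["sampling_rate"], example["label"])
```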
Dataset Details
Dataset Description
- Curated by: Magdalena Wrembel, Krzysztof Hwaszcz, Agnieszka Pludra, Anna Skałba, Jarosław Weckwerth, Kamil Malarski, Zuzanna Ewa Cal, Hanna Kędzierska, Tristan Czarnecki-Verner, Anna Balas, Kamil Kaźmierski, Sylwiusz Żychliński, Justyna Gruszecka
- Funded by: Norwegian Financial Mechanism 2014-2021, project number 2019/34/H/HS2/00495
- Language(s) (NLP): Norwegian, English, Polish
- License: Creative Commons Attribution 4.0
Dataset Sources
- Repository: https://adim.web.amu.edu.pl/en/lnnor-corpus/
Uses
Direct Use
- Multilingual ASR training: Supports building and evaluating ASR systems for multilingual and code-switching scenarios.
- Linguistic modeling: Enables research on phonological, syntactic, and semantic variability in multilingual contexts.
- TTS and speech synthesis: Provides diverse phonetic data for training multilingual text-to-speech models.
- Cross-linguistic NLP research: Facilitates studies on L3 acquisition and cross-linguistic influence in multilinguals.
Out-of-Scope Use
- Privacy-violating applications: The dataset is anonymized and must not be used for speaker identification or biometric analysis tasks.
- Non-supported languages: The dataset is tailored for Norwegian, English, and Polish only.
Dataset Structure
The recordings are systematically labeled using a structured format: PROJECT_SPEAKER ID_LANGUAGE STATUS_TASK.
Each component of the label provides specific details:
- PROJECT: The project under which the data was collected. Possible values:
- A for ADIM,
- C for CLIMAD.
- SPEAKER ID: A unique 8-character identifier assigned to each speaker.
- LANGUAGE STATUS: The language used in the recording and its status for the speaker; examples:
- L1PL (Polish as L1),
- L2EN (English as L2),
- L3NO (Norwegian as L3).
- TASK: The type of speech task recorded. Examples include:
- WR (word reading),
- SR (sentence reading),
- TR (text reading "The North Wind and the Sun"),
- PD (picture description),
- ST (story telling),
- VT (video story telling),
- VD (video description),
- TP/TE (translation from Polish/English into Norwegian).
If a task type was repeated, sequential numbers (e.g., SR1, SR2) are appended to distinguish iterations.
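To make the scheme concrete, here is a small parsing sketch for such labels; the example identifier and the exact character classes are assumptions based on the description above.

```python
import re

# Pattern for PROJECT_SPEAKERID_LANGUAGESTATUS_TASK, e.g. "A_AB12CD34_L3NO_SR2".
# The example id "AB12CD34" is hypothetical; the 8-character speaker id and the
# optional trailing digit for repeated tasks follow the description above.
LABEL_RE = re.compile(
    r"^(?P<project>[AC])"                        # A = ADIM, C = CLIMAD
    r"_(?P<speaker>\w{8})"                       # unique 8-character speaker id
    r"_(?P<status>L(?:\d|n))(?P<lang>[A-Z]{2})"  # e.g. L1PL, L2EN, L3NO
    r"_(?P<task>[A-Z]{2})(?P<rep>\d*)$"          # e.g. WR, SR1, SR2, TR, PD
)

def parse_label(name: str) -> dict:
    """Split a recording label into its components."""
    match = LABEL_RE.match(name)
    if match is None:
        raise ValueError(f"Unrecognized label: {name!r}")
    return match.groupdict()

print(parse_label("A_AB12CD34_L3NO_SR2"))
# {'project': 'A', 'speaker': 'AB12CD34', 'status': 'L3', 'lang': 'NO',
#  'task': 'SR', 'rep': '2'}
```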
Dataset Creation
Curation Rationale
The dataset was developed to advance research in multilingualism and third language (L3) acquisition, with a specific focus on Norwegian, English, and Polish. Its primary aim is to enable studies on cross-linguistic influence; phonological, syntactic, and semantic variability; and multilingual language processing. It supports the development of technologies such as multilingual ASR, TTS, and NLP systems.
Source Data
The dataset was collected as part of two research projects, CLIMAD (Cross-linguistic Influence in Multilingualism across Domains: Phonology and Syntax; PI prof. Magdalena Wrembel, UAM) and ADIM (Across-domain Investigations in Multilingualism: Modeling L3 Acquisition in Diverse Settings; PIs prof. Magdalena Wrembel, UAM, and prof. Marit Westergaard, UiT), which focused on cross-linguistic influence and L3 acquisition in multilingual settings. The dataset comprises recordings from 231 speakers across three languages: Norwegian, English, and Polish. Speakers include L1 Polish learners of Norwegian, native speakers of English and Norwegian, and L2/L3/Ln speakers of English and Norwegian. Speech was elicited using a range of tasks such as word, sentence, and text readings, picture descriptions, video story retelling, and socio-phonetic interviews. Metadata is based on the Language History Questionnaire and includes age, gender, language proficiency, exposure, and other sociolinguistic factors.
Data Collection and Processing
Data were recorded between 2021 and 2024 using Shure SM-35 unidirectional cardioid microphones and Marantz PMD620 recorders, ensuring minimal noise interference. Recordings were captured at 48 kHz with 16-bit resolution. Some of the recordings were annotated with orthographic and/or phonetic transcriptions and aligned at the word and phoneme level. Metadata includes speaker characteristics, language status (L1, L2, L3/Ln), task type, and audio details.
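For users who want the raw 48 kHz recordings in the 16 kHz mono format of the segmented release, a conversion sketch along the following lines should work (torchaudio assumed; the file name is hypothetical).

```python
import torchaudio
import torchaudio.functional as F

# Convert a raw 48 kHz recording (hypothetical file name) to 16 kHz mono,
# matching the format of the segmented MultiBridge/LnNor release.
waveform, sample_rate = torchaudio.load("A_AB12CD34_L3NO_TR.wav")

mono = waveform.mean(dim=0, keepdim=True)  # downmix channels to mono
resampled = F.resample(mono, orig_freq=sample_rate, new_freq=16_000)

torchaudio.save("A_AB12CD34_L3NO_TR_16k.wav", resampled, 16_000)
```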
Who are the source data producers?
Source data producers include:
- Polish L1 speakers learning Norwegian as L3/Ln in formal and naturalistic contexts,
- native speakers of Norwegian and English as control groups,
- speakers of English and Norwegian as L2/L3/Ln with diverse L1 backgrounds.
Annotations
The dataset includes the following types of annotations:
- Orthographic transcriptions (available for selected recordings)
- Phonetic transcriptions (available for selected recordings)
- Word-level alignments (available for selected recordings)
- Phoneme-level alignments (available for selected recordings)
- Speaker metadata (available for all recordings)
- speaker ID, age, gender, education, current residence, language proficiency (native and additional languages), language status (L1, L2, L3/Ln)
- Audio metadata (available for all recordings)
- recording ID, task type (e.g., word reading, sentence reading), sampling rate
Annotation process
The annotation process combined both automated and manual methods. It consisted of the following steps:
- Orthographic transcriptions: For Polish and English recordings, transcriptions were generated using an STT tool or created manually by linguists with a high level of proficiency in the respective languages. Norwegian transcriptions were entirely human-generated to ensure high accuracy.
- Phonetic transcriptions: Phonetic transcriptions were automatically generated using WebMAUS. The output was encoded in SAMPA (Speech Assessment Methods Phonetic Alphabet), ensuring consistency and compatibility with downstream processing.
- Alignments: Word- and phoneme-level alignments were created using WebMAUS, which produced TextGrids that aligned the transcriptions with corresponding audio files.
- Speaker metadata: The speaker metadata were collected before the recording sessions through the Language History Questionnaire (LHQ) and supplementary forms provided to participants. These forms were designed to capture detailed linguistic and demographic information, ensuring a comprehensive profile of each speaker.
- Audio metadata: The audio metadata were automatically captured during the recording process by the equipment used for data collection and embedded into the corresponding audio files.
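For reference, here is a sketch of reading the resulting word- and phoneme-level alignments with the third-party `textgrid` package; the tier names ("ORT" for words, "MAU" for phonemes) follow common WebMAUS conventions and are assumptions here.

```python
# Reading WebMAUS alignments with the third-party `textgrid` package
# (pip install textgrid). Tier names are assumptions: WebMAUS commonly
# writes "ORT" (orthographic words) and "MAU" (phoneme segments).
import textgrid

tg = textgrid.TextGrid.fromFile("A_AB12CD34_L3NO_TR.TextGrid")  # hypothetical path

for tier in tg.tiers:
    if tier.name in ("ORT", "MAU"):
        for interval in tier:
            if interval.mark:  # skip empty (pause) intervals
                print(f"{tier.name}\t{interval.minTime:.3f}\t{interval.maxTime:.3f}\t{interval.mark}")
```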
Who are the annotators?
The annotations were created under the supervision of a team of linguists and language experts from Adam Mickiewicz University in Poznań and the University of Szczecin, all of whom were members of the CLIMAD and ADIM projects. The annotators had extensive experience in transcription, phonetic analysis, and linguistic research in Polish, English, and Norwegian. Their role in the annotation process included:
- providing expertise in phonetic analysis and transcription techniques,
- supervising the use of automated tools such as WebMAUS for phonetic transcriptions and alignments,
- generating transcriptions for recordings that featured languages with limited support in STT tools (i.e., Norwegian) or contained challenging audio (overlapping speech or atypical pronunciations that required careful transcription),
- validating a subset of annotations to ensure high-quality outputs for critical data points.
While the majority of annotations were generated using automated tools, the annotators’ oversight ensured consistency and accuracy across the dataset.
Personal and Sensitive Information
During the recordings, the participants were asked not to disclose any personal or sensitive information, which was especially relevant for the tasks eliciting free speech. The remaining tasks based on text reading did not contain any personal or sensitive information.
Bias, Risks, and Limitations
Potential biases in this dataset include:
- Participant demographics: The majority of participants were young adults aged 18 to 25.
- Gender distribution: Women constituted 68% of the speakers.
- Linguistic scope: The speech samples are mostly limited to the three languages under investigation, i.e., Norwegian, English, and Polish.
Recommendations
We recommend using the set of short audio files (under 30 s) for any subsequent analysis. The raw recordings of full tasks can be found [TBA].
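If duration information is not precomputed, clips can be filtered on the fly; here is a sketch with `datasets` (same assumed repo id and columns as in the loading sketch above).

```python
from datasets import load_dataset

ds = load_dataset("MultiBridge/LnNor", split="train")  # assumed repo id and split

# Keep only clips shorter than 30 seconds, per the recommendation above.
def under_30_seconds(example):
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"] < 30.0

short_ds = ds.filter(under_30_seconds)
print(f"{len(short_ds)} clips under 30 s")
```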
Dataset Card Authors
Agnieszka Pludra
Izabela Krysińska
Piotr Kabaciński
Dataset Card Contact