---
dataset_info:
  features:
    - name: sentence
      dtype: string
    - name: filename
      dtype: string
    - name: audio
      dtype: audio
  splits:
    - name: train
      num_bytes: 613191326.096
      num_examples: 2708
    - name: validation
      num_bytes: 158124184.2
      num_examples: 677
    - name: test
      num_bytes: 205139856
      num_examples: 892
  download_size: 955876356
  dataset_size: 976455366.296
license: cc-by-4.0
task_categories:
  - automatic-speech-recognition
size_categories:
  - 1K<n<10K
tags:
  - asr
  - low-resource
  - nlp
---

**Language:** Nigerian Pidgin English (West African Pidgin variant)

## Dataset Description

### Dataset Summary

The Nigerian Pidgin ASR dataset (v1.0) is the first publicly released speech-to-text corpus for Nigerian Pidgin English, a widely spoken lingua franca across Nigeria and West Africa. The dataset comprises 4,277 audio recordings paired with sentence-level transcriptions, recorded by native speakers across different genders and age groups, and is tailored for speech-related tasks.

The goal of this project is to address the resource gap in West African low-resource languages and foster research in ASR for Nigerian Pidgin.
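The splits described above can be loaded directly with the Hugging Face `datasets` library. A minimal sketch, assuming the dataset is hosted under the repo id `Mardiyyah/nigerian-pidgin-1.0` (substitute the actual path if it differs):

```python
from datasets import load_dataset

# Repo id is an assumption; substitute the actual Hugging Face path.
ds = load_dataset("Mardiyyah/nigerian-pidgin-1.0")

print(ds)  # DatasetDict with train (2708), validation (677), and test (892) examples
print(ds["train"][0]["sentence"])  # sentence-level transcription
```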

## Dataset Details

The dataset focuses on speech recordings of Nigerian Pidgin English. Each entry consists of a sentence-length utterance (8–14 words on average) and its transcription. While originally designed for ASR, the transcriptions and linguistic content also make the dataset suitable for tasks such as TTS, entity extraction, and topic modeling.

### Supported Tasks

- **Automatic Speech Recognition (ASR):** Train and evaluate ASR models on Nigerian Pidgin.
- **Text-to-Speech (TTS):** Build speech synthesis models from the transcriptions.
- **Topic Modeling:** Discover themes and topics prevalent in everyday Nigerian Pidgin discourse.

### Funding Information

No funding was required for this data collection: the LIG-Aikuma app used for recording is free, and all recording was done voluntarily. All volunteers are collectively acknowledged in our paper. Scaling to a larger data collection effort in the future, however, would benefit from funding.

## Dataset Composition

### What are the instances?

Each instance in the dataset consists of:

- An audio recording (WAV, 16 kHz)
- A corresponding sentence-level transcription
- A unique audio ID
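Because the `audio` column is a `datasets` `Audio` feature, the waveform is decoded on access. A minimal sketch of inspecting one instance, under the same repo-id assumption as above:

```python
from datasets import load_dataset

# Repo id is an assumption; substitute the actual Hugging Face path.
train = load_dataset("Mardiyyah/nigerian-pidgin-1.0", split="train")

row = train[0]
audio = row["audio"]  # decoded on access: {"array", "sampling_rate", "path"}
duration = len(audio["array"]) / audio["sampling_rate"]
print(f'{row["filename"]} ({duration:.1f}s): {row["sentence"]}')
```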

### What experiments were initially run on this dataset?

The dataset has been used to train end-to-end speech recognition models: QuartzNet (via NVIDIA NeMo), Wav2Vec base 100H, and Wav2Vec-XLSR53. Information on the model weights and WER can be found here: link to model repo
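For reference, the sketch below shows one way to score a Wav2Vec-style checkpoint on the test split with `transformers` and `jiwer`. It is illustrative only: the public `facebook/wav2vec2-base-100h` checkpoint stands in for the paper's fine-tuned weights, and the repo id and 8-example sample are assumptions.

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from jiwer import wer

# Checkpoint and repo id are assumptions, stand-ins for the project's weights.
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h").eval()

test = load_dataset("Mardiyyah/nigerian-pidgin-1.0", split="test")

refs, hyps = [], []
for row in test.select(range(8)):  # small sample for a quick check
    inputs = processor(row["audio"]["array"],
                       sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
    hyps.append(processor.batch_decode(pred_ids)[0].lower())
    refs.append(row["sentence"].lower())

print(f"WER on sample: {wer(refs, hyps):.2%}")
```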

## Data Collection Process

### How was the data collected?

- **Textual corpus:** We leverage a text-to-text parallel corpus crawled by * as the base for our speech recordings. The crawled data consists of 56,695 sentences and 32,925 distinct words, covering topics ranging from sports, politics, and entertainment to everyday life. From this, we selected 4,288 utterances for recording speech data, each averaging 8 to 14 words.

- **Speech recording:** We recorded our own speech corpus from the selected utterances above, using the LIG-Aikuma Android app in quiet settings. The recordings come from 10 native speakers (ages 20–28; 5 male, 5 female). After data quality filtering, 4,277 recordings remain, partitioned into training, validation, and test sets. The composition of the final speech dataset is shown below:

  | Split      | Recordings |
  |------------|-----------:|
  | Train      |      2,708 |
  | Validation |        677 |
  | Test       |        892 |

### Are there any known errors, sources of noise, or redundancies in the data?

Occasional background sounds (doors, table slams), speaker hesitations, and laughter appear in some recordings; these are not reflected in transcripts.


## Data Preprocessing

### What preprocessing/cleaning was done?

- Removal of empty or unintelligible audio clips
- Standardization of all audio files to 16 kHz
- Trimming of recordings to under the 30 s limit
- Standard text preprocessing of transcripts (a sketch of a comparable pipeline follows this list)
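The card does not specify the exact tools used for these steps, so the following is a minimal sketch of a comparable pipeline using `librosa`; the helper names and the normalization rules are hypothetical.

```python
import re

import librosa

TARGET_SR = 16_000   # standardize audio to 16 kHz
MAX_SECONDS = 30     # trim recordings to the 30 s limit

def preprocess_audio(path: str):
    """Load a clip, resample to 16 kHz, and trim to at most 30 seconds."""
    wav, sr = librosa.load(path, sr=TARGET_SR)  # librosa resamples on load
    return wav[: MAX_SECONDS * TARGET_SR]

def normalize_text(text: str) -> str:
    """Standard transcript cleanup: lowercase, strip punctuation, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^\w\s']", " ", text)       # drop punctuation except apostrophes
    return re.sub(r"\s+", " ", text).strip()

print(normalize_text("How you dey?  I dey fine!"))  # -> how you dey i dey fine
```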

## Dataset Distribution

- **How is the dataset distributed?** The dataset is freely available for use and reproduction. Proper citation of the authors is required (see citation).

- **When will the dataset be released/first distributed?** The dataset was released publicly on 18 July 2025 and is now accessible via this repository.

## Dataset Maintenance

### Who is supporting/hosting/maintaining the dataset?

The dataset is maintained by the original authors and contributors of the Nigerian Pidgin Project.

### How does one contact the owner/curator/manager of the dataset?

Via the GitHub issues page or Hugging Face discussion forums associated with the project.

### Will the dataset be updated?

Yes, updates may be released in the future.

### How often and by whom?

The dataset will be updated by the project team should new data be curated or corrections made.

### How will updates/revisions be documented and communicated?

Updates will be documented on GitHub using version tags and communicated through the main Hugging Face repository.

### Is there a repository to link to any/all papers/systems that use this dataset?

Yes, a GitHub repository will track publications and systems using this dataset.

### If others want to extend/augment/build on this dataset, is there a mechanism for them to do so?

Yes, contributions are encouraged via GitHub. Quality will be assessed through pull requests, and accepted contributions will be communicated to users via version tags and release notes.


## Citation

```bibtex
@misc{rufai2025endtoendtrainingautomaticspeech,
  title={Towards End-to-End Training of Automatic Speech Recognition for Nigerian Pidgin},
  author={Amina Mardiyyah Rufai and Afolabi Abeeb and Esther Oduntan and Tayo Arulogun and Oluwabukola Adegboro and Daniel Ajisafe},
  year={2025},
  eprint={2010.11123},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  url={https://arxiv.org/abs/2010.11123},
}
```