---
dataset_info:
- config_name: 10 minutes
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: File No.
dtype: int64
- name: ENVIRONMENT
dtype: string
- name: YEAR
dtype: int64
- name: AGE
dtype: int64
- name: GENDER
dtype: string
- name: SPEAKER_ID
dtype: int64
- name: Transcriptions
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 19890637.0
num_examples: 32
- name: test
num_bytes: 152943614.0
num_examples: 241
download_size: 168193251
dataset_size: 172834251.0
- config_name: 120 minutes
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: File No.
dtype: int64
- name: ENVIRONMENT
dtype: string
- name: YEAR
dtype: int64
- name: AGE
dtype: int64
- name: GENDER
dtype: string
- name: SPEAKER_ID
dtype: int64
- name: Transcriptions
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 230619340.0
num_examples: 368
- name: test
num_bytes: 152943614.0
num_examples: 241
download_size: 372662025
dataset_size: 383562954.0
- config_name: 240 minutes
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: File No.
dtype: int64
- name: ENVIRONMENT
dtype: string
- name: YEAR
dtype: int64
- name: AGE
dtype: int64
- name: GENDER
dtype: string
- name: SPEAKER_ID
dtype: int64
- name: Transcriptions
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 461415371.0
num_examples: 739
- name: test
num_bytes: 152943614.0
num_examples: 241
download_size: 597533036
dataset_size: 614358985.0
- config_name: 60 minutes
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: File No.
dtype: int64
- name: ENVIRONMENT
dtype: string
- name: YEAR
dtype: int64
- name: AGE
dtype: int64
- name: GENDER
dtype: string
- name: SPEAKER_ID
dtype: int64
- name: Transcriptions
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 115549400.0
num_examples: 185
- name: test
num_bytes: 152943614.0
num_examples: 241
download_size: 260859532
dataset_size: 268493014.0
- config_name: default
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: File No.
dtype: int64
- name: ENVIRONMENT
dtype: string
- name: YEAR
dtype: int64
- name: AGE
dtype: int64
- name: GENDER
dtype: string
- name: SPEAKER_ID
dtype: int64
- name: Transcriptions
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 19890637
num_examples: 32
- name: test
num_bytes: 150820301.5320057
num_examples: 241
download_size: 168193251
dataset_size: 170710938.5320057
configs:
- config_name: 10 minutes
data_files:
- split: train
path: 10 minutes/train-*
- split: test
path: 10 minutes/test-*
- config_name: 120 minutes
data_files:
- split: train
path: 120 minutes/train-*
- split: test
path: 120 minutes/test-*
- config_name: 240 minutes
data_files:
- split: train
path: 240 minutes/train-*
- split: test
path: 240 minutes/test-*
- config_name: 60 minutes
data_files:
- split: train
path: 60 minutes/train-*
- split: test
path: 60 minutes/test-*
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
license: apache-2.0
language:
- ak
---
# Dataset Card for the Akan Data Efficiency Benchmark
<!-- Provide a quick summary of the dataset. -->
This is the Ewe Data Efficiency Benchmark, designed to evaluate the performance of automatic speech recognition (ASR) models in low-resource settings. It consists of unique MP3 audio files paired with text transcriptions. Each audio sample is accompanied by metadata, including the recording environment, duration, and speaker demographics such as age and gender.
The dataset provides transcribed audio at four training scales, offering a valuable resource for training and evaluating ASR models in scenarios with limited annotated speech:
- 10 minutes
- 60 minutes
- 120 minutes
- 240 minutes
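Each fixed-minute subset can be thought of as a duration-budgeted sample of the full training pool. The following is a minimal sketch (a hypothetical helper, not part of the dataset tooling) of how such a subset could be assembled from per-clip durations:

```python
def select_budget(durations, budget_seconds):
    """Greedily pick clip indices until the duration budget is filled."""
    chosen, total = [], 0.0
    for i, d in enumerate(durations):
        if total + d > budget_seconds:
            continue  # skip clips that would overshoot the budget
        chosen.append(i)
        total += d
    return chosen, total

# e.g. a 10-minute (600 s) subset from a pool of clip durations
clips = [34.0, 120.5, 88.2, 300.0, 45.3, 60.0, 15.0]
idx, total = select_budget(clips, 600.0)  # keeps the first five clips, 588.0 s
```

The actual subsets may have been drawn differently (e.g. balancing speakers or environments); this only illustrates the duration-budget idea.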
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
The Ewe Data Efficiency Benchmark is a speech recognition dataset designed to evaluate how well automatic speech recognition (ASR) models perform under limited data conditions. While many state-of-the-art ASR models rely on large volumes of transcribed audio for training, such resources are scarce or nonexistent for the majority of the approximately 2,000 languages spoken across Africa. This benchmark specifically addresses that gap by encouraging the development of ASR systems that are data-efficient and effective in low-resource settings. The benchmark provides transcribed Ewe audio at four different scales—10, 60, 120, and 240 minutes—allowing for systematic evaluation of model performance as a function of available data.
- **Curated by:** Makerere AI Lab
- **Funded by:** Gates Foundation
- **Shared by:** Makerere AI Lab
- **Language(s) (NLP):** Ewe
- **License:** Apache 2.0
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
The dataset can be used to evaluate the data efficiency of different ASR models.
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
The dataset should be used for training ASR models.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
The dataset should not be used to re-identify the people behind the audios.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
A typical data point comprises the path to the audio file and its transcription. Additional fields include environment, age, gender, year, and duration. Field names follow the dataset schema (values below are illustrative):
```
{
    'File No.': 1,
    'audio': {
        'path': 'ewe_data_efficiency_benchmark.mp3',
        'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
        'sampling_rate': 16000
    },
    'Transcriptions': 'ɖeviwo ɖekaɖeka nɔ be adre wo le xexea bublɔ lada dzi kotokuwo tse le wobe ŋgɔ ye wokpɔ dzidzɔ kpakpakpa wo le wobe ɖokui tse kpɔ',
    'SPEAKER_ID': 384,
    'ENVIRONMENT': 'Indoor',
    'AGE': 20,
    'GENDER': 'Female',
    'duration': 34.0,
    'YEAR': 2023
}
```
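The `duration` field is consistent with the decoded audio: it equals the number of samples in the array divided by the sampling rate. A quick check (the sample count here is hypothetical):

```python
# duration in seconds = number of samples / sampling rate
num_samples = 544_000           # hypothetical length of the decoded array
sampling_rate = 16_000          # all clips are sampled at 16 kHz
duration = num_samples / sampling_rate  # 34.0 seconds
```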
## Data Fields
``File No. (int)``: Unique identifier for each audio file.
``audio (dict)``: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so always query the sample index before the `"audio"` column: `dataset[0]["audio"]` should be preferred over `dataset["audio"][0]`.
``Transcriptions (string)``: The text corresponding to the audio.
``AGE (int)``: The age of the speaker.
``GENDER (string)``: The gender of the speaker.
``SPEAKER_ID (int)``: Unique identifier for each speaker.
``ENVIRONMENT (string)``: The environment in which the audio was recorded.
``YEAR (int)``: The year in which the audio was recorded.
``duration (float)``: The duration of the audio clip in seconds.
## Data Splits
Each configuration provides a train split of the stated size and shares a common test split of 241 examples:

| Config | Train examples | Test examples |
|---|---|---|
| 10 minutes | 32 | 241 |
| 60 minutes | 185 | 241 |
| 120 minutes | 368 | 241 |
| 240 minutes | 739 | 241 |
## Data Loading
The following data loading steps are advised by Hugging Face, accompanied by an example code snippet that shows how to put them into practice. Note that `use_auth_token` is deprecated in recent versions of `datasets`; pass `token=True` instead.
```
from datasets import load_dataset

ds = load_dataset("asr-africa/AkanDataEfficientBenchmark", "10 minutes", use_auth_token=True)
```
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The dataset was curated to evaluate the data efficiency of ASR models on Ewe. Most ASR models perform well when large amounts of data are available. However, for most African languages such as Ewe, transcribed data is extremely scarce. This dataset was created to encourage researchers to develop data-efficient models that reflect the setting of most African languages.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]