---
dataset_info:
- config_name: 10 minutes
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: File No.
    dtype: int64
  - name: ENVIRONMENT
    dtype: string
  - name: YEAR
    dtype: int64
  - name: AGE
    dtype: int64
  - name: GENDER
    dtype: string
  - name: SPEAKER_ID
    dtype: int64
  - name: Transcriptions
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 19890637.0
    num_examples: 32
  - name: test
    num_bytes: 152943614.0
    num_examples: 241
  download_size: 168193251
  dataset_size: 172834251.0
- config_name: 120 minutes
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: File No.
    dtype: int64
  - name: ENVIRONMENT
    dtype: string
  - name: YEAR
    dtype: int64
  - name: AGE
    dtype: int64
  - name: GENDER
    dtype: string
  - name: SPEAKER_ID
    dtype: int64
  - name: Transcriptions
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 230619340.0
    num_examples: 368
  - name: test
    num_bytes: 152943614.0
    num_examples: 241
  download_size: 372662025
  dataset_size: 383562954.0
- config_name: 240 minutes
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: File No.
    dtype: int64
  - name: ENVIRONMENT
    dtype: string
  - name: YEAR
    dtype: int64
  - name: AGE
    dtype: int64
  - name: GENDER
    dtype: string
  - name: SPEAKER_ID
    dtype: int64
  - name: Transcriptions
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 461415371.0
    num_examples: 739
  - name: test
    num_bytes: 152943614.0
    num_examples: 241
  download_size: 597533036
  dataset_size: 614358985.0
- config_name: 60 minutes
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: File No.
    dtype: int64
  - name: ENVIRONMENT
    dtype: string
  - name: YEAR
    dtype: int64
  - name: AGE
    dtype: int64
  - name: GENDER
    dtype: string
  - name: SPEAKER_ID
    dtype: int64
  - name: Transcriptions
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 115549400.0
    num_examples: 185
  - name: test
    num_bytes: 152943614.0
    num_examples: 241
  download_size: 260859532
  dataset_size: 268493014.0
- config_name: default
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: File No.
    dtype: int64
  - name: ENVIRONMENT
    dtype: string
  - name: YEAR
    dtype: int64
  - name: AGE
    dtype: int64
  - name: GENDER
    dtype: string
  - name: SPEAKER_ID
    dtype: int64
  - name: Transcriptions
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 19890637
    num_examples: 32
  - name: test
    num_bytes: 150820301.5320057
    num_examples: 241
  download_size: 168193251
  dataset_size: 170710938.5320057
configs:
- config_name: 10 minutes
  data_files:
  - split: train
    path: 10 minutes/train-*
  - split: test
    path: 10 minutes/test-*
- config_name: 120 minutes
  data_files:
  - split: train
    path: 120 minutes/train-*
  - split: test
    path: 120 minutes/test-*
- config_name: 240 minutes
  data_files:
  - split: train
    path: 240 minutes/train-*
  - split: test
    path: 240 minutes/test-*
- config_name: 60 minutes
  data_files:
  - split: train
    path: 60 minutes/train-*
  - split: test
    path: 60 minutes/test-*
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: apache-2.0
language:
- ak
---

# Dataset Card for Akan Data Efficiency Benchmark Dataset

This is the Ewe Data Efficiency Benchmark, designed to evaluate the performance of automatic speech recognition (ASR) models in low-resource settings. It consists of unique MP3 audio files paired with corresponding text transcriptions. Each audio sample is accompanied by metadata, including the recording environment, duration, and speaker demographic information such as age and gender.
The dataset contains four subsets of transcribed audio data, providing a valuable resource for training and evaluating ASR models in scenarios with limited annotated speech. The dataset is split into:

- 10 minutes
- 60 minutes
- 120 minutes
- 240 minutes

## Dataset Details

### Dataset Description

The Ewe Data Efficiency Benchmark is a speech recognition dataset designed to evaluate how well automatic speech recognition (ASR) models perform under limited data conditions. While many state-of-the-art ASR models rely on large volumes of transcribed audio for training, such resources are scarce or nonexistent for the majority of the approximately 2,000 languages spoken across Africa. This benchmark specifically addresses that gap by encouraging the development of ASR systems that are data-efficient and effective in low-resource settings. The benchmark provides transcribed Ewe audio at four different scales (10, 60, 120, and 240 minutes), allowing for systematic evaluation of model performance as a function of available data.

- **Curated by:** Makerere AI Lab
- **Funded by:** Gates Foundation
- **Shared by:** Makerere AI Lab
- **Language(s) (NLP):** Ewe
- **License:** Apache 2.0

### Dataset Sources

- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]
- **Demo:** [More Information Needed]

## Uses

The dataset can be used to evaluate the data efficiency of different ASR models.

### Direct Use

The dataset should be used for training ASR models.

### Out-of-Scope Use

The dataset should not be used to identify or re-identify the speakers behind the recordings.

## Dataset Structure

A typical data point comprises the path to the audio file and its transcription. Additional fields include environment, age, gender, and duration.
```
{
  'File No': 'ewe_data_efficiency_benchmark.mp3',
  'audio': {
    'path': 'ewe_data_efficiency_benchmark.mp3',
    'array': array([-0.00048828, -0.00018311, -0.00137329, ...,  0.00079346,
            0.00091553,  0.00085449], dtype=float32),
    'sampling_rate': 16000
  },
  'transcript': 'ɖeviwo ɖekaɖeka nɔ be adre wo le xexea bublɔ lada dzi kotokuwo tse le wobe ŋgɔ ye wokpɔ dzidzɔ kpakpakpa wo le wobe ɖokui tse kpɔ',
  'Speaker ID': 384,
  'environment': 'Indoor',
  'age': 20,
  'gender': 'Female',
  'duration': 34,
  'year': 2023
}
```

## Data Fields

- ``File No (string)``: Unique identifier for each audio file.
- ``Audio (dict)``: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
- ``Transcription (string)``: The text corresponding to the audio.
- ``Age (int)``: The age of the speaker.
- ``Gender (string)``: The gender of the speaker.
- ``Speaker ID (int)``: Unique identifier for each speaker.
- ``Environment (string)``: The environment in which the audio was recorded.
- ``Year (int)``: The year in which the audio was recorded.

## Data Splits

The speech data has been subdivided into train and test splits.

## Data Loading Recommended by Hugging Face

The following data loading steps are advised by Hugging Face. They are accompanied by an example code snippet that shows how to put them into practice.
```
from datasets import load_dataset

ds = load_dataset("asr-africa/AkanDataEfficientBenchmark", "10 minutes", use_auth_token=True)
```

On recent versions of `datasets`, pass `token=True` instead of the deprecated `use_auth_token=True`.

## Dataset Creation

### Curation Rationale

The dataset was curated to evaluate the data efficiency of ASR models on Ewe. Most ASR models perform well when large amounts of data are available. However, for most African languages such as Ewe, transcribed data is extremely scarce. This dataset was created to encourage researchers to develop data-efficient models that reflect the setting of most African languages.

### Source Data

#### Data Collection and Processing

[More Information Needed]

#### Who are the source data producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

#### Personal and Sensitive Information

[More Information Needed]

## Bias, Risks, and Limitations

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary

[More Information Needed]

## More Information

[More Information Needed]

## Dataset Card Authors

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
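## Example Usage

The loading and indexing guidance above can be combined into a short, hedged sketch. The repository id and config names are taken from the Data Loading section of this card; the helper function names (`load_benchmark`, `total_minutes`) are purely illustrative and not part of the dataset or the `datasets` library:

```python
def load_benchmark(config="10 minutes"):
    """Load one config of the benchmark.

    Requires network access and, if the dataset is gated, a Hugging Face
    login (`huggingface-cli login`). On older versions of `datasets`,
    pass use_auth_token=True instead of token=True.
    """
    from datasets import load_dataset  # imported lazily so total_minutes works offline
    return load_dataset("asr-africa/AkanDataEfficientBenchmark", config, token=True)

def total_minutes(durations_seconds):
    """Sum per-clip durations (the `duration` field, in seconds) in minutes."""
    return sum(durations_seconds) / 60.0

# Example (network required):
#   ds = load_benchmark("60 minutes")
#   sample = ds["train"][0]["audio"]   # index the row first: decodes one clip,
#                                      # not the entire audio column
#   total_minutes(ds["train"]["duration"])
```

Summing the `duration` column of each config's train split is one way to verify that a subset actually holds roughly the advertised amount of audio (10, 60, 120, or 240 minutes).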