--- dataset_info: - config_name: Afrikaans features: - name: id dtype: int32 - name: num_samples dtype: int32 - name: path dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: transcription dtype: string - name: raw_transcription dtype: string - name: gender dtype: class_label: names: '0': male '1': female '2': other - name: lang_id dtype: class_label: names: '0': af_za '1': am_et '2': ar_eg '3': as_in '4': ast_es '5': az_az '6': be_by '7': bg_bg '8': bn_in '9': bs_ba '10': ca_es '11': ceb_ph '12': ckb_iq '13': cmn_hans_cn '14': cs_cz '15': cy_gb '16': da_dk '17': de_de '18': el_gr '19': en_us '20': es_419 '21': et_ee '22': fa_ir '23': ff_sn '24': fi_fi '25': fil_ph '26': fr_fr '27': ga_ie '28': gl_es '29': gu_in '30': ha_ng '31': he_il '32': hi_in '33': hr_hr '34': hu_hu '35': hy_am '36': id_id '37': ig_ng '38': is_is '39': it_it '40': ja_jp '41': jv_id '42': ka_ge '43': kam_ke '44': kea_cv '45': kk_kz '46': km_kh '47': kn_in '48': ko_kr '49': ky_kg '50': lb_lu '51': lg_ug '52': ln_cd '53': lo_la '54': lt_lt '55': luo_ke '56': lv_lv '57': mi_nz '58': mk_mk '59': ml_in '60': mn_mn '61': mr_in '62': ms_my '63': mt_mt '64': my_mm '65': nb_no '66': ne_np '67': nl_nl '68': nso_za '69': ny_mw '70': oc_fr '71': om_et '72': or_in '73': pa_in '74': pl_pl '75': ps_af '76': pt_br '77': ro_ro '78': ru_ru '79': sd_in '80': sk_sk '81': sl_si '82': sn_zw '83': so_so '84': sr_rs '85': sv_se '86': sw_ke '87': ta_in '88': te_in '89': tg_tj '90': th_th '91': tr_tr '92': uk_ua '93': umb_ao '94': ur_pk '95': uz_uz '96': vi_vn '97': wo_sn '98': xh_za '99': yo_ng '100': yue_hant_hk '101': zu_za '102': all - name: language dtype: string - name: lang_group_id dtype: class_label: names: '0': western_european_we '1': eastern_european_ee '2': central_asia_middle_north_african_cmn '3': sub_saharan_african_ssa '4': south_asian_sa '5': south_east_asian_sea '6': chinese_japanase_korean_cjk - name: length dtype: float64 splits: - name: train num_bytes: 116033281.68604651 num_examples: 285 - name: test num_bytes: 103728931.0 num_examples: 264 download_size: 197536592 dataset_size: 219762212.6860465 - config_name: Akan features: - name: audio dtype: audio: sampling_rate: 16000 - name: File No. 
dtype: int64 - name: ENVIRONMENT dtype: string - name: YEAR dtype: int64 - name: AGE dtype: int64 - name: GENDER dtype: string - name: SPEAKER_ID dtype: int64 - name: Transcriptions dtype: string - name: duration dtype: float64 splits: - name: train num_bytes: 115549400.0 num_examples: 185 - name: test num_bytes: 152943614.0 num_examples: 241 download_size: 260859532 dataset_size: 268493014.0 - config_name: Amharic features: - name: audio dtype: audio: sampling_rate: 16000 - name: transcription dtype: string - name: duration dtype: float64 splits: - name: train num_bytes: 121002029.011 num_examples: 335 - name: test num_bytes: 361200086.6 num_examples: 1000 download_size: 455604102 dataset_size: 482202115.61100006 - config_name: Bambara features: - name: audio dtype: audio: sampling_rate: 16000 - name: transcription dtype: string - name: duration dtype: float64 splits: - name: train num_bytes: 333204719.9915005 num_examples: 1200 - name: test num_bytes: 1568283548.7599957 num_examples: 5648 download_size: 1729001274 dataset_size: 1901488268.7514963 - config_name: Bemba features: - name: transcription dtype: string - name: speaker_id dtype: int64 - name: duration dtype: float64 - name: audio dtype: audio: sampling_rate: 16000 splits: - name: train num_bytes: 119252799.3676658 num_examples: 510 - name: test num_bytes: 671293880.966 num_examples: 2779 download_size: 756805690 dataset_size: 790546680.3336657 - config_name: Ewe features: - name: audio dtype: audio: sampling_rate: 16000 - name: File No. dtype: int64 - name: ENVIRONMENT dtype: string - name: YEAR dtype: int64 - name: AGE dtype: int64 - name: GENDER dtype: string - name: SPEAKER_ID dtype: int64 - name: Transcriptions dtype: string - name: duration dtype: float64 splits: - name: train num_bytes: 115549400.0 num_examples: 185 - name: test num_bytes: 152943614.0 num_examples: 241 download_size: 260859532 dataset_size: 268493014.0 - config_name: Fula features: - name: id dtype: int32 - name: num_samples dtype: int32 - name: audio dtype: audio: sampling_rate: 16000 - name: transcription dtype: string - name: raw_transcription dtype: string - name: gender dtype: class_label: names: '0': male '1': female '2': other - name: language dtype: string - name: duration dtype: float64 splits: - name: train num_bytes: 116292927.69397219 num_examples: 235 - name: test num_bytes: 302858767.0 num_examples: 660 download_size: 360864712 dataset_size: 419151694.6939722 - config_name: Hausa features: - name: audio dtype: audio - name: text dtype: string - name: gender dtype: string - name: age_range dtype: string - name: duration dtype: float64 splits: - name: train num_bytes: 349994672.18 num_examples: 1150 - name: test num_bytes: 1141286974.5 num_examples: 3750 download_size: 1408400985 dataset_size: 1491281646.68 - config_name: Igbo features: - name: audio dtype: audio - name: text dtype: string - name: gender dtype: string - name: age_range dtype: string - name: duration dtype: float64 splits: - name: train num_bytes: 372393502.2 num_examples: 1000 - name: test num_bytes: 1489574008.8 num_examples: 4000 download_size: 1700050259 dataset_size: 1861967511.0 - config_name: Kinyarwanda features: - name: audio dtype: audio: sampling_rate: 16000 - name: transcription dtype: string - name: age dtype: string - name: gender dtype: string - name: duration dtype: float64 splits: - name: train num_bytes: 121355626.14724463 num_examples: 688 - name: test num_bytes: 590903121.5018452 num_examples: 3350 download_size: 670156333 dataset_size: 712258747.6490898 - 
config_name: Luganda features: - name: id dtype: int64 - name: audio dtype: audio: sampling_rate: 16000 - name: raw_sentence dtype: string - name: sentence dtype: string - name: lang_id dtype: int64 - name: gender dtype: string - name: duration dtype: float64 splits: - name: train num_bytes: 116073271.1586968 num_examples: 205 - name: test num_bytes: 321209611.0 num_examples: 612 download_size: 395355386 dataset_size: 437282882.1586968 - config_name: Oromo features: - name: audio dtype: audio - name: transcription dtype: string - name: duration dtype: float64 splits: - name: train num_bytes: 160709015.62228024 num_examples: 505 - name: test num_bytes: 475515162.25 num_examples: 1478 download_size: 581969323 dataset_size: 636224177.8722802 - config_name: Shona features: - name: creator dtype: string - name: project_name dtype: string - name: speaker_id dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: image_path dtype: string - name: transcription dtype: string - name: locale dtype: string - name: gender dtype: string - name: age dtype: string - name: year dtype: int64 - name: duration dtype: float64 - name: transcription_length dtype: int64 splits: - name: train num_bytes: 345656456.1824605 num_examples: 175 - name: test num_bytes: 1738158179.660373 num_examples: 880 download_size: 1753390988 dataset_size: 2083814635.8428335 - config_name: Wolof features: - name: id dtype: int32 - name: num_samples dtype: int32 - name: path dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: transcription dtype: string - name: raw_transcription dtype: string - name: gender dtype: class_label: names: '0': male '1': female '2': other - name: lang_id dtype: class_label: names: '0': af_za '1': am_et '2': ar_eg '3': as_in '4': ast_es '5': az_az '6': be_by '7': bg_bg '8': bn_in '9': bs_ba '10': ca_es '11': ceb_ph '12': ckb_iq '13': cmn_hans_cn '14': cs_cz '15': cy_gb '16': da_dk '17': de_de '18': el_gr '19': en_us '20': es_419 '21': et_ee '22': fa_ir '23': ff_sn '24': fi_fi '25': fil_ph '26': fr_fr '27': ga_ie '28': gl_es '29': gu_in '30': ha_ng '31': he_il '32': hi_in '33': hr_hr '34': hu_hu '35': hy_am '36': id_id '37': ig_ng '38': is_is '39': it_it '40': ja_jp '41': jv_id '42': ka_ge '43': kam_ke '44': kea_cv '45': kk_kz '46': km_kh '47': kn_in '48': ko_kr '49': ky_kg '50': lb_lu '51': lg_ug '52': ln_cd '53': lo_la '54': lt_lt '55': luo_ke '56': lv_lv '57': mi_nz '58': mk_mk '59': ml_in '60': mn_mn '61': mr_in '62': ms_my '63': mt_mt '64': my_mm '65': nb_no '66': ne_np '67': nl_nl '68': nso_za '69': ny_mw '70': oc_fr '71': om_et '72': or_in '73': pa_in '74': pl_pl '75': ps_af '76': pt_br '77': ro_ro '78': ru_ru '79': sd_in '80': sk_sk '81': sl_si '82': sn_zw '83': so_so '84': sr_rs '85': sv_se '86': sw_ke '87': ta_in '88': te_in '89': tg_tj '90': th_th '91': tr_tr '92': uk_ua '93': umb_ao '94': ur_pk '95': uz_uz '96': vi_vn '97': wo_sn '98': xh_za '99': yo_ng '100': yue_hant_hk '101': zu_za '102': all - name: language dtype: string - name: lang_group_id dtype: class_label: names: '0': western_european_we '1': eastern_european_ee '2': central_asia_middle_north_african_cmn '3': sub_saharan_african_ssa '4': south_asian_sa '5': south_east_asian_sea '6': chinese_japanase_korean_cjk - name: duration dtype: float64 splits: - name: train num_bytes: 119189484.18165863 num_examples: 270 - name: test num_bytes: 201476895.0 num_examples: 371 download_size: 263284028 dataset_size: 320666379.1816586 - config_name: Xhosa features: - name: id dtype: int32 - name: num_samples dtype: 
int32 - name: path dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: transcription dtype: string - name: raw_transcription dtype: string - name: gender dtype: class_label: names: '0': male '1': female '2': other - name: lang_id dtype: class_label: names: '0': af_za '1': am_et '2': ar_eg '3': as_in '4': ast_es '5': az_az '6': be_by '7': bg_bg '8': bn_in '9': bs_ba '10': ca_es '11': ceb_ph '12': ckb_iq '13': cmn_hans_cn '14': cs_cz '15': cy_gb '16': da_dk '17': de_de '18': el_gr '19': en_us '20': es_419 '21': et_ee '22': fa_ir '23': ff_sn '24': fi_fi '25': fil_ph '26': fr_fr '27': ga_ie '28': gl_es '29': gu_in '30': ha_ng '31': he_il '32': hi_in '33': hr_hr '34': hu_hu '35': hy_am '36': id_id '37': ig_ng '38': is_is '39': it_it '40': ja_jp '41': jv_id '42': ka_ge '43': kam_ke '44': kea_cv '45': kk_kz '46': km_kh '47': kn_in '48': ko_kr '49': ky_kg '50': lb_lu '51': lg_ug '52': ln_cd '53': lo_la '54': lt_lt '55': luo_ke '56': lv_lv '57': mi_nz '58': mk_mk '59': ml_in '60': mn_mn '61': mr_in '62': ms_my '63': mt_mt '64': my_mm '65': nb_no '66': ne_np '67': nl_nl '68': nso_za '69': ny_mw '70': oc_fr '71': om_et '72': or_in '73': pa_in '74': pl_pl '75': ps_af '76': pt_br '77': ro_ro '78': ru_ru '79': sd_in '80': sk_sk '81': sl_si '82': sn_zw '83': so_so '84': sr_rs '85': sv_se '86': sw_ke '87': ta_in '88': te_in '89': tg_tj '90': th_th '91': tr_tr '92': uk_ua '93': umb_ao '94': ur_pk '95': uz_uz '96': vi_vn '97': wo_sn '98': xh_za '99': yo_ng '100': yue_hant_hk '101': zu_za '102': all - name: language dtype: string - name: lang_group_id dtype: class_label: names: '0': western_european_we '1': eastern_european_ee '2': central_asia_middle_north_african_cmn '3': sub_saharan_african_ssa '4': south_asian_sa '5': south_east_asian_sea '6': chinese_japanase_korean_cjk - name: duration dtype: float64 splits: - name: train num_bytes: 119810720.92902482 num_examples: 270 - name: test num_bytes: 436280282.875 num_examples: 1041 download_size: 352509021 dataset_size: 556091003.8040248 - config_name: Yoruba features: - name: audio dtype: audio - name: text dtype: string - name: gender dtype: string - name: age_range dtype: string - name: duration dtype: float64 splits: - name: train num_bytes: 353202052.84 num_examples: 1040 - name: test num_bytes: 1630163320.8 num_examples: 4800 download_size: 1833305483 dataset_size: 1983365373.6399999 - config_name: Zulu features: - name: id dtype: int32 - name: num_samples dtype: int32 - name: path dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: transcription dtype: string - name: raw_transcription dtype: string - name: gender dtype: class_label: names: '0': male '1': female '2': other - name: lang_id dtype: class_label: names: '0': af_za '1': am_et '2': ar_eg '3': as_in '4': ast_es '5': az_az '6': be_by '7': bg_bg '8': bn_in '9': bs_ba '10': ca_es '11': ceb_ph '12': ckb_iq '13': cmn_hans_cn '14': cs_cz '15': cy_gb '16': da_dk '17': de_de '18': el_gr '19': en_us '20': es_419 '21': et_ee '22': fa_ir '23': ff_sn '24': fi_fi '25': fil_ph '26': fr_fr '27': ga_ie '28': gl_es '29': gu_in '30': ha_ng '31': he_il '32': hi_in '33': hr_hr '34': hu_hu '35': hy_am '36': id_id '37': ig_ng '38': is_is '39': it_it '40': ja_jp '41': jv_id '42': ka_ge '43': kam_ke '44': kea_cv '45': kk_kz '46': km_kh '47': kn_in '48': ko_kr '49': ky_kg '50': lb_lu '51': lg_ug '52': ln_cd '53': lo_la '54': lt_lt '55': luo_ke '56': lv_lv '57': mi_nz '58': mk_mk '59': ml_in '60': mn_mn '61': mr_in '62': ms_my '63': mt_mt '64': my_mm '65': nb_no '66': ne_np '67': 
nl_nl '68': nso_za '69': ny_mw '70': oc_fr '71': om_et '72': or_in '73': pa_in '74': pl_pl '75': ps_af '76': pt_br '77': ro_ro '78': ru_ru '79': sd_in '80': sk_sk '81': sl_si '82': sn_zw '83': so_so '84': sr_rs '85': sv_se '86': sw_ke '87': ta_in '88': te_in '89': tg_tj '90': th_th '91': tr_tr '92': uk_ua '93': umb_ao '94': ur_pk '95': uz_uz '96': vi_vn '97': wo_sn '98': xh_za '99': yo_ng '100': yue_hant_hk '101': zu_za '102': all - name: language dtype: string - name: lang_group_id dtype: class_label: names: '0': western_european_we '1': eastern_european_ee '2': central_asia_middle_north_african_cmn '3': sub_saharan_african_ssa '4': south_asian_sa '5': south_east_asian_sea '6': chinese_japanase_korean_cjk - name: duration dtype: float64 splits: - name: train num_bytes: 116296817.93002099 num_examples: 194 - name: test num_bytes: 446892965.0 num_examples: 854 download_size: 510884620 dataset_size: 563189782.930021 configs: - config_name: Afrikaans data_files: - split: train path: Afrikaans/train-* - split: test path: Afrikaans/test-* - config_name: Akan data_files: - split: train path: Akan/train-* - split: test path: Akan/test-* - config_name: Amharic data_files: - split: train path: Amharic/train-* - split: test path: Amharic/test-* - config_name: Bambara data_files: - split: train path: Bambara/train-* - split: test path: Bambara/test-* - config_name: Bemba data_files: - split: train path: Bemba/train-* - split: test path: Bemba/test-* - config_name: Ewe data_files: - split: train path: Ewe/train-* - split: test path: Ewe/test-* - config_name: Fula data_files: - split: train path: Fula/train-* - split: test path: Fula/test-* - config_name: Hausa data_files: - split: train path: Hausa/train-* - split: test path: Hausa/test-* - config_name: Igbo data_files: - split: train path: Igbo/train-* - split: test path: Igbo/test-* - config_name: Kinyarwanda data_files: - split: train path: Kinyarwanda/train-* - split: test path: Kinyarwanda/test-* - config_name: Luganda data_files: - split: train path: Luganda/train-* - split: test path: Luganda/test-* - config_name: Oromo data_files: - split: train path: Oromo/train-* - split: test path: Oromo/test-* - config_name: Shona data_files: - split: train path: Shona/train-* - split: test path: Shona/test-* - config_name: Wolof data_files: - split: train path: Wolof/train-* - split: test path: Wolof/test-* - config_name: Xhosa data_files: - split: train path: Xhosa/train-* - split: test path: Xhosa/test-* - config_name: Yoruba data_files: - split: train path: Yoruba/train-* - split: test path: Yoruba/test-* - config_name: Zulu data_files: - split: train path: Zulu/train-* - split: test path: Zulu/test-*
---

# Dataset Card for Africa ASR Data Efficiency Benchmark Dataset

This dataset is part of the ASR Africa Data Efficiency Benchmark, designed to evaluate the performance of automatic speech recognition (ASR) models in low-resource settings. It consists of unique MP3 audio files paired with corresponding text transcriptions. Each audio sample is accompanied by metadata, including recording environment, duration, and speaker demographic information such as age and gender. The dataset contains one hour of transcribed audio data for each language, providing a valuable resource for training and evaluating ASR models in scenarios with limited annotated speech.
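Each language described in the metadata above is exposed as its own configuration. As a quick sketch (assuming the repository id used in the loading example further below; authentication may be required, as for the loading example), the available configurations can be listed with the `datasets` library:

```
from datasets import get_dataset_config_names

# List the per-language configurations, e.g. "Afrikaans", "Akan", ..., "Zulu".
configs = get_dataset_config_names("asr-africa/ASRAfricaDataEfficiencyBenchmark")
print(configs)
```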
## Dataset Details

### Dataset Description

The ASR Africa Data Efficiency Benchmark is a speech recognition dataset designed to evaluate how well automatic speech recognition (ASR) models perform under limited data conditions. While many state-of-the-art ASR models rely on large volumes of transcribed audio for training, such resources are scarce or nonexistent for the majority of the approximately 2,000 languages spoken across Africa. This benchmark specifically addresses that gap by encouraging the development of ASR systems that are data-efficient and effective in low-resource settings.

- **Curated by:** Makerere AI Lab
- **Funded by:** Gates Foundation
- **Shared by:** Makerere AI Lab
- **Language(s) (NLP):** Afrikaans, Akan, Amharic, Bambara, Bemba, Ewe, Fula, Hausa, Igbo, Kinyarwanda, Luganda, Oromo, Shona, Wolof, Xhosa, Yoruba, Zulu
- **License:** Apache 2.0

### Dataset Sources

- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]
- **Demo:** [More Information Needed]

## Uses

The dataset can be used to evaluate the data efficiency of different ASR models.

### Direct Use

The dataset should be used to train and evaluate data-efficient ASR models.

### Out-of-Scope Use

The dataset should not be used to re-identify the speakers behind the recordings.

## Dataset Structure

A typical data point comprises the path to the audio file and its transcription. Additional fields include environment, age, gender, and duration.

```
{
  'File No': 'Afrikaans_data_efficiency_benchmark.mp3',
  'audio': {
    'path': 'ewe_data_efficiency_benchmark.mp3',
    'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
    'sampling_rate': 16000
  },
  'transcript': 'ɖeviwo ɖekaɖeka nɔ be adre wo le xexea bublɔ lada dzi kotokuwo tse le wobe ŋgɔ ye wokpɔ dzidzɔ kpakpakpa wo le wobe ɖokui tse kpɔ',
  'Speaker ID': 384,
  'gender': 'Female',
  'length': 34,
  'year': 2023
}
```

## Data Fields

- `File No (string)`: Unique identifier for each audio file.
- `audio (dict)`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column, `dataset[0]["audio"]`, the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so always query the sample index before the `"audio"` column: `dataset[0]["audio"]` should be preferred over `dataset["audio"][0]`.
- `transcription (string)`: The text corresponding to the audio.
- `gender (string)`: The gender of the speaker.
- `Speaker ID (int)`: Unique identifier for each speaker.

## Data Splits

The speech data for each language is divided into train and test splits.

## Data Loading

The following example, recommended by Hugging Face, shows how to load a single language configuration with the `datasets` library.

```
from datasets import load_dataset

ds = load_dataset("asr-africa/ASRAfricaDataEfficiencyBenchmark", "Afrikaans", use_auth_token=True)
```
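Building on the loading example above, the sketch below shows one way to inspect an example while following the indexing advice from the Data Fields section, and to tally the audio hours per split from the precomputed `duration` column. Column names vary slightly across configurations (for example, Akan and Ewe use `Transcriptions`, Hausa, Igbo, and Yoruba use `text`, and Afrikaans stores the clip length under `length`), so adjust the field names to the configuration you load.

```
from datasets import load_dataset

# Minimal sketch: the "transcription" and "duration" columns below match
# configurations such as Amharic, Bambara, and Bemba; other configurations
# may use different column names.
ds = load_dataset("asr-africa/ASRAfricaDataEfficiencyBenchmark", "Amharic", use_auth_token=True)

# Index the row first and only then access "audio", so that just this one file is decoded.
sample = ds["train"][0]
print(sample["transcription"])
print(sample["audio"]["sampling_rate"])  # 16000

# Approximate audio volume per split, in hours, from the "duration" column.
for split_name, split in ds.items():
    hours = sum(split["duration"]) / 3600
    print(f"{split_name}: {len(split)} clips, {hours:.2f} hours")
```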
## Dataset Creation

### Curation Rationale

The dataset was curated to train and evaluate ASR models for data efficiency. Most ASR models perform well when large amounts of data are available; however, for most African languages, such as Ewe and Afrikaans, transcribed data is extremely scarce. This dataset was created to encourage researchers to develop data-efficient models that reflect the setting of most African languages.

### Source Data

The dataset was obtained from other open-source ASR datasets.

#### Data Collection and Processing

[More Information Needed]

#### Who are the source data producers?

[More Information Needed]

### Annotations

#### Annotation process

#### Who are the annotators?

#### Personal and Sensitive Information

## Bias, Risks, and Limitations

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation

## Dataset Card Authors

Makerere AI Lab

## Dataset Card Contact

denismusinguzi2511@gmail.com