Dataset viewer (condensed): two columns, id (string, 8–11 characters) and audio (clip durations 4.17–12 s); the preview lists rows ai_00001 through ai_00035 followed by human_00001 through human_00065.
📚 Audio Turing Test Audios
A high‑quality, multidimensional Chinese audio corpus generated from textual transcripts, designed to evaluate the human-likeness and naturalness of Text-to-Speech (TTS) systems—the “Audio Turing Test.”
About Audio Turing Test (ATT)
ATT is an evaluation framework featuring a standardized human evaluation protocol and an accompanying dataset, addressing the lack of unified evaluation standards in TTS research. To support rapid iteration and evaluation, we trained the Auto-ATT model on top of Qwen2-Audio-7B, enabling model-as-a-judge evaluation on the ATT dataset. Full details and related resources are available in the ATT Collection.
Dataset Description
The dataset includes 104 "trap" audio clips for attentiveness checks during evaluations:
- 35 flawed synthetic audio clips: intentionally synthesized with obvious flaws and unnaturalness.
- 69 authentic human recordings: genuine human speech, serving as control samples (the two groups are distinguishable by id prefix; see the sketch below).
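As noted above, the id prefix encodes the group. A minimal loading-and-splitting sketch, assuming the Hugging Face datasets library and a single train split (the split name is an assumption, not confirmed by this card):

```python
from datasets import load_dataset

# Load the corpus from the Hugging Face Hub (split name assumed).
ds = load_dataset("Meituan/Audio-Turing-Test-Audios", split="train")

# Split by id prefix: "ai_" marks flawed synthetic clips,
# "human_" marks authentic human recordings.
flawed = ds.filter(lambda row: row["id"].startswith("ai_"))
authentic = ds.filter(lambda row: row["id"].startswith("human_"))

print(len(flawed), "flawed synthetic clips")   # expected: 35
print(len(authentic), "authentic recordings")  # expected: 69
```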
How to Use This Dataset
- Evaluate: Score your own or existing TTS audio clips with our Auto-ATT evaluation model (a hedged inference sketch follows this list).
- Benchmark: Compare the resulting scores against the reference audio samples from the top-performing TTS systems described in our research paper, and against these "trap" audio clips.
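A sketch of the Evaluate step: since Auto-ATT is trained from Qwen2-Audio-7B, the standard Qwen2-Audio inference path in transformers should apply. The checkpoint id, prompt, and output format below are illustrative assumptions, not the published interface:

```python
import librosa
from transformers import AutoProcessor, Qwen2AudioForConditionalGeneration

# Hypothetical checkpoint id; substitute the released Auto-ATT weights.
MODEL_ID = "Meituan/Auto-ATT"

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = Qwen2AudioForConditionalGeneration.from_pretrained(MODEL_ID)

# Resample the clip to the rate the feature extractor expects.
audio, _ = librosa.load("my_tts_clip.wav",
                        sr=processor.feature_extractor.sampling_rate)

# Illustrative judging prompt; the actual Auto-ATT prompt may differ.
prompt = "<|audio_bos|><|AUDIO|><|audio_eos|>Rate the human-likeness of this speech."
inputs = processor(text=prompt, audios=audio, return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=16)
# Strip the prompt tokens and decode only the model's answer.
answer = processor.batch_decode(generated[:, inputs.input_ids.size(1):],
                                skip_special_tokens=True)[0]
print(answer)
```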
Data Format
Audio files are provided in high-quality .wav format.
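When the corpus is loaded with the datasets library, the audio column decodes each .wav to a raw waveform plus its sampling rate, so no manual file parsing is needed. A small access sketch (the train split name is again an assumption):

```python
from datasets import load_dataset

ds = load_dataset("Meituan/Audio-Turing-Test-Audios", split="train")

row = ds[0]
waveform = row["audio"]["array"]       # 1-D float array of samples
rate = row["audio"]["sampling_rate"]   # samples per second

# Clip duration in seconds (the viewer reports roughly 4-12 s per clip).
print(row["id"], len(waveform) / rate)
```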
Citation
If you use this dataset, please cite:
@software{Audio-Turing-Test-Audios,
  author    = {Wang, Xihuai and Zhao, Ziyi and Ren, Siyu and Zhang, Shao and Li, Song and Li, Xiaoyu and Wang, Ziwen and Qiu, Lin and Wan, Guanglu and Cao, Xuezhi and Cai, Xunliang and Zhang, Weinan},
  title     = {Audio Turing Test: Benchmarking the Human-likeness and Naturalness of Large Language Model-based Text-to-Speech Systems in Chinese},
  year      = {2025},
  url       = {https://huggingface.co/datasets/Meituan/Audio-Turing-Test-Audios},
  publisher = {Hugging Face},
}