---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: category
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: all
    num_bytes: 91813
    num_examples: 290
  - name: easy
    num_bytes: 9124
    num_examples: 50
  - name: medium
    num_bytes: 20234
    num_examples: 50
  - name: hard
    num_bytes: 27971
    num_examples: 50
  - name: scbx
    num_bytes: 17314
    num_examples: 50
  - name: name
    num_bytes: 10118
    num_examples: 50
  - name: other
    num_bytes: 7052
    num_examples: 40
  download_size: 103240
  dataset_size: 183626
configs:
- config_name: default
  data_files:
  - split: all
    path: data/all-*
  - split: easy
    path: data/easy-*
  - split: medium
    path: data/medium-*
  - split: hard
    path: data/hard-*
  - split: scbx
    path: data/scbx-*
  - split: name
    path: data/name-*
  - split: other
    path: data/other-*
---
# Thai-TTS-Intelligibility-Eval
Thai-TTS-Intelligibility-Eval is a curated evaluation set for measuring intelligibility of Thai Text-to-Speech (TTS) systems.
All 290 items are short, challenging phrases that commonly trip up grapheme-to-phoneme converters, prosody models, or pronunciation lexicons.
It is not intended for training; use it purely for benchmarking and regression tests.
## Dataset Summary
| Split | #Utterances | Description |
|---|---|---|
| easy | 50 | Everyday phrases that most TTS systems should read correctly |
| medium | 50 | More challenging than easy |
| hard | 50 | Hard phrases, e.g., mixed Thai and English and unique names |
| scbx | 50 | SCBX-specific terminology, products, and names |
| name | 50 | Synthetic Thai personal names (mixed Thai & foreign roots) |
| other | 40 | Miscellaneous edge cases not covered above |
| Total | 290 | |
Each record contains:

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique identifier |
| `text` | string | Sentence/phrase |
| `category` | string | One of `easy`, `medium`, `hard`, `scbx`, `name`, `other` |
## Loading With 🤗 datasets
```python
from datasets import load_dataset

ds = load_dataset("scb10x/thai-tts-intelligiblity-eval")

ds_scbx = ds["scbx"]
print(ds_scbx[0])
# {'id': '53ef39464d9c1e6f', 'text': '...', 'category': 'scbx'}
```
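
The combined `all` split carries every record together with its `category` label, so individual categories can also be pulled out with the standard `Dataset.filter` API; a quick sketch:

```python
from datasets import load_dataset

ds = load_dataset("scb10x/thai-tts-intelligiblity-eval")

# Keep only the synthetic personal-name phrases from the combined split.
name_subset = ds["all"].filter(lambda row: row["category"] == "name")
print(len(name_subset))  # 50, matching the `name` split
```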
## Intended Use
- Objective evaluation
  - Compute WER/CER between automatic transcripts of your TTS output and the gold reference text (see the sketch after this list).
  - Code: https://github.com/scb-10x/thai-tts-eval/tree/main/intelligibility
- Subjective evaluation
  - Conduct human listening tests (MOS, ABX, etc.); the dataset is small enough for quick rounds.
  - Future work
- Regression testing
  - Track intelligibility across model versions with a fixed set of hard sentences.
  - Future work
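
A minimal scoring sketch for the objective protocol (not the exact pipeline in the linked repo): transcribe your synthesized audio with any Thai ASR system, collect the transcripts into a dict keyed by record `id` (the `transcripts` dict below is a hypothetical placeholder), and score them against the reference text with `jiwer`:

```python
from datasets import load_dataset
import jiwer  # pip install jiwer

ds = load_dataset("scb10x/thai-tts-intelligiblity-eval")

# Hypothetical: {record_id: ASR transcript of your TTS audio for that record}.
transcripts = {}

# Character error rate per split; lower is better.
for split in ["easy", "medium", "hard", "scbx", "name", "other"]:
    pairs = [(r["text"], transcripts[r["id"]]) for r in ds[split] if r["id"] in transcripts]
    if not pairs:
        continue  # nothing transcribed for this split yet
    refs, hyps = zip(*pairs)
    print(f"{split}: CER = {jiwer.cer(list(refs), list(hyps)):.4f}")
```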
## CER Evaluation Results
- CER: lower is better
| System | All | Easy | Medium | Hard | SCBX | Name | Other |
|---|---|---|---|---|---|---|---|
| Azure Premwadee | 9.39 | 2.87 | 2.92 | 13.80 | 10.44 | 13.07 | 7.57 |
| facebook-mms-tts-tha | 28.47 | 10.31 | 12.40 | 38.83 | 36.04 | 26.33 | 30.83 |
| VIZINTZOR-MMS-TTS-THAI-FEMALEV1 | 27.42 | 13.30 | 13.13 | 30.92 | 34.76 | 25.53 | 54.60 |
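
For reference, audio for a baseline like facebook-mms-tts-tha (second row above) can be generated with 🤗 Transformers' VITS support; a minimal sketch, assuming the `facebook/mms-tts-tha` checkpoint and the `soundfile` package:

```python
import torch
import soundfile as sf  # pip install soundfile
from datasets import load_dataset
from transformers import VitsModel, AutoTokenizer

ds = load_dataset("scb10x/thai-tts-intelligiblity-eval")

model = VitsModel.from_pretrained("facebook/mms-tts-tha")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-tha")

# Synthesize each phrase in the hard split and write one WAV per record id.
for record in ds["hard"]:
    inputs = tokenizer(record["text"], return_tensors="pt")
    with torch.no_grad():
        waveform = model(**inputs).waveform[0]
    sf.write(f"{record['id']}.wav", waveform.numpy(), model.config.sampling_rate)
```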