---
dataset_info:
  features:
  - name: ID
    dtype: string
  - name: speaker_id
    dtype: string
  - name: Language
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: transcript
    dtype: string
  - name: length
    dtype: float32
  - name: dataset_name
    dtype: string
  - name: confidence_score
    dtype: float64
  splits:
  - name: train
    num_examples: 100
  download_size: 0
  dataset_size: 0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train/*.parquet
---

# Thanarit/Thai-Voice

Combined Thai audio dataset from multiple sources.

## Dataset Details

- **Total samples**: 100
- **Total duration**: 0.11 hours
- **Language**: Thai (th)
- **Audio format**: 16kHz mono WAV
- **Volume normalization**: -20dB

## Sources

Processed 1 dataset in streaming mode.

## Source Datasets

1. **GigaSpeech2**: Large-scale multilingual speech corpus

## Usage

```python
from datasets import load_dataset

# Load with streaming to avoid downloading everything
dataset = load_dataset("Thanarit/Thai-Voice-10000000", streaming=True)

# Iterate through samples
for sample in dataset['train']:
    print(sample['ID'], sample['transcript'][:50])
    # Process audio: sample['audio']
    break
```

## Schema

- `ID`: Unique identifier (S1, S2, S3, ...)
- `speaker_id`: Speaker identifier (SPK_00001, SPK_00002, ...)
- `Language`: Language code (always "th" for Thai)
- `audio`: Audio data with 16kHz sampling rate
- `transcript`: Text transcript of the audio
- `length`: Duration in seconds
- `dataset_name`: Source dataset name (e.g., "GigaSpeech2", "ProcessedVoiceTH", "MozillaCommonVoice")
- `confidence_score`: Confidence score of the transcript (0.0-1.0); see the filtering sketch at the end of this card
  - 1.0: Original transcript from the source dataset
  - <1.0: STT-generated transcript
  - 0.0: Fallback transcript (e.g., [NO_TRANSCRIPT])

## Processing Details

This dataset was created using streaming processing to handle large-scale data without requiring full downloads. Audio has been standardized to 16kHz mono with -20dB volume normalization.
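
For reference, the snippet below is a minimal sketch of what that standardization amounts to, assuming a librosa/NumPy workflow and RMS-based level normalization; it is an illustration, not the actual pipeline used to build this dataset.

```python
# Illustrative sketch only (assumed workflow, not the dataset's actual pipeline):
# downmix to mono, resample to 16 kHz, and scale the waveform to about -20 dB RMS.
import numpy as np
import librosa

def standardize(waveform: np.ndarray, sr: int,
                target_sr: int = 16000, target_db: float = -20.0) -> np.ndarray:
    # Downmix to mono if the input has multiple channels (librosa uses shape (channels, samples))
    if waveform.ndim > 1:
        waveform = waveform.mean(axis=0)
    # Resample to the target sampling rate
    if sr != target_sr:
        waveform = librosa.resample(waveform, orig_sr=sr, target_sr=target_sr)
    # Scale so the RMS level matches the target in dB
    rms = np.sqrt(np.mean(waveform ** 2))
    if rms > 0:
        gain = 10 ** (target_db / 20) / rms
        waveform = waveform * gain
    return waveform
```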
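
Because `confidence_score` distinguishes original transcripts (1.0) from STT-generated ones (<1.0), one common pattern is to keep only samples whose transcript came from the source dataset. A small example building on the streaming loader from the Usage section, using the `datasets` library's streaming `filter`:

```python
from datasets import load_dataset

dataset = load_dataset("Thanarit/Thai-Voice-10000000", streaming=True)

# Keep only samples whose transcript comes straight from the source dataset
original_only = dataset['train'].filter(lambda s: s['confidence_score'] == 1.0)

for sample in original_only:
    audio = sample['audio']  # dict with 'array', 'sampling_rate', and 'path'
    print(sample['ID'], audio['sampling_rate'], sample['transcript'][:50])
    break
```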