A Data-Centric Framework for Addressing Phonetic and Prosodic Challenges in Russian Speech Generative Models
Russian speech synthesis presents distinctive challenges, including vowel reduction, consonant devoicing, variable stress patterns, homograph ambiguity, and unnatural intonation. This paper introduces Balalaika, a novel dataset comprising more than 2,000 hours of studio-quality Russian speech with comprehensive textual annotations, including punctuation and stress markings. Experimental results show that models trained on Balalaika significantly outperform those trained on existing datasets in both speech synthesis and enhancement tasks.
Quick Start
git clone https://github.com/mtuciru/balalaika && cd balalaika
bash create_user_env.sh # sets up venv + pip deps
bash use_meta_500h.sh # pick 100h / 500h / 1000h / 2000h as needed
Prerequisites
Ensure you have the following tools installed on your system:
# ffmpeg (audio/video toolkit), Python 3 with pip, venv support, and dev headers for native wheels
sudo apt update && sudo apt install -y \
    ffmpeg \
    python3 \
    python3-pip \
    python3-venv \
    python3-dev \
    python-is-python3
# install the uv package manager
wget -qO- https://astral.sh/uv/install.sh | sh
Installation
Clone the repository and set up the environment:
git clone https://github.com/mtuciru/balalaika
cd balalaika
# Use this if you want to annotate/modify the dataset
bash create_dev_env.sh
# Use this if you only want to use the pre-annotated dataset
bash create_user_env.sh
Data Preparation
Quick Setup (Default Parameters)
To download and prepare the dataset with default settings, choose one of the preconfigured dataset sizes:
100-hour dataset
bash use_meta_100h.sh
500-hour dataset
bash use_meta_500h.sh
1000-hour dataset
bash use_meta_1000h.sh
2000-hour dataset
bash use_meta_2000h.sh
All metadata can also be downloaded from Hugging Face (MTUCI).
Custom Metadata Download
If you already have generated metadata files (balalaika.parquet and balalaika.pkl), place them in the project root and run:
bash use_meta.sh
Running the Pipeline
Basic Scenario (Local Processing)
This scenario will:
- Download datasets
- Split audio into semantic chunks
- Transcribe all segments
- Perform speaker segmentation
- Apply phonemization
To execute locally, run:
bash base.sh configs/config.yaml
All output metadata will be saved in podcasts/result.csv.
Configuration
The main configuration file is located at configs/config.yaml. It is organized into several sections, each corresponding to a specific stage of the podcast processing pipeline. Below is a detailed explanation of the key parameters within each section.
Global Parameters
- podcasts_path: The absolute path to the directory where all downloaded podcast files are stored and where subsequent stages (preprocessing, separation, transcription, etc.) read their input and save their output.
download Section
This section controls how podcast episodes are downloaded.
- podcasts_path: (As explained above) The directory where downloaded podcasts are saved.
- episodes_limit: Limits the number of episodes downloaded from a single podcast playlist.
- num_workers: Number of parallel processes used for downloading. A higher number can speed up downloads but consumes more system resources.
- podcasts_urls_file: Path to a .pkl file containing the list of podcast URLs to download.
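A minimal sketch of the download section of configs/config.yaml, assuming the keys mirror the parameter names above; the paths and values are illustrative, not the shipped defaults:

download:
  podcasts_path: /data/balalaika/podcasts                 # example absolute path
  episodes_limit: 50                                      # max episodes per playlist
  num_workers: 4                                          # parallel download processes
  podcasts_urls_file: /data/balalaika/podcasts_urls.pkl   # example path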
preprocess Section
This section handles the initial processing of downloaded audio files, such as chopping them into smaller segments.
- podcasts_path: (As explained above) The directory containing the raw downloaded podcasts to be preprocessed.
- duration: The maximum length, in seconds, of each audio sample (segment).
- num_workers: Number of parallel processes used during preprocessing.
- whisper_model: Name or path of the Faster-Whisper-compatible model used for initial audio processing.
- compute_type: Computation type for the Whisper model, affecting performance and memory usage.
- beam_size: Beam size for the beam search used in the Whisper model's decoding.
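A hedged sketch of the preprocess section under the same key-naming assumption; the Whisper model name and values are examples only:

preprocess:
  podcasts_path: /data/balalaika/podcasts
  duration: 20                  # max segment length in seconds (example)
  num_workers: 4
  whisper_model: large-v3       # any Faster-Whisper-compatible model
  compute_type: float16
  beam_size: 5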
separation Section
This section computes quality metrics for each audio segment.
- podcasts_path: (As explained above) The directory containing the chopped podcasts produced by the preprocess stage.
- num_workers: Number of parallel processes used for audio separation.
- nisqa_config: Path to the NISQA configuration file.
- one_speaker: A boolean flag (True/False); when enabled (True), only audio recordings expected to contain a single speaker are downloaded and processed.
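An illustrative sketch of the separation section; the NISQA config path is a placeholder:

separation:
  podcasts_path: /data/balalaika/podcasts
  num_workers: 4
  nisqa_config: /data/balalaika/models/nisqa_config.yaml   # placeholder path
  one_speaker: True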
transcription Section
This section is responsible for converting audio into text.
- podcasts_path: (As explained above) The directory containing the processed audio files ready for transcription.
- model_name: The type of automatic speech recognition (ASR) model to use; options are typically "ctc" or "rnnt".
- num_workers: Number of parallel processes per GPU used for transcription.
- with_timestamps: A boolean flag (True/False); when enabled, the transcription generates timestamps for each word or segment. It only works with the CTC model.
- lm_path: Path to a language model file (.bin). A language model can improve transcription accuracy by providing contextual information.
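An illustrative sketch of the transcription section; the language model path is a placeholder:

transcription:
  podcasts_path: /data/balalaika/podcasts
  model_name: ctc                           # or rnnt
  num_workers: 2                            # per GPU
  with_timestamps: True                     # CTC only
  lm_path: /data/balalaika/models/lm.bin    # placeholder path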
punctuation Section
This section adds proper punctuation to the transcribed text.
- podcasts_path: (As explained above) The directory where the transcribed text files are located.
- model_name: Name of the RUPunct model used for punctuation restoration.
- num_workers: Number of parallel processes per GPU used for punctuation.
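An illustrative sketch of the punctuation section; the model name shown is a placeholder for whichever RUPunct variant you use:

punctuation:
  podcasts_path: /data/balalaika/podcasts
  model_name: RUPunct_big       # placeholder RUPunct variant
  num_workers: 2                # per GPU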
accent Section
This section restores stress (accent) marks in the transcribed text.
- podcasts_path: (As explained above) The directory containing the relevant podcast files.
- num_workers: Number of parallel processes per GPU used for accent restoration.
- model_name: Name of the ruAccent model to use.
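An illustrative sketch of the accent section; the model name is a placeholder for the ruAccent variant you use:

accent:
  podcasts_path: /data/balalaika/podcasts
  num_workers: 2                # per GPU
  model_name: turbo             # placeholder ruAccent variant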
phonemizer Section
This section converts text into phonetic representations (phonemes).
- podcasts_path: (As explained above) The directory where the text files (from the transcription and punctuation stages) are located.
- num_workers: Number of parallel processes per GPU used for phonemization.
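An illustrative sketch of the phonemizer section, under the same key-naming assumption:

phonemizer:
  podcasts_path: /data/balalaika/podcasts
  num_workers: 2                # per GPU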
classification Section
This section performs global speaker clustering.
- podcasts_path: (As explained above) The directory containing the podcast files relevant for classification.
- num_workers: Number of parallel processes per GPU used for classification.
- threshold: Speaker classification confidence threshold; values typically range from 0.6 to 0.9. A higher threshold requires the model to be more confident before assigning a label.
- model_path: Path to the pretrained speaker classification model in .pt format.
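An illustrative sketch of the classification section; the model filename is a placeholder:

classification:
  podcasts_path: /data/balalaika/podcasts
  num_workers: 2                # per GPU
  threshold: 0.7                # typical range 0.6-0.9
  model_path: /data/balalaika/models/voxblink_resnet/model.pt   # placeholder filename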
Execution Scripts
Each processing script (*_yaml.sh and *_args.sh) offers flexibility in how parameters are provided:
- *_yaml.sh: Reads all necessary parameters directly from the main config.yaml file, ensuring consistency across stages.
- *_args.sh: Uses arguments hardcoded directly within the shell script itself, which is useful for quick tests or specific overrides without modifying the main configuration file.
Environment Variables
Create a .env file in the project root with the following:
HF_TOKEN=<your_huggingface_token>
YANDEX_KEY=<your_yandex_music_token>
- HF_TOKEN: Required for speaker count estimation.
- YANDEX_KEY: Required for dataset downloads.
Important Notes
- All scripts must be executed from the project root directory.
- Paths in the config file must be absolute.
- The processing scripts (punctuation, accents) should be run sequentially.
- You'll need:
- Yandex Music API key (How to get one)
- Hugging Face token
Models
Place all required models under the models/ directory with the following structure:
models/
├── voxblink_resnet/   # Speaker classification model
│   └── ...
└── nisqa_s.tar        # Audio quality assessment model
Supported models:
- NISQA – Audio quality assessment.
- GigaAM – ASR.
- ruAccent – Accent restoration.
- RUPunct – Punctuation restoration.
- VoxBlink ResNet – Speaker classification.
- TryIPaG2P – Phonemization.
- Speaker Diarization – Speaker diarization.
- Whisper – ASR and segmentation.
Citation
If you use this pipeline in your research or production, please cite:
@misc{borodin2025datacentricframeworkaddressingphonetic,
title={A Data-Centric Framework for Addressing Phonetic and Prosodic Challenges in Russian Speech Generative Models},
author={Kirill Borodin and Nikita Vasiliev and Vasiliy Kudryavtsev and Maxim Maslov and Mikhail Gorodnichev and Oleg Rogov and Grach Mkrtchian},
year={2025},
eprint={2507.13563},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2507.13563},
}
License
Dataset: Balalaika
- CC BY-NC-ND 4.0 – non-commercial, no derivatives, research use only.
- Cite the corpus and do not redistribute files without written permission.
Code
- CC BY-NC-SA 4.0 – You may use, modify, and share the material for academic, non-commercial purposes only.
- You must retain the copyright and license notices; contact the authors for commercial use.
Third-Party Models & Libraries
Comply with each component's original license in addition to the above:
| Component | License |
|---|---|
| NISQA-s | Apache 2.0 |
| GigaAM | MIT |
| ruAccent | CC BY-NC-ND 4.0 |
| RUPunct | CC BY-NC-ND 4.0 |
| VoxBlink ResNet | Apache 2.0 |
| TryIPaG2P | MIT |
| pyannote-audio | MIT |
| Faster-Whisper | MIT |