# Gender Classification Model

This model combines the SpeechBrain ECAPA-TDNN speaker embedding model with an SVM classifier to predict speaker gender from audio input. The model was trained and evaluated on the VoxCeleb2, Mozilla Common Voice v10.0, and TIMIT datasets.
## Model Details
- Input: Audio file (will be converted to 16kHz, mono, single channel)
- Output: Gender prediction ("male" or "female")
- Speaker embedding: 192-dimensional ECAPA-TDNN embedding from SpeechBrain
- Classifier: Support Vector Machine optimized through Optuna (200 trials)
- Performance:
  - VoxCeleb2 test set: 98.9% accuracy, 0.9885 F1-score
  - Mozilla Common Voice v10.0 English validated test set: 92.3% accuracy
  - TIMIT test set: 99.6% accuracy
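The overall design — a fixed 192-dimensional speaker embedding fed into an SVM — can be sketched as below. This is a minimal illustration, not the released pipeline: random vectors stand in for real ECAPA-TDNN embeddings, the 0/1 label encoding is an assumption, and the SVM hyperparameters are placeholders for the values the card says were tuned with Optuna.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in for real ECAPA-TDNN speaker embeddings:
# 200 random 192-dimensional vectors with random binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 192))
y = rng.integers(0, 2, size=200)  # 0 = "female", 1 = "male" (assumed encoding)

# SVM on standardized embeddings; kernel and C are placeholders,
# not the Optuna-tuned values used for the published model.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

# Classify one new "embedding"
pred = clf.predict(rng.normal(size=(1, 192)))
```

In the real pipeline the `X` rows would come from SpeechBrain's pretrained ECAPA-TDNN encoder applied to the preprocessed VoxCeleb2 audio.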
## Training Data

The model was trained on the VoxCeleb2 dataset:
- Training set: 1,691 speakers (845 females, 846 males)
- Validation set: 785 speakers (396 females, 389 males)
- Test set: 1,647 speakers (828 females, 819 males)
- No speaker overlap between sets
- Audio preprocessing:
  - Converted to WAV format, single channel, 16 kHz sampling rate, 256 kb/s bitrate
  - Applied Silero VAD for voice activity detection, keeping the first voiced segment
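The channel and sample-rate conversion step above can be sketched in plain NumPy. This is illustrative only: the released pipeline presumably uses a proper audio toolchain (e.g. ffmpeg or torchaudio) for conversion and Silero VAD for segment selection, and linear interpolation here is a simplification of real resampling.

```python
import numpy as np

def to_mono_16k(samples: np.ndarray, sr: int, target_sr: int = 16000) -> np.ndarray:
    """Downmix audio to one channel and resample to 16 kHz (sketch)."""
    if samples.ndim == 2:                  # shape (n_samples, n_channels)
        samples = samples.mean(axis=1)     # average channels -> mono
    n_out = int(round(len(samples) * target_sr / sr))
    x_old = np.linspace(0.0, 1.0, num=len(samples), endpoint=False)
    x_new = np.linspace(0.0, 1.0, num=n_out, endpoint=False)
    return np.interp(x_new, x_old, samples)  # naive linear-interp resample

# One second of stereo audio at 44.1 kHz becomes 16,000 mono samples
stereo = np.zeros((44100, 2))
mono = to_mono_16k(stereo, sr=44100)
```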
## Installation

You can install the package directly from GitHub:

```bash
pip install git+https://github.com/griko/voice-gender-classification.git
```
## Usage

```python
from voice_gender_classification import GenderClassificationPipeline

# Load the pipeline
classifier = GenderClassificationPipeline.from_pretrained(
    "griko/gender_cls_svm_ecapa_voxceleb"
)

# Single file prediction
result = classifier("path/to/audio.wav")
print(result)  # ["female"] or ["male"]

# Batch prediction
results = classifier(["audio1.wav", "audio2.wav"])
print(results)  # e.g. ["female", "male"]
```
## Limitations

- The model was trained on celebrity voices from YouTube interviews (VoxCeleb2)
- Performance may vary with audio quality and recording conditions
- Designed for binary gender classification only
## Citation
If you use this model in your research, please cite:
TBD