BIA-XTTSv2-moore-v0

Model Description

BIA-XTTSv2-moore-v0 is a fine-tuned version of Coqui's XTTS v2 model, trained specifically for Moore, a Gur language spoken in Burkina Faso. The model provides high-quality text-to-speech synthesis for Moore and supports XTTS v2's voice cloning from short reference recordings.

Language Information

  • Language: Moore (Mooré)
  • Language Code: mos
  • Region: Burkina Faso, West Africa
  • Speakers: ~5 million native speakers
  • Language Family: Niger-Congo → Gur → Oti-Volta → Moore

Usage

import os

import torch
import torchaudio

from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

checkpoint_path = "checkpoints"
model_path      = "best_model.pth"

device = "cuda:0" if torch.cuda.is_available() else "cpu"

xtts_checkpoint = os.path.join(checkpoint_path, model_path)
xtts_config     = os.path.join(checkpoint_path, "config.json")
xtts_vocab      = os.path.join(checkpoint_path, "vocab.json")

# Load model
config     = XttsConfig()
config.load_json(xtts_config)
XTTS_MODEL = Xtts.init_from_config(config)
XTTS_MODEL.load_checkpoint(config,
                           checkpoint_path = xtts_checkpoint,
                           vocab_path      = xtts_vocab,
                           use_deepspeed   = False)
XTTS_MODEL.to(device)
XTTS_MODEL.eval()
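
Once the checkpoint is loaded, synthesis follows the usual XTTS v2 inference flow: extract conditioning latents from a short reference clip of the target speaker, then generate audio from Moore text. The snippet below continues from the code above; speaker_reference.wav, the example sentence, and the output path are placeholders, and it assumes the fine-tuned tokenizer accepts the "mos" language code.

# Voice cloning: compute conditioning latents from a short reference
# recording of the target speaker (placeholder path).
gpt_cond_latent, speaker_embedding = XTTS_MODEL.get_conditioning_latents(
    audio_path=["speaker_reference.wav"]
)

# Synthesize Moore text with the cloned voice.
out = XTTS_MODEL.inference(
    text="Ne y taaba",                   # placeholder Moore text
    language="mos",                      # assumes "mos" is registered in the fine-tuned tokenizer
    gpt_cond_latent=gpt_cond_latent,
    speaker_embedding=speaker_embedding,
    temperature=0.7,
)

# XTTS generates 24 kHz audio; save the waveform to disk.
torchaudio.save("output_mos.wav", torch.tensor(out["wav"]).unsqueeze(0), 24000)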

Model Details

Training Data

  • Dataset: Curated Moore speech dataset
  • Hours: 50 hours of high-quality Moore audio recordings
  • Speakers: Multiple native Moore speakers
  • Quality: Studio and field recordings, carefully preprocessed

Training Configuration

  • Base Model: XTTS v2
  • Fine-tuning Steps: 9,000+
  • Batch Size: 2, with gradient accumulation over 4 steps (effective batch size 8; see the sketch below)
  • Learning Rate: Adaptive scheduling
  • Hardware: GPU-accelerated training
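
These settings correspond to the standard Coqui TTS GPT fine-tuning recipe for XTTS v2. The sketch below is a minimal, hypothetical illustration of how the batch size and gradient accumulation above map onto that recipe; the output path, run name, learning rate, and scheduler are placeholders rather than the values used for this model.

from trainer import TrainerArgs
from TTS.tts.layers.xtts.trainer.gpt_trainer import GPTTrainerConfig

# Hypothetical configuration sketch; only batch_size and grad_accum_steps
# reflect the settings listed above.
config = GPTTrainerConfig(
    output_path="run/xtts_mos",        # placeholder output directory
    run_name="BIA-XTTSv2-moore-v0",
    batch_size=2,                      # per-step batch size used for this fine-tune
    eval_batch_size=2,
    lr=5e-6,                           # illustrative; the card only states "adaptive scheduling"
    lr_scheduler="MultiStepLR",        # illustrative scheduler choice
)

trainer_args = TrainerArgs(grad_accum_steps=4)  # gradient accumulation of 4 (effective batch size 8)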

Citation

If you use this model in your research, please cite:

@misc{bia-xtts-moore-v0,
  title={BIA-XTTSv2-moore-v0: A Fine-tuned XTTS Model for Moore Language},
  author={BIA Team},
  year={2024},
  howpublished={\url{https://huggingface.co/BIA/BIA-XTTSv2-moore-v0}}
}

Acknowledgments

  • Coqui TTS team for the original XTTS v2 architecture
  • Moore language community in Burkina Faso
  • Data contributors who provided high-quality recordings