# my_vits_model

## Model Description

A VITS-based TTS model for English speech synthesis.

- Language(s): English
- Type: Single-speaker Text-to-Speech
- Model Type: VITS
- Framework: Coqui TTS
- Uploaded: 2025-05-29
## Intended Use

- Primary Use: Generating single-speaker speech from text for applications such as virtual assistants, audiobooks, and accessibility tools.
- Out of Scope: Real-time applications, unless the model has been optimized for low latency.
## Usage

To load and use the model:

```python
from safetensors.torch import load_file
from TTS.config import load_config
from TTS.tts.models import setup_model

# Load configuration
config = load_config("config.json")
model = setup_model(config)

# Load weights
state_dict = load_file("my_vits_model.safetensors")
model.load_state_dict(state_dict)
model.eval()

# Example inference (single-speaker model, so no speaker_id is needed)
text = "Hello, this is a test."
wav = model.inference(text)
```
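The returned `wav` is typically a 1-D float waveform in [-1, 1] at the model's sample rate (22050 Hz per the training setup). Assuming that shape, it can be written to disk with the standard-library `wave` module; this is a hedged sketch (`save_wav` is a helper defined here, not part of the Coqui TTS API):

```python
import math
import struct
import wave

def save_wav(samples, path, sample_rate=22050):
    """Write a 1-D float waveform in [-1, 1] as mono 16-bit PCM WAV."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)   # mono
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(sample_rate)
        # Clamp each sample and convert to little-endian signed 16-bit
        pcm = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        f.writeframes(pcm)

# Demo with one second of a synthetic 440 Hz tone instead of model output
tone = [0.5 * math.sin(2 * math.pi * 440 * n / 22050) for n in range(22050)]
save_wav(tone, "output.wav")
```

In a real pipeline you would pass the model's `wav` output instead of the synthetic tone.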
## Training Data

- Dataset: Custom dataset
- Preprocessing: Text normalized; audio sampled at 22050 Hz
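The exact text-normalization pipeline is not documented here; as an illustration of the kind of cleanup typically applied before TTS training, a minimal sketch (`normalize_text` is a hypothetical helper, not from this model's codebase):

```python
import re
import unicodedata

def normalize_text(text: str) -> str:
    """Illustrative normalization: Unicode NFKC, lowercase,
    drop characters outside a basic set, collapse whitespace."""
    text = unicodedata.normalize("NFKC", text)
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s.,;:!?'\-]", "", text)  # keep basic punctuation
    text = re.sub(r"\s+", " ", text).strip()          # collapse whitespace
    return text

print(normalize_text("  Hello,   WORLD!  "))  # -> "hello, world!"
```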
## Evaluation

- Metrics: [Add metrics, e.g., Mean Opinion Score (MOS), Word Error Rate (WER)]
- Results: [Add results, e.g., "Achieved MOS of 4.2 on test set"]
## Limitations

- Limited to English.
- Performance may vary with noisy or complex input text.
## License

- Released under Apache-2.0.
## Ethical Considerations

- Ensure responsible use to avoid generating misleading or harmful audio content.
- Verify input text to prevent biased or offensive outputs.
## Dependencies

- TTS (Coqui TTS)
- safetensors
- torch
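The dependencies above can be installed with pip (assuming the package names match their PyPI names):

```shell
pip install TTS safetensors torch
```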