OuteTTS-1.0-0.6B GGUF Models
Model Generation Details
This model was generated using llama.cpp at commit 92ecdcc0.
Choosing the Right Model Format
Selecting the correct model format depends on your hardware capabilities and memory constraints.
BF16 (Brain Float 16) – Use if BF16 acceleration is available
- A 16-bit floating-point format designed for faster computation while retaining good precision.
- Provides similar dynamic range as FP32 but with lower memory usage.
- Recommended if your hardware supports BF16 acceleration (check your device's specs).
- Ideal for high-performance inference with reduced memory footprint compared to FP32.
📌 Use BF16 if:
✔ Your hardware has native BF16 support (e.g., newer GPUs, TPUs).
✔ You want higher precision while saving memory.
✔ You plan to requantize the model into another format.
📌 Avoid BF16 if:
❌ Your hardware does not support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
F16 (Float 16) – More widely supported than BF16
- A 16-bit floating-point format with high precision but a smaller range of values than BF16.
- Works on most devices with FP16 acceleration support (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
📌 Use F16 if:
✔ Your hardware supports FP16 but not BF16.
✔ You need a balance between speed, memory usage, and accuracy.
✔ You are running on a GPU or another device optimized for FP16 computations.
📌 Avoid F16 if:
❌ Your device lacks native FP16 support (it may run slower than expected).
❌ You have memory limitations.
Quantized Models (Q4_K, Q6_K, Q8_0, etc.) – For CPU & Low-VRAM Inference
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- Lower-bit models (Q4_K) → Best for minimal memory usage, may have lower precision.
- Higher-bit models (Q6_K, Q8_0) → Better accuracy, requires more memory.
📌 Use Quantized Models if:
✔ You are running inference on a CPU and need an optimized model.
✔ Your device has low VRAM and cannot load full-precision models.
✔ You want to reduce memory footprint while keeping reasonable accuracy.
📌 Avoid Quantized Models if:
❌ You need maximum accuracy (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)
These models are optimized for extreme memory efficiency, making them ideal for low-power devices or large-scale deployments where memory is a critical constraint.
IQ3_XS: Ultra-low-bit quantization (3-bit) with extreme memory efficiency.
- Use case: Best for ultra-low-memory devices where even Q4_K is too large.
- Trade-off: Lower accuracy compared to higher-bit quantizations.
IQ3_S: Small block size for maximum memory efficiency.
- Use case: Best for low-memory devices where IQ3_XS is too aggressive.
IQ3_M: Medium block size for better accuracy than IQ3_S.
- Use case: Suitable for low-memory devices where IQ3_S is too limiting.
Q4_K: 4-bit quantization with block-wise optimization for better accuracy.
- Use case: Best for low-memory devices where Q6_K is too large.
Q4_0: Pure 4-bit quantization, optimized for ARM devices.
- Use case: Best for ARM-based devices or low-memory environments.
Summary Table: Model Format Selection
Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
---|---|---|---|---|
BF16 | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
F16 | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
Q4_K | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments |
Q6_K | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
Q8_0 | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
IQ3_XS | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy |
Q4_0 | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
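As a rule of thumb, a quantized file's size is roughly the parameter count times the bits per weight. The sketch below estimates sizes for the 0.6B model; the bits-per-weight figures are approximate nominal values for common llama.cpp quantization types, not exact file sizes.

```python
def approx_model_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough file-size estimate: parameters x bits per weight, in gigabytes.
    Ignores metadata and the mixed-precision output/embedding layers, so real
    GGUF files come out somewhat larger."""
    return n_params * bits_per_weight / 8 / 1e9

# Approximate nominal bits-per-weight per quantization type (illustrative):
for name, bpw in [("bf16", 16.0), ("q8_0", 8.5), ("q6_k", 6.6), ("q4_k", 4.9), ("iq3_xs", 3.3)]:
    print(f"{name:7s} ~ {approx_model_size_gb(0.6e9, bpw):.2f} GB")
```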
Included Files & Details
OuteTTS-1.0-0.6B-bf16.gguf
- Model weights preserved in BF16.
- Use this if you want to requantize the model into a different format.
- Best if your device supports BF16 acceleration.
OuteTTS-1.0-0.6B-f16.gguf
- Model weights stored in F16.
- Use if your device supports FP16, especially if BF16 is not available.
OuteTTS-1.0-0.6B-bf16-q8_0.gguf
- Output & embeddings remain in BF16.
- All other layers quantized to Q8_0.
- Use if your device supports BF16 and you want a quantized version.
OuteTTS-1.0-0.6B-f16-q8_0.gguf
- Output & embeddings remain in F16.
- All other layers quantized to Q8_0.
OuteTTS-1.0-0.6B-q4_k.gguf
- Output & embeddings quantized to Q8_0.
- All other layers quantized to Q4_K.
- Good for CPU inference with limited memory.
OuteTTS-1.0-0.6B-q4_k_s.gguf
- Smallest Q4_K variant, using less memory at the cost of accuracy.
- Best for very low-memory setups.
OuteTTS-1.0-0.6B-q6_k.gguf
- Output & embeddings quantized to Q8_0.
- All other layers quantized to Q6_K.
OuteTTS-1.0-0.6B-q8_0.gguf
- Fully Q8 quantized model for better accuracy.
- Requires more memory but offers higher precision.
OuteTTS-1.0-0.6B-iq3_xs.gguf
- IQ3_XS quantization, optimized for extreme memory efficiency.
- Best for ultra-low-memory devices.
OuteTTS-1.0-0.6B-iq3_m.gguf
- IQ3_M quantization, offering a medium block size for better accuracy.
- Suitable for low-memory devices.
OuteTTS-1.0-0.6B-q4_0.gguf
- Pure Q4_0 quantization, optimized for ARM devices.
- Best for low-memory environments.
- Prefer IQ4_NL for better accuracy.
🚀 If you find these models useful
❤ Please click "Like" if you find this useful!
Help me test my AI-Powered Network Monitor Assistant with quantum-ready security checks:
👉 Quantum Network Monitor
💬 How to test:
Choose an AI assistant type:
- TurboLLM (GPT-4o-mini)
- HugLLM (Hugging Face open-source)
- TestLLM (Experimental CPU-only)
What I'm Testing
I'm pushing the limits of small open-source models for AI network monitoring, specifically:
- Function calling against live network services
- How small can a model go while still handling:
- Automated Nmap scans
- Quantum-readiness checks
- Network Monitoring tasks
🟡 TestLLM – Current experimental model (llama.cpp on 2 CPU threads):
- ✅ Zero-configuration setup
- ⏳ 30s load time (slow inference but no API costs)
- 🔧 Help wanted! If you're into edge-device AI, let's collaborate!
Other Assistants
🟢 TurboLLM – Uses gpt-4o-mini for:
- Creating custom cmd processors to run .NET code on Quantum Network Monitor Agents
- Real-time network diagnostics and monitoring
- Security Audits
- Penetration testing (Nmap/Metasploit)
🔵 HugLLM – Latest open-source models:
- 🌐 Runs on the Hugging Face Inference API
💡 Example commands you could test:
- "Give me info on my website's SSL certificate"
- "Check if my server is using quantum-safe encryption for communication"
- "Run a comprehensive security audit on my server"
- "Create a cmd processor to .. (whatever you want)" Note: you need to install a Quantum Network Monitor Agent to run the .NET code. This is a very flexible and powerful feature. Use with caution!
Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is open source. Feel free to use whatever you find helpful.
If you appreciate the work, please consider buying me a coffee ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! π
OuteAI
Important Sampling Considerations
When using OuteTTS version 1.0, it is crucial to use the settings specified in the Sampling Configuration section. The repetition penalty implementation is particularly important - this model requires penalization applied to a 64-token recent window, rather than across the entire context window. Penalizing the entire context will cause the model to produce broken or low-quality output.
To address this limitation, all necessary samplers and patches for all backends are set up automatically in the outetts library. If using a custom implementation, ensure you correctly implement these requirements.
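To illustrate the windowed penalty described above, here is a minimal sketch (not the outetts library's internal implementation): it penalizes only token ids that appear among the most recent 64 generated tokens.

```python
import numpy as np

def windowed_repetition_penalty(logits, generated, penalty=1.1, window=64):
    """Apply a repetition penalty only to tokens seen in the last `window`
    generated tokens, as OuteTTS 1.0 requires; penalizing the entire context
    degrades output. Illustrative sketch, not the outetts internals."""
    logits = logits.copy()
    for tok in set(generated[-window:]):
        if logits[tok] > 0:
            logits[tok] /= penalty  # dampen positive logits
        else:
            logits[tok] *= penalty  # push negative logits further down
    return logits
```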
OuteTTS Version 1.0
This update brings significant improvements in speech synthesis and voice cloning, delivering a more powerful, accurate, and user-friendly experience in a compact size.
OuteTTS Python Package v0.4.2
New version adds batched inference generation with the latest OuteTTS release.
⚡ Batched RTF Benchmarks
Tested with NVIDIA L40S GPU
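For reference, the real-time factor (RTF) reported in such benchmarks is simply generation time divided by the duration of the audio produced; values below 1.0 mean faster-than-real-time synthesis. A minimal helper:

```python
def real_time_factor(generation_seconds: float, audio_seconds: float) -> float:
    """RTF = time spent generating / duration of audio produced.
    RTF < 1.0 means faster than real time; batched inference lowers RTF
    by amortizing work across chunks."""
    return generation_seconds / audio_seconds
```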
Quick Start Guide
Getting started with OuteTTS is simple:
Installation
Installation instructions are provided in the official repository.
Basic Setup
```python
from outetts import Interface, ModelConfig, GenerationConfig, Backend, InterfaceVersion, Models, GenerationType

# Initialize the interface
interface = Interface(
    ModelConfig.auto_config(
        model=Models.VERSION_1_0_SIZE_0_6B,
        backend=Backend.HF,
    )
)

# Load the default English speaker profile
speaker = interface.load_default_speaker("EN-FEMALE-1-NEUTRAL")

# Or create your own speaker (use this once)
# speaker = interface.create_speaker("path/to/audio.wav")
# interface.save_speaker(speaker, "speaker.json")

# Load your speaker from a saved file
# speaker = interface.load_speaker("speaker.json")

# Generate speech & save to file
output = interface.generate(
    GenerationConfig(
        text="Hello, how are you doing?",
        speaker=speaker,
    )
)
output.save("output.wav")
```
⚡ Batch Setup
```python
from outetts import Interface, ModelConfig, GenerationConfig, Backend, GenerationType

if __name__ == "__main__":
    # Initialize the interface with a batch-capable backend
    interface = Interface(
        ModelConfig(
            model_path="OuteAI/OuteTTS-1.0-0.6B-FP8",
            tokenizer_path="OuteAI/OuteTTS-1.0-0.6B",
            backend=Backend.VLLM,
            # For EXL2, use backend=Backend.EXL2ASYNC and set exl2_cache_seq_multiply
            # to the same value as max_batch_size in GenerationConfig.
            # For LLAMACPP_ASYNC_SERVER, use backend=Backend.LLAMACPP_ASYNC_SERVER
            # and provide server_host in GenerationConfig.
        )
    )

    # Load your speaker profile
    speaker = interface.load_default_speaker("EN-FEMALE-1-NEUTRAL")  # Or load/create a custom speaker

    # Generate speech using BATCH type
    # Note: for EXL2ASYNC, VLLM, and LLAMACPP_ASYNC_SERVER, BATCH is selected automatically.
    output = interface.generate(
        GenerationConfig(
            text="This is a longer text that will be automatically split into chunks and processed in batches.",
            speaker=speaker,
            generation_type=GenerationType.BATCH,
            max_batch_size=32,        # Adjust based on your GPU memory and server capacity
            dac_decoding_chunk=2048,  # Adjust chunk size for DAC decoding
            # If using LLAMACPP_ASYNC_SERVER, add:
            # server_host="http://localhost:8000"  # Replace with your server address
        )
    )

    # Save to file
    output.save("output_batch.wav")
```
More Configuration Options
For advanced settings and customization, visit the official repository:
Multilingual Capabilities
Trained Languages: English, Chinese, Dutch, French, Georgian, German, Hungarian, Italian, Japanese, Korean, Latvian, Polish, Russian, Spanish
Beyond Supported Languages: The model can generate speech in untrained languages with varying success. Experiment with unlisted languages, though results may not be optimal.
Usage Recommendations
Speaker Reference
The model is designed to be used with a speaker reference. Without one, it generates random vocal characteristics, often leading to lower-quality outputs. The model inherits the referenced speaker's emotion, style, and accent. When transcribing to other languages with the same speaker, you may observe the model retaining the original accent.
Multilingual Application
It is recommended to create a speaker profile in the language you intend to use. This helps achieve the best results in that specific language, including tone, accent, and linguistic features.
While the model supports cross-lingual speech, it still relies on the reference speaker. If the speaker has a distinct accent, such as British English, other languages may carry that accent as well.
Optimal Audio Length
- Best Performance: Generate audio around 42 seconds in a single run (approximately 8,192 tokens). It is recommended not to approach the limits of this window when generating; usually, the best results are achieved with up to 7,000 tokens.
- Context Reduction with Speaker Reference: If the speaker reference is 10 seconds long, the effective context is reduced to approximately 32 seconds.
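The arithmetic above can be sketched as follows; the tokens-per-second figure is an approximation derived from the stated 8,192 tokens covering about 42 seconds:

```python
# ~8,192 tokens cover ~42 s of audio, i.e. roughly 195 audio tokens
# per second (an approximation, not an exact model constant).
TOKENS_PER_SECOND = 8192 / 42

def remaining_audio_seconds(context_tokens: int = 8192, reference_seconds: float = 0.0) -> float:
    """Rough generation budget left after the speaker reference is encoded."""
    return (context_tokens - reference_seconds * TOKENS_PER_SECOND) / TOKENS_PER_SECOND
```

With a 10-second reference, this leaves roughly 32 seconds of generation budget, matching the figure above.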
Temperature Setting Recommendations
Testing shows that a temperature of 0.4 is an ideal starting point for accuracy (with the sampling settings below). However, some voice references may benefit from higher temperatures for enhanced expressiveness or slightly lower temperatures for more precise voice replication.
Verifying Speaker Encoding
If the cloned voice quality is subpar, check the encoded speaker sample.
```python
interface.decode_and_save_speaker(speaker=your_speaker, path="speaker.wav")
```
The DAC audio reconstruction model is lossy, and samples with clipping, excessive loudness, or unusual vocal features may introduce encoding issues that impact output quality.
Sampling Configuration
For optimal results with this TTS model, use the following sampling settings.
Parameter | Value |
---|---|
Temperature | 0.4 |
Repetition Penalty | 1.1 |
Repetition Range | 64 |
Top-k | 40 |
Top-p | 0.9 |
Min-p | 0.05 |
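These settings can be captured in a small configuration fragment. The key names below follow common llama.cpp-style conventions and are illustrative, not a specific library's API; note that the 64-token repetition range usually needs backend-specific support (the outetts library configures all of this automatically):

```python
# Recommended OuteTTS 1.0 sampling settings from the table above.
# Key names are illustrative llama.cpp-style conventions.
SAMPLING = {
    "temperature": 0.4,
    "repeat_penalty": 1.1,
    "repeat_last_n": 64,  # penalty window: last 64 tokens only
    "top_k": 40,
    "top_p": 0.9,
    "min_p": 0.05,
}
```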
🔑 Model Specifications
Model | Training Data | Context Length | Supported Languages |
---|---|---|---|
Llama-OuteTTS-1.0-1B | 60k hours of audio | 8,192 tokens | 23+ languages |
OuteTTS-1.0-0.6B | 20k hours of audio | 8,192 tokens | 14+ languages |
Acknowledgments
- Audio encoding and decoding utilize ibm-research/DAC.speech.v1.0
- OuteTTS is built with Qwen3 0.6B as the base model, with continued pre-training and fine-tuning.
- Datasets used: Multilingual LibriSpeech (MLS) (CC BY 4.0), Common Voice Corpus (CC-0)
Ethical Use Guidelines
Intended Purpose: This model is intended for legitimate applications that enhance accessibility, creativity, and communication.
Prohibited Uses:
- Impersonation of individuals without their explicit, informed consent.
- Creation of deliberately misleading, false, or deceptive content (e.g., "deepfakes" for malicious purposes).
- Generation of harmful, hateful, harassing, or defamatory material.
- Voice cloning of any individual without their explicit prior permission.
- Any uses that violate applicable local, national, or international laws, regulations, or copyrights.
Responsibility: Users are responsible for the content they generate and how it is used. We encourage thoughtful consideration of the potential impact of synthetic media.