---
language: en
tags:
- text-to-speech
- tts
- audio
- speech-synthesis
- orpheus
- gguf
license: apache-2.0
datasets:
- internal
---
# Orpheus-3b-FT-Q2_K
This is a quantised version of [canopylabs/orpheus-3b-0.1-ft](https://huggingface.co/canopylabs/orpheus-3b-0.1-ft).
Orpheus is a high-performance Text-to-Speech model fine-tuned for natural, emotional speech synthesis. This repository hosts the 2-bit (Q2_K) quantised version of the 3B parameter model, optimised for efficiency while maintaining high-quality output.
## Model Description
**Orpheus-3b-FT-Q2_K** is a 3 billion parameter Text-to-Speech model that converts text inputs into natural-sounding speech with support for multiple voices and emotional expressions. The model has been quantised to 2-bit (Q2_K) GGUF format for efficient inference, making it accessible on consumer hardware.
Key features:
- 8 distinct voice options with different characteristics
- Support for emotion tags like laughter, sighs, etc.
- Optimised for CUDA acceleration on RTX GPUs
- Produces high-quality 24kHz mono audio
- Fine-tuned for conversational naturalness
## How to Use
This model is designed to be used with an LLM inference server that connects to the [Orpheus-FastAPI](https://github.com/Lex-au/Orpheus-FastAPI) frontend, which provides both a web UI and OpenAI-compatible API endpoints.
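The frontend's OpenAI-compatible endpoint can also be called programmatically. The sketch below builds a request body following the OpenAI `/v1/audio/speech` convention; the endpoint path, field names, and port are assumptions, so check the Orpheus-FastAPI README for the actual schema.

```python
import json

# Hypothetical sketch: build a request body for an OpenAI-compatible
# speech endpoint such as the one Orpheus-FastAPI exposes. Field names
# follow the OpenAI /v1/audio/speech convention and are assumptions --
# consult the Orpheus-FastAPI README for the exact schema.
def build_speech_request(text: str, voice: str = "tara") -> dict:
    return {
        "model": "orpheus",  # model identifier (assumed)
        "input": text,       # text to synthesise
        "voice": voice,      # one of the 8 supported voices
    }

payload = build_speech_request("Hello there!", voice="leah")
print(json.dumps(payload))
# To send it to a running frontend (URL is an example):
# requests.post("http://127.0.0.1:5005/v1/audio/speech", json=payload)
```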
### Compatible Inference Servers
This quantised model can be loaded into any of these LLM inference servers:
- [GPUStack](https://github.com/gpustack/gpustack) - GPU-optimised LLM inference server (my pick); supports LAN/WAN tensor-split parallelisation
- [LM Studio](https://lmstudio.ai/) - Load the GGUF model and start the local server
- [llama.cpp server](https://github.com/ggerganov/llama.cpp) - Run with the appropriate model parameters
- Any other OpenAI API-compatible server
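For instance, with llama.cpp the model can be served directly from the command line. Paths, port, and layer count below are examples, not prescribed values:

```shell
# Sketch: serve this GGUF with llama.cpp's llama-server.
# -m     : path to the downloaded quantised model (example path)
# --port : port the Orpheus-FastAPI frontend will connect to (example)
# -ngl   : number of layers to offload to the GPU (99 = all)
# -c     : context length
./llama-server -m ./Orpheus-3b-FT-Q2_K.gguf --port 5006 -ngl 99 -c 4096
```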
### Quick Start
1. Download this quantised model from [lex-au's Orpheus-FASTAPI collection](https://huggingface.co/collections/lex-au/orpheus-fastapi-67e125ae03fc96dae0517707)
2. Load the model in your preferred inference server and start it.
3. Clone the Orpheus-FastAPI repository:
```bash
git clone https://github.com/Lex-au/Orpheus-FastAPI.git
cd Orpheus-FastAPI
```
4. Configure the FastAPI server to connect to your inference server by setting the `ORPHEUS_API_URL` environment variable.
5. Follow the complete installation and setup instructions in the [repository README](https://github.com/Lex-au/Orpheus-FastAPI).
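For example, if your inference server is listening locally, the environment variable from step 4 might look like this (host, port, and path are examples; the Orpheus-FastAPI README gives the exact expected format):

```shell
# Point the Orpheus-FastAPI frontend at your inference server.
# Adjust the host/port/path to match where your server is listening.
export ORPHEUS_API_URL="http://127.0.0.1:5006/v1/completions"
```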
### Audio Samples
Audio samples demonstrating different voices and emotions (the embedded players are available on the hosted model card):
- Default voice
- Leah (happy)
- Tara (sad)
- Zac (contemplative)
### Available Voices
The model supports 8 different voices:
- `tara`: Female, conversational, clear
- `leah`: Female, warm, gentle
- `jess`: Female, energetic, youthful
- `leo`: Male, authoritative, deep
- `dan`: Male, friendly, casual
- `mia`: Female, professional, articulate
- `zac`: Male, enthusiastic, dynamic
- `zoe`: Female, calm, soothing
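A client can guard against typos by validating the requested voice against this list before sending a request. This is a hypothetical helper, not part of the model or frontend:

```python
# Hypothetical helper: validate a requested voice against the
# documented list before building a synthesis request.
VOICES = {"tara", "leah", "jess", "leo", "dan", "mia", "zac", "zoe"}

def pick_voice(name: str, default: str = "tara") -> str:
    """Return the normalised voice name if known, otherwise fall back."""
    name = name.lower()
    return name if name in VOICES else default

print(pick_voice("Leo"))    # known voice -> leo
print(pick_voice("alice"))  # unknown voice -> tara
```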
### Emotion Tags
You can add expressiveness to speech by inserting tags:
- `<laugh>`, `<chuckle>`: For laughter sounds
- `<sigh>`: For sighing sounds
- `<cough>`, `<sniffle>`: For subtle interruptions
- `<groan>`, `<yawn>`, `<gasp>`: For additional emotional expression
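Tags are written inline in the input text. A minimal sketch, assuming tags of the form `<laugh>`, `<sigh>`, etc. (verify the exact tag names against the model documentation):

```python
# Sketch: append an inline emotion tag to a piece of text before
# sending it for synthesis. Tag names are assumed to use <angle>
# markers; check the model documentation for the supported set.
def tag(text: str, emotion: str) -> str:
    return f"{text} <{emotion}>"

prompt = tag("Well, that went better than expected", "laugh")
print(prompt)  # Well, that went better than expected <laugh>
```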
## Technical Specifications
- **Architecture**: Specialised token-to-audio sequence model
- **Parameters**: ~3 billion
- **Quantisation**: 2-bit (GGUF Q2_K format)
- **Audio Sample Rate**: 24kHz
- **Input**: Text with optional voice selection and emotion tags
- **Output**: High-quality WAV audio
- **Language**: English
- **Hardware Requirements**: CUDA-compatible GPU (recommended: RTX series)
- **Integration Method**: External LLM inference server + Orpheus-FastAPI frontend
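A downstream pipeline can sanity-check that received audio matches the documented output format (24kHz, mono WAV). The sketch below uses Python's standard `wave` module, with an in-memory silent clip standing in for real model output:

```python
import io
import wave

SAMPLE_RATE = 24_000  # documented output sample rate

# Stand-in for model output: one second of 16-bit mono silence.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)            # mono
    w.setsampwidth(2)            # 16-bit samples
    w.setframerate(SAMPLE_RATE)  # 24 kHz
    w.writeframes(b"\x00\x00" * SAMPLE_RATE)

# Verify the format matches the model card's specification.
buf.seek(0)
with wave.open(buf, "rb") as w:
    assert w.getnchannels() == 1
    assert w.getframerate() == SAMPLE_RATE
    print(w.getnframes() / w.getframerate(), "seconds")  # 1.0 seconds
```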
## Limitations
- Currently supports English text only
- Best performance achieved on CUDA-compatible GPUs
- Generation speed depends on GPU capability
## License
This model is available under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
## Citation & Attribution
The original Orpheus model was created by Canopy Labs. This repository contains a quantised version optimised for use with the Orpheus-FastAPI server.
If you use this quantised model in your research or applications, please cite:
```
@misc{orpheus-tts-2025,
  author = {Canopy Labs},
  title = {Orpheus-3b-0.1-ft: Text-to-Speech Model},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/canopylabs/orpheus-3b-0.1-ft}}
}

@misc{orpheus-quantised-2025,
  author = {Lex-au},
  title = {Orpheus-3b-FT-Q2_K: Quantised TTS Model with FastAPI Server},
  note = {GGUF quantisation of canopylabs/orpheus-3b-0.1-ft},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/lex-au/Orpheus-3b-FT-Q2_K.gguf}}
}
```