Uploaded model

  • Converted by: Prince-1
  • License: apache-2.0
  • Original model: Prince-1/orpheus_3b_0.1_ft_16bit

Orpheus TTS is a state-of-the-art, Llama-based Speech-LLM designed for high-quality, empathetic text-to-speech generation. This model has been fine-tuned to deliver human-level speech synthesis with exceptional clarity, expressiveness, and real-time streaming performance.

Model Details

Model Capabilities

  • Human-Like Speech: Natural intonation, emotion, and rhythm that are superior to SOTA closed-source models
  • Zero-Shot Voice Cloning: Clone voices without prior fine-tuning
  • Guided Emotion and Intonation: Control speech and emotion characteristics with simple tags (see the example prompt below)
  • Low Latency: ~200 ms streaming latency for real-time applications, reducible to ~100 ms with input streaming
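
As an illustration of the tag-based control above, the upstream Orpheus TTS model accepts prompts of the form voice_name: text with inline emotion tags. The voice name and tags in this example follow the upstream model's documentation and have not been verified against this ONNX export specifically:

tara: I really thought we were going to miss the train <sigh> but we made it.

Tags such as <laugh>, <sigh>, <chuckle>, and <gasp> are placed directly in the text to steer delivery.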

Prerequisites

Before starting the conversion process, ensure your system meets the following requirements:

  • NVIDIA GPU with the CUDA toolkit installed
  • At least 16 GB of RAM (recommended)
  • Python with pip installed
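
A quick way to check that the GPU driver and CUDA toolkit are visible before starting:

nvidia-smi
nvcc --version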

Model Sources

  • Base model: Prince-1/orpheus_3b_0.1_ft_16bit (https://huggingface.co/Prince-1/orpheus_3b_0.1_ft_16bit)
  • Conversion tool: ONNX Runtime GenAI model builder (https://github.com/microsoft/onnxruntime-genai)

Conversion Steps

Clone the Repository

  1. First, clone the official ONNX Runtime GenAI repository:
git clone https://github.com/microsoft/onnxruntime-genai
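
The builder script needs a Python environment with PyTorch, Transformers, ONNX, and an ONNX Runtime build matching your execution provider. The package set below is an assumption for a CUDA setup; check the onnxruntime-genai repository for the authoritative requirements list.

# Assumed dependency set for a CUDA-based conversion; adjust to the repository's requirements
pip install torch transformers onnx onnxruntime-gpu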

Download the Hugging Face Model

  1. Download the Hugging Face model with the following CLI command:
huggingface-cli download Prince-1/orpheus_3b_0.1_ft_16bit --local-dir main
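
If the huggingface-cli tool is not already available, it is installed as part of the Hugging Face Hub package:

pip install -U "huggingface_hub[cli]"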

Run the Model Builder

  1. Use the model builder script to convert the Orpheus 3B model to ONNX format:
# Set the path to the builder script
script_path="onnxruntime-genai/src/python/py/models/builder.py"

# Run the conversion
python "$script_path" -m "Prince-1/orpheus_3b_0.1_ft_16bit" -i "main" -o "onnx" -p "fp16" -e cuda

The command parameters:

  • -m: The model name (Hugging Face model identifier)
  • -i: Input directory containing the already-downloaded model files (main from the previous step)
  • -o: Output directory for the ONNX model
  • -p: Precision setting (fp16 for half-precision floating point)
  • -e: Execution provider (cuda for NVIDIA GPU acceleration)
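
To confirm that the conversion produced a loadable model, you can run a quick smoke test with the onnxruntime-genai Python bindings against the output directory. This is a minimal sketch, assuming the onnxruntime-genai (or onnxruntime-genai-cuda) Python package is installed and the output directory is onnx as in the command above:

python - <<'EOF'
import onnxruntime_genai as og

# Load the converted model and its tokenizer from the builder's output directory.
model = og.Model("onnx")
tokenizer = og.Tokenizer(model)

# Encode a short prompt to confirm the tokenizer files were exported correctly.
tokens = tokenizer.encode("Hello from the ONNX export.")
print(f"Model and tokenizer loaded; prompt encoded to {len(tokens)} tokens.")
EOF

Note that Orpheus generates audio-codec tokens rather than plain text, so producing an actual waveform also requires the SNAC decoder used by the original model; the smoke test above only checks that the exported model and tokenizer load.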

Model Misuse

Do not use our models for impersonation without consent, misinformation or deception (including fake news or fraudulent calls), or any other illegal or harmful activity. By using this model, you agree to follow all applicable laws and ethical guidelines. We disclaim responsibility for any misuse.
