isaiahbjork committed
Commit af161b1 · verified · 1 Parent(s): 0a20b1e

Update README.md

Files changed (1)
  1. README.md +64 -31
README.md CHANGED
@@ -10,46 +10,79 @@ tags:
  - gguf-my-repo
  ---

- # isaiahbjork/orpheus-3b-0.1-ft-Q4_K_M-GGUF
- This model was converted to GGUF format from [`canopylabs/orpheus-3b-0.1-ft`](https://huggingface.co/canopylabs/orpheus-3b-0.1-ft) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/canopylabs/orpheus-3b-0.1-ft) for more details on the model.

- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux):

- ```bash
- brew install llama.cpp
- ```
- Invoke the llama.cpp server or the CLI.

- ### CLI:
- ```bash
- llama-cli --hf-repo isaiahbjork/orpheus-3b-0.1-ft-Q4_K_M-GGUF --hf-file orpheus-3b-0.1-ft-q4_k_m.gguf -p "The meaning to life and the universe is"
- ```

- ### Server:
- ```bash
- llama-server --hf-repo isaiahbjork/orpheus-3b-0.1-ft-Q4_K_M-GGUF --hf-file orpheus-3b-0.1-ft-q4_k_m.gguf -c 2048
- ```

- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```

- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
  ```
- cd llama.cpp && LLAMA_CURL=1 make
  ```

- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo isaiahbjork/orpheus-3b-0.1-ft-Q4_K_M-GGUF --hf-file orpheus-3b-0.1-ft-q4_k_m.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo isaiahbjork/orpheus-3b-0.1-ft-Q4_K_M-GGUF --hf-file orpheus-3b-0.1-ft-q4_k_m.gguf -c 2048
  ```
+ # Orpheus-TTS-Local

+ A lightweight client for running [Orpheus TTS](https://huggingface.co/canopylabs/orpheus-3b-0.1-ft) locally using the LM Studio API.

+ [GitHub Repo](https://github.com/isaiahbjork/orpheus-tts-local)

+ ## Features

+ - 🎧 High-quality Text-to-Speech using the Orpheus TTS model
+ - 💻 Completely local - no cloud API keys needed
+ - 🔊 Multiple voice options (tara, leah, jess, leo, dan, mia, zac, zoe)
+ - 💾 Save audio to WAV files

+ ## Quick Setup

+ 1. Install [LM Studio](https://lmstudio.ai/)
+ 2. Install the [Orpheus TTS model (orpheus-3b-0.1-ft-q4_k_m.gguf)](https://huggingface.co/isaiahbjork/orpheus-3b-0.1-ft-Q4_K_M-GGUF) in LM Studio
+ 3. Load the Orpheus model in LM Studio
+ 4. Start the local server in LM Studio (default: http://127.0.0.1:1234) - see the quick check after this list
+ 5. Install dependencies:
+ ```
+ python3 -m venv venv
+ source venv/bin/activate
+ pip install -r requirements.txt
+ ```
+ 6. Run the script:
+ ```
+ python gguf_orpheus.py --text "Hello, this is a test" --voice tara
+ ```
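Before moving on, you can confirm that the server from step 4 is reachable. This is an optional sanity check and assumes LM Studio is serving its OpenAI-compatible HTTP API on the default port; once the Orpheus model is loaded it should appear in the response.

```bash
# Query the local LM Studio server for the models it can serve.
curl http://127.0.0.1:1234/v1/models
```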

+ ## Usage

  ```
+ python gguf_orpheus.py --text "Your text here" --voice tara --output "output.wav"
  ```
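Under the hood, the script sends generation requests to LM Studio's local server rather than loading the GGUF itself. The request below is a rough sketch of that transport only, assuming the OpenAI-compatible completions endpoint: the model name, prompt format, and token budget are placeholders, and the raw response contains Orpheus audio tokens that gguf_orpheus.py decodes into a WAV file.

```bash
# Illustrative request only - the exact payload used by gguf_orpheus.py may differ.
curl http://127.0.0.1:1234/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "orpheus-3b-0.1-ft",
        "prompt": "tara: Your text here",
        "temperature": 0.6,
        "top_p": 0.9,
        "max_tokens": 1200
      }'
```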

+ ### Options
+
+ - `--text`: The text to convert to speech
+ - `--voice`: The voice to use (default: tara)
+ - `--output`: Output WAV file path (default: auto-generated filename)
+ - `--list-voices`: Show available voices
+ - `--temperature`: Temperature for generation (default: 0.6)
+ - `--top_p`: Top-p sampling parameter (default: 0.9)
+ - `--repetition_penalty`: Repetition penalty (default: 1.1)
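For example, the options above can be combined in a single run (the text, voice, and output filename here are arbitrary):

```bash
# Show the available voices, then synthesize with tuned sampling settings.
python gguf_orpheus.py --list-voices
python gguf_orpheus.py --text "Welcome to the demo." --voice leo \
  --temperature 0.5 --top_p 0.9 --repetition_penalty 1.1 --output demo.wav
```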
+
+ ## Available Voices
+
+ - tara - Best overall voice for general use (default)
+ - leah
+ - jess
+ - leo
+ - dan
+ - mia
+ - zac
+ - zoe
+
+ ## Emotion
+ You can add emotion to the speech by including the following tags in your text:
+ ```xml
+ <giggle>
+ <laugh>
+ <chuckle>
+ <sigh>
+ <cough>
+ <sniffle>
+ <groan>
+ <yawn>
+ <gasp>
  ```
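For instance, a tag can be placed directly inside the `--text` string (the sentence below is just an example):

```bash
# The <sigh> tag is rendered as an emotive sound rather than read out literally.
python gguf_orpheus.py --text "Well <sigh> I suppose we should get started." --voice tara
```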
+
+ ## License
+
+ Apache 2.0