Purpose

This repository stores TTS.cpp-compatible, GGUF-encoded model files for the Kokoro TTS model.
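To fetch one of these files programmatically, a minimal sketch using the huggingface_hub Python package is shown below; the repo_id matches this repository and the filename is one of the full-precision files named in the next section.

```python
# Minimal download sketch using huggingface_hub (pip install huggingface_hub).
# The filename below is the full-precision espeak variant named in this card;
# swap in another filename from the repository's file listing as needed.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="mmwillet2/Kokoro_GGUF",
    filename="Kokoro_espeak.gguf",
)
print(model_path)  # local path of the cached GGUF file, ready to pass to TTS.cpp
```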

Model Types

Currently there are two model types, each available at five levels of precision. The GGUF model files containing _espeak are configured to expect and use espeak for phonemization, while those containing _no_espeak use TTS.cpp's native phonemization. The GGUF model files with no quantization suffix (i.e. Kokoro_espeak.gguf and Kokoro_no_espeak.gguf) use 32-bit floating point precision only, while the files with Q4, Q5, Q8, and F16 suffixes use Q4_0 quantization, Q5_0 quantization, Q8_0 quantization, and 16-bit floating point precision respectively.
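To confirm which quantization a downloaded file actually uses, one illustrative option (not something this repository requires) is to read the tensor metadata with the gguf Python package; the attribute names below follow that package's GGUFReader and should be treated as a sketch rather than an authoritative recipe.

```python
# Sketch of inspecting a GGUF file with the `gguf` package (pip install gguf).
# Prints each tensor's quantization type (e.g. F32, F16, Q4_0, Q5_0, Q8_0),
# which should match the suffix of the file you downloaded.
from gguf import GGUFReader

reader = GGUFReader("Kokoro_espeak.gguf")
for tensor in reader.tensors[:10]:  # the first few tensors are enough to check
    print(tensor.name, tensor.tensor_type.name, list(tensor.shape))
```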

Kokoro

This page only contains the GGUF-encoded model files of the original Kokoro model. For the original model, please see the hexgrad/Kokoro-82M repository.

Model size: 87.7M parameters
Architecture: kokoro
Base model: hexgrad/Kokoro-82M