Llama.cpp ultravox-v0_5-llama-3_1-8b by fixie-ai

Original model: https://huggingface.co/fixie-ai/ultravox-v0_5-llama-3_1-8b

This is an F16 mmproj file intended to be used in conjunction with Llama-3.1-8B-Instruct. A high-performance hybrid quant of Llama-3.1-8B-Instruct is available here: https://huggingface.co/steampunque/Llama-3.1-8B-Instruct-Hybrid-GGUF

Usage:

Llama-3.1-8B-Instruct becomes an audio-capable model when paired with the fixie-ai audio multimedia projector tuned to work with it. This enables the model to accept both audio (.mp3 and .wav files) and text inputs and generate text outputs. The mmproj file is available in this repository and the hybrid quant model file is linked above and below. More information about running multimedia models can be found in the mtmd README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md.
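A minimal invocation sketch using the llama-mtmd-cli tool from that README (file names are the ones listed in the table below; flags may differ slightly between llama.cpp versions):

```bash
# Load the hybrid quant model together with the audio projector and
# ask for a transcription of a single audio clip.
llama-mtmd-cli \
  -m Llama-3.1-8B-Instruct.Q6_K_H.gguf \
  --mmproj ultravox-v0_5-llama-3_1-8b.mmproj.gguf \
  --audio chunk_000.wav \
  -p "Transcribe this audio clip verbatim."
```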

Extensive testing shows that as of 7/1/2025 this is the only useful/usable small audio model available for llama.cpp. The fixie 1b model based on Llama 3.2 1b and qwen omni 7b were both found to be extremely unreliable at transcribing audio streams, which is one of the most useful practical applications of an audio model. This model can transcribe audio streams extremely accurately as long as the audio is broken up into chunks of at most ~60s. ffmpeg can easily split the audio into chunks and convert it to the desired format and sample rate. The recommended configuration is a 16000 Hz sample rate, a single audio channel, and .wav format with 16 bits per sample, with the input audio split into 30s to 60s chunks for transcription. This configuration was tested to work well over a wide range of audio clips up to 20 minutes in duration, with extremely accurate transcription for 3 different tested voices.
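For reference, one way to do the conversion and chunking in a single pass is ffmpeg's segment muxer (a sketch; the input and output file names are placeholders):

```bash
# Resample to 16 kHz mono 16-bit PCM WAV and split into 60 s chunks:
# produces chunk_000.wav, chunk_001.wav, ...
ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le \
  -f segment -segment_time 60 chunk_%03d.wav
```

Each chunk can then be transcribed independently and the transcripts concatenated.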

The Deepseek R1 distill of Llama 3.1 8b is also compatible with this mmproj. A hybrid quant of this model is available here: https://huggingface.co/steampunque/Deepseek-R1-Distill-Llama-8B-Hybrid-GGUF

Note that a file named ultravox-v0_5-llama-3_1-8b.Q6_K_H.gguf was previously made available in this repository, but it is deprecated in favor of the Llama-3.1-8B-Instruct naming to avoid confusion. Future ultravox hybrid quants will be released only under the original model name in a separate model repository, to avoid duplicating the same file under a different name here.

Benchmarks:

Audio benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm

Download the files from the table below (a command line download sketch follows the table):

| Link | Type | Size | Notes |
| ---- | ---- | ---- | ----- |
| Llama-3.1-8B-Instruct.Q6_K_H.gguf | Q6_K_H | 6e9 B | 0.6B smaller than Q6_K |
| ultravox-v0_5-llama-3_1-8b.mmproj.gguf | mmproj | 1.38e9 B | multimedia projector |
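One way to fetch both files from the command line is the huggingface-cli download command (a sketch; the repo ids are this repository and the hybrid quant repository linked above):

```bash
# Download the mmproj from this repo and the model from its companion repo.
huggingface-cli download steampunque/ultravox-v0_5-llama-3_1-8b-Hybrid-GGUF \
  ultravox-v0_5-llama-3_1-8b.mmproj.gguf --local-dir .
huggingface-cli download steampunque/Llama-3.1-8B-Instruct-Hybrid-GGUF \
  Llama-3.1-8B-Instruct.Q6_K_H.gguf --local-dir .
```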

A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
