# giangndm/qwen2.5-omni-3b-mlx-8bit
This model, giangndm/qwen2.5-omni-3b-mlx-8bit, was converted to MLX format from Qwen/Qwen2.5-Omni-3B using mlx-lm version 0.24.0.
## Use with mlx ([mlx-lm-omni](https://github.com/giangndm/mlx-lm-omni))
```bash
uv add mlx-lm-omni
# or install directly from the repository
uv add git+https://github.com/giangndm/mlx-lm-omni.git
```
```python
from io import BytesIO
from urllib.request import urlopen

import librosa
from mlx_lm_omni import load, generate

# Load the 8-bit MLX model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("giangndm/qwen2.5-omni-3b-mlx-8bit")

# Fetch an example audio clip and decode it to a 16 kHz waveform
audio_path = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/1272-128104-0000.flac"
audio = librosa.load(BytesIO(urlopen(audio_path).read()), sr=16000)[0]

# Build a chat with the audio attached to the user turn
messages = [
    {"role": "system", "content": "You are a speech recognition model."},
    {"role": "user", "content": "Transcribe the English audio into text without any punctuation marks.", "audio": audio},
]

prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
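The same chat format should work for local recordings. The sketch below is a minimal variation of the example above, not part of the original card: the filename `recording.wav` is hypothetical, and it assumes `generate` returns the decoded text (as it does in mlx-lm).

```python
import librosa
from mlx_lm_omni import load, generate

model, tokenizer = load("giangndm/qwen2.5-omni-3b-mlx-8bit")

# Hypothetical local file; librosa resamples it to 16 kHz mono
audio = librosa.load("recording.wav", sr=16000)[0]

messages = [
    {"role": "system", "content": "You are a speech recognition model."},
    {"role": "user", "content": "Transcribe the English audio into text without any punctuation marks.", "audio": audio},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Assumption: generate returns the generated string, so it can be printed directly
text = generate(model, tokenizer, prompt=prompt, verbose=True)
print(text)
```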