# whisper-large-v3-turbo-german-f16
This model was converted to MLX format from primeline/whisper-large-v3-turbo-german using a custom script for converting safetensors Whisper models. The weights are stored in float16 and work well. Quantized versions: 4-bit and float16.
## Use with MLX
```shell
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/whisper/
pip install -r requirements.txt
```
```python
import mlx_whisper

result = mlx_whisper.transcribe(
    "test.mp3",
    path_or_hf_repo="mlx-community/whisper-large-v3-turbo-german-f16",
)
print(result["text"])
```
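The dict returned by `mlx_whisper.transcribe` follows the original Whisper output format, with a full `text` string plus timestamped `segments`. A common next step is turning those segments into SRT subtitles. Below is a minimal sketch assuming the standard `start`/`end`/`text` segment keys; the sample data is illustrative, not real model output.

```python
def to_srt_timestamp(seconds: float) -> str:
    # SRT uses HH:MM:SS,mmm with a comma before milliseconds.
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


def segments_to_srt(segments) -> str:
    # Each SRT cue: index, time range, text, blank separator line.
    lines = []
    for i, seg in enumerate(segments, start=1):
        lines.append(str(i))
        lines.append(f"{to_srt_timestamp(seg['start'])} --> {to_srt_timestamp(seg['end'])}")
        lines.append(seg["text"].strip())
        lines.append("")
    return "\n".join(lines)


# Illustrative segments in the shape Whisper-style transcribers return.
sample_segments = [
    {"start": 0.0, "end": 2.5, "text": " Guten Tag!"},
    {"start": 2.5, "end": 5.0, "text": " Wie geht es Ihnen?"},
]
print(segments_to_srt(sample_segments))
```

In real use you would pass `result["segments"]` from the transcription call above instead of the sample list.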
## Model tree for mlx-community/whisper-large-v3-turbo-german-f16

Base model: primeline/whisper-large-v3-german