# Whisper-medium-Malayalam (MLX)
Apple MLX-converted weights for `vrclc/Whisper-medium-Malayalam`, optimized for Apple Silicon.
- Base model: `vrclc/Whisper-medium-Malayalam`
- Format: MLX (`weights.safetensors`, `config.json`)
- Intended runtime: `mlx-whisper` on Apple Silicon (M-series)
## Usage (Python)
```python
import mlx_whisper

result = mlx_whisper.transcribe(
    "/path/to/audio.wav",
    path_or_hf_repo="<this-repo>",
    # Optional decoding controls
    language="ml",        # Malayalam
    task="transcribe",    # or "translate"
    temperature=0.0,
    no_speech_threshold=0.3,
    logprob_threshold=-1.0,
    compression_ratio_threshold=2.4,
)
print(result["text"])
```
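The returned dictionary follows the usual Whisper output layout, so segment-level timestamps are also available. The snippet below is a minimal sketch that assumes `result["segments"]` entries carry `start`, `end`, and `text` keys, as in the upstream Whisper implementations.

```python
# Print segment-level timestamps (assumes the standard Whisper result layout).
for seg in result["segments"]:
    print(f'[{seg["start"]:.2f}s -> {seg["end"]:.2f}s] {seg["text"].strip()}')
```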
## Local HTTP server (FastAPI)
With the server at `whisper/server_mlx.py` from avatar-npm:

```bash
export WHISPER_MODEL=<this-repo-or-local-mlx-path>
export WHISPER_LANGUAGE=ml
python server_mlx.py
# POST /transcribe with form field `file`
```
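Once the server is running, the endpoint can be exercised from Python. The sketch below is an assumption-laden example: it presumes the server listens on `http://localhost:8000` (adjust to whatever host and port `server_mlx.py` actually binds) and simply prints whatever JSON the server returns.

```python
import requests

# Hypothetical host/port; change to match server_mlx.py's configuration.
url = "http://localhost:8000/transcribe"

with open("/path/to/audio.wav", "rb") as f:
    # The server expects the audio under the form field `file`.
    resp = requests.post(url, files={"file": f})

resp.raise_for_status()
print(resp.json())  # response schema is defined by server_mlx.py
```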
## Notes
- This repo contains only the MLX weights and config. Tokenization and audio preprocessing are handled by `mlx-whisper`.
- If you need the original (non-MLX) model, see `vrclc/Whisper-medium-Malayalam`.
## License
The original model’s license applies. See the upstream repository for details.
## Maintainers
- thanveerdev