Also see:
- 24B Instruct GGUF (this model)
- 24B Instruct HF
- 24B Base HF
GGUF quants for Mistral Small 3.1 Instruct 24B, converted from the Mistral format and compatible with llama.cpp (and nearly any other llama.cpp-based app).
Use the Mistral chat template.
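A minimal sketch of one way to run these quants with llama-cpp-python and the Hugging Face Hub. The quant filename, context size, and GPU offload settings below are placeholders/assumptions, not part of this repo's documentation; check the repo's file list for the actual `.gguf` filenames.

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files from this repo.
# The filename below is hypothetical -- pick a real .gguf from the file list.
model_path = hf_hub_download(
    repo_id="mrfakename/mistral-small-3.1-24b-instruct-2503-gguf",
    filename="mistral-small-3.1-24b-instruct-2503-Q4_K_M.gguf",  # hypothetical
)

llm = Llama(
    model_path=model_path,
    n_ctx=8192,                       # assumption: adjust to your available memory
    n_gpu_layers=-1,                  # assumption: offload all layers if built with GPU support
    chat_format="mistral-instruct",   # apply the Mistral chat template
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```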
Only the text component has been converted to GGUF; this will not work as a vision model.
No imatrix quants yet, sorry!