---
license: apache-2.0
pipeline_tag: text-generation
tags:
- gguf
base_model:
- mrfakename/mistral-small-3.1-24b-instruct-2503-hf
---
Also see:
- 24B Instruct GGUF (this model)
- 24B Instruct HF
- 24B Base HF
GGUF quants for Mistral Small 3.1 Instruct 24B in the Mistral format, compatible with llama.cpp (and almost any llama.cpp-based app).
Use the Mistral chat template.
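For example, here is a minimal sketch using llama-cpp-python; the GGUF filename below is a placeholder, so substitute whichever quant file you actually download from this repo.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-small-3.1-24b-instruct-2503-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,        # context window; raise if you have the RAM/VRAM
    n_gpu_layers=-1,   # offload all layers to GPU if available (0 for CPU only)
)

# create_chat_completion applies the chat template embedded in the GGUF,
# which for this model is the Mistral chat template mentioned above.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize what a GGUF quant is in one sentence."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```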
Only the text component has been converted to GGUF; this does not work as a vision model.
No imatrix yet, sorry!