---
license: apache-2.0
pipeline_tag: text-generation
tags:
- gguf
base_model:
- mrfakename/mistral-small-3.1-24b-instruct-2503-hf
---

**Also see:**

- **24B Instruct GGUF (this model)**
- [24B Instruct HF](https://huggingface.co/mrfakename/mistral-small-3.1-24b-instruct-2503-hf)
- [24B Base HF](https://huggingface.co/mrfakename/mistral-small-3.1-24b-base-2503-hf)

GGUF quants for [Mistral Small 3.1 Instruct 24B](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503), compatible with llama.cpp (and most other llama.cpp-based apps). Use the Mistral chat template.

Only the text component has been converted to GGUF; these quants do not work as a vision model.

No imatrix quants yet, sorry!
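A minimal sketch of running one of these quants with llama.cpp's `llama-cli`; the GGUF filename below is a placeholder for whichever quant file you actually download from this repo:

```shell
# Illustrative invocation only: substitute the quant file you downloaded.
# -cnv starts interactive conversation mode, which applies the chat
# template stored in the GGUF metadata (the Mistral template here);
# -p sets the system prompt.
./llama-cli \
  -m mistral-small-3.1-24b-instruct-2503-Q4_K_M.gguf \
  -cnv \
  -p "You are a helpful assistant."
```

Text-only prompts work as above; passing images will not, since only the text component was converted.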