Llama.cpp hybrid layer quantization of Mistral-Small-3.1-24B-Instruct-2503 by mistralai
Original model: https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503
The hybrid quant employs different quantization levels on a per-layer basis to increase the flexibility of trading off performance vs. file size. Fewer parameter bits are used at deep layers and more bits at cortex layers to simultaneously optimize quantized size and model performance. These quants were specifically tuned so that the vision mode of the model produces good outputs with no nonsense words across all the quants on a test case, while reducing file size enough to enable full offload (in non-vision mode) of the smallest two quants on a 12G VRAM GPU. Three quants are available: Q2_K_H, Q3_K_H, and Q4_K_H. The layer quants are as follows:
Q2_K_H:
LAYER_TYPES='[
[0 ,"Q2_K" ],[1 ,"Q2_K_S"],[2 ,"Q2_K" ],[3 ,"Q2_K_S"],[4 ,"Q2_K" ],[5 ,"Q2_K_S"],[6 ,"Q2_K" ],[7 ,"Q2_K_S"],
[8 ,"Q3_K_S"],[9 ,"Q2_K" ],[10,"Q3_K_S"],[11,"Q2_K" ],[12,"Q3_K_S"],[13,"Q2_K" ],[14,"Q3_K_S"],[15,"Q2_K" ],
[16,"Q3_K_S"],[17,"Q2_K" ],[18,"Q3_K_S"],[19,"Q2_K" ],[20,"Q3_K_S"],[21,"Q2_K" ],[22,"Q3_K_S"],[23,"Q2_K" ],
[24,"Q3_K_S"],[25,"Q3_K_S"],[26,"Q3_K_S"],[27,"Q3_K_S"],[28,"Q3_K_S"],[29,"Q3_K_S"],[30,"Q3_K_S"],[31,"Q3_K_S"],
[32,"Q3_K_S"],[33,"Q3_K_S"],[34,"Q3_K_S"],[35,"Q3_K_S"],[36,"Q3_K_M"],[37,"Q3_K_M"],[38,"Q3_K_M"],[39,"Q3_K_M"]
]'
FLAGS="--token-embedding-type Q3_K --output-tensor-type Q5_K"
Q3_K_H:
LAYER_TYPES='[
[0 ,"Q3_K_S"],[1 ,"Q2_K" ],[2 ,"Q3_K_S"],[3 ,"Q2_K" ],[4 ,"Q3_K_S"],[5 ,"Q2_K" ],[6 ,"Q3_K_S"],[7 ,"Q2_K" ],
[8 ,"Q3_K_S"],[9 ,"Q2_K" ],[10,"Q3_K_S"],[11,"Q2_K" ],[12,"Q3_K_S"],[13,"Q2_K" ],[14,"Q3_K_S"],[15,"Q2_K" ],
[16,"Q3_K_S"],[17,"Q3_K_S"],[18,"Q3_K_S"],[19,"Q3_K_S"],[20,"Q3_K_S"],[21,"Q3_K_S"],[22,"Q3_K_S"],[23,"Q3_K_S"],
[24,"Q3_K_S"],[25,"Q3_K_S"],[26,"Q3_K_S"],[27,"Q3_K_S"],[28,"Q3_K_S"],[29,"Q3_K_S"],[30,"Q3_K_S"],[31,"Q3_K_M"],
[32,"Q3_K_M"],[33,"Q3_K_M"],[34,"Q3_K_M"],[35,"Q3_K_M"],[36,"Q3_K_M"],[37,"Q3_K_M"],[38,"Q3_K_L"],[39,"Q4_K_S"]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K"
Q4_K_H:
LAYER_TYPES='[
[0 ,"Q3_K_M"],[1 ,"Q3_K_M"],[2 ,"Q3_K_M"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
[8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
[24,"Q3_K_L"],[25,"Q3_K_M"],[26,"Q3_K_L"],[27,"Q3_K_M"],[28,"Q3_K_L"],[29,"Q3_K_M"],[30,"Q3_K_L"],[31,"Q3_K_M"],
[32,"Q3_K_L"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_S"],[36,"Q4_K_M"],[37,"Q5_K_S"],[38,"Q5_K_M"],[39,"Q6_K"]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K"
These quants were optimized for both good reasoning and vision performance.
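For reference, below is a minimal sketch of how one of the per-layer maps above could be applied when generating a quant. Per-layer type overrides are not a standard option of mainline llama-quantize, so this assumes a build patched to read the LAYER_TYPES environment variable shown above; the --token-embedding-type and --output-tensor-type flags in FLAGS are standard llama-quantize options, while the input file name and fallback base type are placeholders.

```bash
# Hypothetical invocation (assumption: a llama-quantize build patched to honor a
# LAYER_TYPES environment variable for per-layer overrides; not in mainline llama.cpp).
# Set LAYER_TYPES to one of the per-layer maps listed above, e.g. the Q4_K_H map.
export LAYER_TYPES
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K"

./llama-quantize $FLAGS \
    Mistral-Small-3.1-24B-Instruct-2503.BF16.gguf \
    Mistral-Small-3.1-24B-Instruct-2503.Q4_K_H.gguf \
    Q4_K_S   # fallback base type; individual layers follow LAYER_TYPES in the patched build
```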
Comparison:
Quant | Size | PPL | Comment |
---|---|---|---|
Q2_K | 8.89e9 | 6.62 | not tested, most likely unusable |
Q2_K_H | 9.8e9 | 5.96 | optimized for good performance in vision mode |
Q3_K_H | 10.5e9 | 5.82 | slightly better than Q2_K_H |
Q3_K_M | 11.5e9 | 5.58 | not tested, should work well |
Q4_K_H | 12.5e9 | 5.49 | slightly smaller than IQ4_XS, similar performance |
IQ4_XS | 12.9e9 | 5.38 | not tested, should work well |
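PPL numbers like those above can in principle be reproduced with the stock llama-perplexity tool; a minimal sketch follows. The exact test corpus and settings used for the figures in this card are not stated here, so the file name and offload setting are placeholders.

```bash
# Sketch of a perplexity run with stock llama.cpp; test file and -ngl are placeholders,
# not necessarily what was used to produce the table above.
./llama-perplexity \
    -m Mistral-Small-3.1-24B-Instruct-2503.Q4_K_H.gguf \
    -f wiki.test.raw \
    -ngl 32
```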
Usage:
This is a vision capable model. It can be used together with its multimodal projector (mmproj) layers to process image and text inputs and generate text outputs. The mmproj file is made available in this repository. To test vision mode, follow the docs in the mtmd README in the tools directory of the source tree https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md . Use of the best available quant (Q4_K_H) is recommended to maximize the accuracy of vision mode. To run it on a 12G VRAM GPU use --ngl 32; generation speed is still quite good with partial offload.
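As a concrete starting point, here is a hedged example of a vision-mode run with the llama-mtmd-cli tool described in that README; the image path and prompt are placeholders.

```bash
# Vision-mode example using llama.cpp's mtmd CLI; image and prompt are placeholders.
./llama-mtmd-cli \
    -m Mistral-Small-3.1-24B-Instruct-2503.Q4_K_H.gguf \
    --mmproj Mistral-Small-3.1-24B-Instruct-2503.mmproj.gguf \
    --image test_image.jpg \
    -p "Describe this image." \
    -ngl 32
```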
Benchmarks:
A full set of benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm
Mistral-Small-3.1-24B-Instruct-2503 compares most closely with gemma-3-27B-it available here: https://huggingface.co/steampunque/gemma-3-27b-it-Hybrid-GGUF . A short summary of some key evals comparing the two models is given here for convenience:
model | gemma-3-27b-it | Mistral-Small-3.1-24B-Instruct-2503 |
---|---|---|
quant | Q4_K_H | Q4_K_H |
alignment | strict | permissive |
TEST | | |
Winogrande | 0.748 | 0.784 |
Lambada | 0.742 | 0.798 |
Hellaswag | 0.802 | 0.899 |
BoolQ | 0.701 | 0.646 |
Jeopardy | 0.830 | 0.740 |
GSM8K | 0.964 | 0.940 |
Apple | 0.850 | 0.820 |
Humaneval | 0.890 | 0.853 |
Download the files from below:
Link | Type | Size | Notes |
---|---|---|---|
Mistral-Small-3.1-24B-Instruct-2503.Q2_K_H.gguf | Q2_K_H | 9.8e9 B | good quality |
Mistral-Small-3.1-24B-Instruct-2503.Q3_K_H.gguf | Q3_K_H | 10.5e9 B | solid quality |
Mistral-Small-3.1-24B-Instruct-2503.Q4_K_H.gguf | Q4_K_H | 12.5e9 B | best quality |
Mistral-Small-3.1-24B-Instruct-2503.mmproj.gguf | mmproj | 0.88e9 B | multimodal projector |
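One way to fetch individual files is with the huggingface-cli downloader; a sketch assuming this repository's name (steampunque/Mistral-Small-3.1-24B-Instruct-2503-Hybrid-GGUF) and the Q4_K_H quant plus the mmproj file:

```bash
# Example download of a single quant and the multimodal projector with huggingface-cli.
huggingface-cli download \
    steampunque/Mistral-Small-3.1-24B-Instruct-2503-Hybrid-GGUF \
    Mistral-Small-3.1-24B-Instruct-2503.Q4_K_H.gguf \
    Mistral-Small-3.1-24B-Instruct-2503.mmproj.gguf \
    --local-dir .
```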
A discussion thread about the hybrid layer quant approach can be found on the llama.cpp git repository.