---
license: apache-2.0
base_model: Qwen/Qwen2.5-VL-32B-Instruct
base_model_relation: quantized
tags:
- Qwen
- Qwen2.5
- GGUF
- quantized
- 6-bit
---

## Llama.cpp hybrid layer quantization of Qwen2.5-VL-32B-Instruct by Alibaba

Original model: https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct

The hybrid quant employs different quantization levels on a per-layer basis to achieve both high performance and a small file size at the same time. The quants employed are all K-quants, which avoids the slow processing of IQ quants on CPUs and older GPUs. For this file the layer quants are as follows:

```
LAYER_TYPES='[
   [0 ,"Q4_K_M"],[1 ,"Q4_K_S"],[2 ,"Q3_K_M"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
   [8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
   [16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_M"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_M"],[23,"Q3_K_M"],
   [24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q3_K_L"],[29,"Q3_K_L"],[30,"Q3_K_L"],[31,"Q3_K_L"],
   [32,"Q3_K_L"],[33,"Q3_K_L"],[34,"Q3_K_L"],[35,"Q3_K_L"],[36,"Q3_K_L"],[37,"Q3_K_L"],[38,"Q3_K_L"],[39,"Q3_K_L"],
   [40,"Q4_K_S"],[41,"Q3_K_L"],[42,"Q4_K_S"],[43,"Q3_K_L"],[44,"Q4_K_S"],[45,"Q3_K_L"],[46,"Q4_K_S"],[47,"Q3_K_L"],
   [48,"Q4_K_S"],[49,"Q4_K_S"],[50,"Q4_K_S"],[51,"Q4_K_S"],[52,"Q4_K_M"],[53,"Q4_K_M"],[54,"Q4_K_M"],[55,"Q4_K_M"],
   [56,"Q4_K_M"],[57,"Q4_K_M"],[58,"Q4_K_M"],[59,"Q4_K_M"],[60,"Q4_K_M"],[61,"Q5_K_S"],[62,"Q5_K_M"],[63,"Q6_K" ]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K"
```

Comparison:

Quant | Size (bytes) | PPL | Comment
---------|---------|------|-----------
IQ4_XS | 17.9e9 | 6.4 | IQ4_XS with default embedding and output
Q4_K_H | 18e9 | 6.15 | hybrid quant with Q4_K embedding and Q6_K output

Usage:

Qwen2.5-VL-32B-Instruct is a vision-capable model. Used together with its multimodal projector layers, it can process combined image and text inputs and generate text outputs. The mmproj file is made available in this repository. To test vision mode, follow the docs in the mtmd README in the tools directory of the llama.cpp source tree, https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md; a minimal example command is also sketched at the end of this card.

Benchmarks:

A full set of vision benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm

## Download the files below:

| Link | Type | Size/e9 B | Notes |
|------|------|-----------|-------|
| [Qwen2.5-VL-32B-Instruct.Q4_K_H.gguf](https://huggingface.co/steampunque/Qwen2.5-VL-32B-Instruct-Hybrid-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.Q4_K_H.gguf) | Q4_K_H | 17.9e9 B | ~IQ4_XS size, better performance |
| [Qwen2.5-VL-32B-Instruct.mmproj.gguf](https://huggingface.co/steampunque/Qwen2.5-VL-32B-Instruct-Hybrid-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.mmproj.gguf) | mmproj | 1.38e9 B | multimodal projector |

A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
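
Example:

A minimal vision-mode smoke test is sketched below. It assumes a llama.cpp build that includes the llama-mtmd-cli tool on PATH, both GGUF files downloaded to the current directory, and a placeholder image path (test.jpg); adjust paths and sampling options to your setup.

```
# Minimal vision-mode sketch (not a definitive invocation). Assumes
# llama-mtmd-cli from a recent llama.cpp build; test.jpg is a placeholder.
llama-mtmd-cli \
    -m Qwen2.5-VL-32B-Instruct.Q4_K_H.gguf \
    --mmproj Qwen2.5-VL-32B-Instruct.mmproj.gguf \
    --image test.jpg \
    -p "Describe this image."
```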