---
license: apache-2.0
base_model: Qwen/Qwen2.5-VL-32B-Instruct
base_model_relation: quantized
tags:
- Qwen
- Qwen2.5
- GGUF
- quantized
- 6-bit
---

## Llama.cpp hybrid layer quantization of Qwen2.5-VL-32B-Instruct by Alibaba

Original model: https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct

The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and small file size at the same time. The quants employed are all K-quants, to avoid the slow processing of IQ quants on CPUs and older GPUs. For this file the layer quants are as follows:
```
LAYER_TYPES='[
[0 ,"Q4_K_M"],[1 ,"Q4_K_S"],[2 ,"Q3_K_M"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
[8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_M"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_M"],[23,"Q3_K_M"],
[24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q3_K_L"],[29,"Q3_K_L"],[30,"Q3_K_L"],[31,"Q3_K_L"],
[32,"Q3_K_L"],[33,"Q3_K_L"],[34,"Q3_K_L"],[35,"Q3_K_L"],[36,"Q3_K_L"],[37,"Q3_K_L"],[38,"Q3_K_L"],[39,"Q3_K_L"],
[40,"Q4_K_S"],[41,"Q3_K_L"],[42,"Q4_K_S"],[43,"Q3_K_L"],[44,"Q4_K_S"],[45,"Q3_K_L"],[46,"Q4_K_S"],[47,"Q3_K_L"],
[48,"Q4_K_S"],[49,"Q4_K_S"],[50,"Q4_K_S"],[51,"Q4_K_S"],[52,"Q4_K_M"],[53,"Q4_K_M"],[54,"Q4_K_M"],[55,"Q4_K_M"],
[56,"Q4_K_M"],[57,"Q4_K_M"],[58,"Q4_K_M"],[59,"Q4_K_M"],[60,"Q4_K_M"],[61,"Q5_K_S"],[62,"Q5_K_M"],[63,"Q6_K" ]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K"
```
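
The FLAGS settings map directly onto stock llama-quantize options; the per-layer LAYER_TYPES table is applied by the hybrid-quant tooling described in the discussion linked at the bottom of this card. A minimal sketch of the invocation, assuming that tooling is in place (the input filename and the Q3_K_M base type are placeholders, not the exact command used to produce this file):

```
# --token-embedding-type and --output-tensor-type are stock llama-quantize options;
# the per-layer overrides in LAYER_TYPES need the patched quantizer from the linked discussion
llama-quantize $FLAGS Qwen2.5-VL-32B-Instruct.BF16.gguf \
    Qwen2.5-VL-32B-Instruct.Q4_K_H.gguf Q3_K_M
```
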
Comparison:

| Quant | Size (bytes) | PPL | Comment |
|--------|--------------|------|---------|
| IQ4_XS | 17.9e9 | 6.4 | IQ4_XS with default embedding and output |
| Q4_K_H | 18e9 | 6.15 | Hybrid quant with Q4_K embedding and Q6_K output |
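
For reference, perplexity figures of this kind are typically measured with llama.cpp's llama-perplexity tool. A minimal sketch, assuming a raw text evaluation file (the corpus and settings behind the table above are not specified here):

```
# compute perplexity of the quantized model over a raw text file
# (eval.txt is a placeholder; the actual test corpus is an assumption)
llama-perplexity -m Qwen2.5-VL-32B-Instruct.Q4_K_H.gguf -f eval.txt
```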

Usage:

Qwen2.5-VL-32B-Instruct is a vision-capable model. Used together with its multimedia projector layers, it can process image and text inputs and generate text outputs. The mmproj file is made available in this repository. To test vision mode, follow the docs in the mtmd README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md
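
As a quick smoke test of vision mode, an invocation along these lines should work with a recent llama.cpp build (the image path and prompt are placeholders):

```
# describe an image using the quantized model plus its multimedia projector
llama-mtmd-cli -m Qwen2.5-VL-32B-Instruct.Q4_K_H.gguf \
    --mmproj Qwen2.5-VL-32B-Instruct.mmproj.gguf \
    --image example.jpg -p "Describe this image."
```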

Benchmarks:

A full set of vision benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm

## Download the files from below:
| Link | Type | Size/e9 B | Notes |
|------|------|-----------|-------|
| [Qwen2.5-VL-32B-Instruct.Q4_K_H.gguf](https://huggingface.co/steampunque/Qwen2.5-VL-32B-Instruct-Hybrid-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.Q4_K_H.gguf) | Q4_K_H | 17.9e9 B | ~IQ4_XS size, better performance |
| [Qwen2.5-VL-32B-Instruct.mmproj.gguf](https://huggingface.co/steampunque/Qwen2.5-VL-32B-Instruct-Hybrid-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.mmproj.gguf) | mmproj | 1.38e9 B | multimedia projector |

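The files can also be fetched from the command line with the Hugging Face CLI; a sketch, assuming huggingface_hub is installed:

```
# download the model and its projector into the current directory
huggingface-cli download steampunque/Qwen2.5-VL-32B-Instruct-Hybrid-GGUF \
    Qwen2.5-VL-32B-Instruct.Q4_K_H.gguf Qwen2.5-VL-32B-Instruct.mmproj.gguf \
    --local-dir .
```
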
A discussion thread about the hybrid layer quant approach can be found in the llama.cpp GitHub repository:

https://github.com/ggml-org/llama.cpp/discussions/13040