---
base_model: Qwen/Qwen2.5-0.5B-Instruct
inference: false
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: gguf
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- chat
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---

# Qwen2.5-0.5B-Instruct-IMat-GGUF
_Llama.cpp imatrix quantization of Qwen/Qwen2.5-0.5B-Instruct_

Original Model: [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)
Original dtype: `BF16` (`bfloat16`)
Quantized with: llama.cpp [b3785](https://github.com/ggerganov/llama.cpp/releases/tag/b3785)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
    - [IMatrix](#imatrix)
    - [Common Quants](#common-quants)
    - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
    - [Simple chat template](#simple-chat-template)
    - [Chat template with system prompt](#chat-template-with-system-prompt)
    - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
    - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
    - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/imatrix.dat)

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Qwen2.5-0.5B-Instruct.Q8_0.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q8_0.gguf) | Q8_0 | 531.07MB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q6_K.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q6_K.gguf) | Q6_K | 505.74MB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q4_K.gguf) | Q4_K | 397.81MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q3_K.gguf) | Q3_K | 355.47MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q2_K.gguf) | Q2_K | 338.61MB | ✅ Available | 🟢 IMatrix | 📦 No |


### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Qwen2.5-0.5B-Instruct.BF16.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.BF16.gguf) | BF16 | 994.16MB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2.5-0.5B-Instruct.FP16.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.FP16.gguf) | F16 | 994.16MB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q8_0.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q8_0.gguf) | Q8_0 | 531.07MB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q6_K.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q6_K.gguf) | Q6_K | 505.74MB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q5_K.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q5_K.gguf) | Q5_K | 420.09MB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q5_K_S.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q5_K_S.gguf) | Q5_K_S | 412.71MB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q4_K.gguf) | Q4_K | 397.81MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q4_K_S.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q4_K_S.gguf) | Q4_K_S | 385.47MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.IQ4_NL.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.IQ4_NL.gguf) | IQ4_NL | 352.67MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.IQ4_XS.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.IQ4_XS.gguf) | IQ4_XS | 349.40MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q3_K.gguf) | Q3_K | 355.47MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q3_K_L.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q3_K_L.gguf) | Q3_K_L | 369.36MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q3_K_S.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q3_K_S.gguf) | Q3_K_S | 338.26MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.IQ3_M.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.IQ3_M.gguf) | IQ3_M | 342.75MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.IQ3_S.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.IQ3_S.gguf) | IQ3_S | 338.61MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.IQ3_XS.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.IQ3_XS.gguf) | IQ3_XS | 338.61MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.IQ3_XXS.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.IQ3_XXS.gguf) | IQ3_XXS | 333.70MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q2_K.gguf) | Q2_K | 338.61MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.Q2_K_S.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.Q2_K_S.gguf) | Q2_K_S | 331.05MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.IQ2_M.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.IQ2_M.gguf) | IQ2_M | 328.60MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.IQ2_S.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.IQ2_S.gguf) | IQ2_S | 325.74MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.IQ2_XS.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.IQ2_XS.gguf) | IQ2_XS | 324.41MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.IQ2_XXS.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.IQ2_XXS.gguf) | IQ2_XXS | 321.55MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.IQ1_M.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.IQ1_M.gguf) | IQ1_M | 317.97MB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Qwen2.5-0.5B-Instruct.IQ1_S.gguf](https://huggingface.co/legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2.5-0.5B-Instruct.IQ1_S.gguf) | IQ1_S | 315.83MB | ✅ Available | 🟢 IMatrix | 📦 No |


## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF --include "Qwen2.5-0.5B-Instruct.Q8_0.gguf" --local-dir ./
```
If the model file is large, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/Qwen2.5-0.5B-Instruct-IMat-GGUF --include "Qwen2.5-0.5B-Instruct.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
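
Once a file has been downloaded, a quick sanity check is possible: every GGUF file begins with the 4-byte magic `GGUF`. A minimal shell sketch (the `is_gguf` helper and the example filename are illustrative, not part of any tool):

```shell
# Sanity-check a download: a valid GGUF file starts with the magic bytes "GGUF".
is_gguf() {
  [ "$(head -c 4 "$1")" = "GGUF" ]
}

# Example, assuming the file downloaded above:
# is_gguf Qwen2.5-0.5B-Instruct.Q8_0.gguf && echo "magic OK" || echo "not a GGUF file"
```

This catches truncated or HTML-error-page downloads before you hand the file to llama.cpp.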

---

## Inference

### Simple chat template
```
<|im_start|>system
You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>

```

### Chat template with system prompt
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>

```
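
The template above can be filled in programmatically before being passed to llama.cpp. A minimal shell sketch for a single turn (`make_prompt` is a hypothetical helper, not part of llama.cpp); it ends with an open `<|im_start|>assistant` turn so the model generates the response:

```shell
# Render a single-turn ChatML-style prompt for Qwen2.5 (sketch).
# The trailing "<|im_start|>assistant\n" leaves the turn open for the model to complete.
make_prompt() {
  system="$1"
  user="$2"
  printf '<|im_start|>system\n%s<|im_end|>\n<|im_start|>user\n%s<|im_end|>\n<|im_start|>assistant\n' \
    "$system" "$user"
}

# Example usage:
# llama.cpp/main -m Qwen2.5-0.5B-Instruct.Q8_0.gguf -p "$(make_prompt 'You are a helpful assistant.' 'Hello!')"
```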

### Llama.cpp
```
llama.cpp/main -m Qwen2.5-0.5B-Instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), only the lower quantizations appear to benefit from the imatrix input (as per HellaSwag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Qwen2.5-0.5B-Instruct.Q8_0`)
3. Run `gguf-split --merge Qwen2.5-0.5B-Instruct.Q8_0/Qwen2.5-0.5B-Instruct.Q8_0-00001-of-XXXXX.gguf Qwen2.5-0.5B-Instruct.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split.
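
For scripting, the first chunk can be located by its `-00001-of-` suffix instead of being typed by hand. A small sketch, assuming the chunk naming shown above (`first_chunk` is a hypothetical helper):

```shell
# Find the first chunk of a split GGUF inside a chunks folder (sketch).
first_chunk() {
  ls "$1"/*-00001-of-*.gguf 2>/dev/null | head -n 1
}

# Example usage:
# gguf-split --merge "$(first_chunk Qwen2.5-0.5B-Instruct.Q8_0)" Qwen2.5-0.5B-Instruct.Q8_0.gguf
```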

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!