morriszms committed
Commit ebd11ef · verified · 1 Parent(s): b689d5f

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ QwQ-32B-bf16-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ QwQ-32B-bf16-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ QwQ-32B-bf16-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ QwQ-32B-bf16-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ QwQ-32B-bf16-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ QwQ-32B-bf16-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ QwQ-32B-bf16-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ QwQ-32B-bf16-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ QwQ-32B-bf16-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ QwQ-32B-bf16-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ QwQ-32B-bf16-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ QwQ-32B-bf16-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
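
The new `.gitattributes` entries route every quantized `.gguf` file through Git LFS. As a sketch of how such entries are typically generated (not part of this commit), `git lfs track` appends the matching filter line for you:

```shell
# Assumes git-lfs is installed; run inside the repo root.
git lfs track "QwQ-32B-bf16-*.gguf"
git add .gitattributes
```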
QwQ-32B-bf16-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4d81305bb34a5d12707dd1c25254da25a46567befcadc09c5a3d2996ddb791a6
+ size 12313098880

QwQ-32B-bf16-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:18bdb710fd33757460ff62c6d7c50443d9bdb660adb81747d03e6e57bf2fc902
+ size 17247079040

QwQ-32B-bf16-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f37251c75042ac6a678ee70b611eb9b5fe099daa9170cacd691200681683b4c
+ size 15935048320

QwQ-32B-bf16-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67c6f8075a48bc1b09a3dd24f3b54a13cd43c7e9431b2e80ae046e5296306dfd
+ size 14392330880

QwQ-32B-bf16-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:44374233ca9378ae7492f1b2c8a68a734337430fa60daa6d7665b1b7b7d43595
+ size 18640231040

QwQ-32B-bf16-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:45a174c91c8f5d2921fc046cb6967fa10663a0c7cc5b1fafb92036e2d5a4a007
+ size 19851336320

QwQ-32B-bf16-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15f9472a6c3559d98411fcb0b3665b575efa199bd0de23ef1553c0256980b686
+ size 18784410240

QwQ-32B-bf16-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3226969bffa347740b4110a5cea93ef0fd21d405ceb1d72cd55389422042c82d
+ size 22638254720

QwQ-32B-bf16-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f1a1d97e97132b91f526d21d1f5c5fe20213a417c7a23232f7b71ea728824fb8
+ size 23262157440

QwQ-32B-bf16-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:69551f740615a7399b889509621a4cdfa4f48dc52c6f7885cb9c9d0033a2c1de
+ size 22638254720

QwQ-32B-bf16-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d688b11643c3cd7cdbf5446bbb1714a6d589e14b894e22aad16f537c9346d7ad
+ size 26886154880

QwQ-32B-bf16-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f40f767ba7595710b55c979465a98864e4e9838587a224ef5252c7545def84e
+ size 34820885120
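
Each file above is stored as a Git LFS pointer rather than raw weights: `version` names the pointer spec, `oid` is the SHA-256 of the actual file, and `size` is its length in bytes. As an illustrative sketch (assuming the Q2_K file has already been downloaded locally), the same pointer text can be regenerated for comparison:

```shell
# Prints the spec version, sha256 oid, and size for the local file,
# which should match the Q2_K pointer committed above.
git lfs pointer --file=QwQ-32B-bf16-Q2_K.gguf
```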
README.md ADDED
@@ -0,0 +1,87 @@
+ ---
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
+ language:
+ - en
+ pipeline_tag: text-generation
+ base_model: mlx-community/QwQ-32B-bf16
+ tags:
+ - chat
+ - mlx
+ - TensorBlock
+ - GGUF
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## mlx-community/QwQ-32B-bf16 - GGUF
+
+ This repo contains GGUF format model files for [mlx-community/QwQ-32B-bf16](https://huggingface.co/mlx-community/QwQ-32B-bf16).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
+
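+ To reproduce that environment, here is a minimal sketch (the commit hash is taken from the link above; the CMake steps follow llama.cpp's standard build flow and are not part of this repo):
+
+ ```shell
+ # Check out the cited commit and build llama.cpp from source.
+ git clone https://github.com/ggml-org/llama.cpp
+ cd llama.cpp
+ git checkout be7c3034108473beda214fd1d7c98fd6a7a3bdf5
+ cmake -B build
+ cmake --build build --config Release
+ ```
+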
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+
+ ## Prompt template
+
+ ```
+ <|im_start|>system
+ {system_prompt}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ <think>
+ ```
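+
+ As a hedged illustration (not from the upstream model card), the filled-in template can be passed straight to `llama-cli` from the build above; the model path, question, and token budget here are placeholders:
+
+ ```shell
+ # bash $'...' quoting turns \n into real newlines inside the template.
+ ./build/bin/llama-cli -m ./QwQ-32B-bf16-Q4_K_M.gguf \
+   -p $'<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhy is the sky blue?<|im_end|>\n<|im_start|>assistant\n<think>\n' \
+   -n 512
+ ```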
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [QwQ-32B-bf16-Q2_K.gguf](https://huggingface.co/tensorblock/QwQ-32B-bf16-GGUF/blob/main/QwQ-32B-bf16-Q2_K.gguf) | Q2_K | 12.313 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [QwQ-32B-bf16-Q3_K_S.gguf](https://huggingface.co/tensorblock/QwQ-32B-bf16-GGUF/blob/main/QwQ-32B-bf16-Q3_K_S.gguf) | Q3_K_S | 14.392 GB | very small, high quality loss |
+ | [QwQ-32B-bf16-Q3_K_M.gguf](https://huggingface.co/tensorblock/QwQ-32B-bf16-GGUF/blob/main/QwQ-32B-bf16-Q3_K_M.gguf) | Q3_K_M | 15.935 GB | very small, high quality loss |
+ | [QwQ-32B-bf16-Q3_K_L.gguf](https://huggingface.co/tensorblock/QwQ-32B-bf16-GGUF/blob/main/QwQ-32B-bf16-Q3_K_L.gguf) | Q3_K_L | 17.247 GB | small, substantial quality loss |
+ | [QwQ-32B-bf16-Q4_0.gguf](https://huggingface.co/tensorblock/QwQ-32B-bf16-GGUF/blob/main/QwQ-32B-bf16-Q4_0.gguf) | Q4_0 | 18.640 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [QwQ-32B-bf16-Q4_K_S.gguf](https://huggingface.co/tensorblock/QwQ-32B-bf16-GGUF/blob/main/QwQ-32B-bf16-Q4_K_S.gguf) | Q4_K_S | 18.784 GB | small, greater quality loss |
+ | [QwQ-32B-bf16-Q4_K_M.gguf](https://huggingface.co/tensorblock/QwQ-32B-bf16-GGUF/blob/main/QwQ-32B-bf16-Q4_K_M.gguf) | Q4_K_M | 19.851 GB | medium, balanced quality - recommended |
+ | [QwQ-32B-bf16-Q5_0.gguf](https://huggingface.co/tensorblock/QwQ-32B-bf16-GGUF/blob/main/QwQ-32B-bf16-Q5_0.gguf) | Q5_0 | 22.638 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [QwQ-32B-bf16-Q5_K_S.gguf](https://huggingface.co/tensorblock/QwQ-32B-bf16-GGUF/blob/main/QwQ-32B-bf16-Q5_K_S.gguf) | Q5_K_S | 22.638 GB | large, low quality loss - recommended |
+ | [QwQ-32B-bf16-Q5_K_M.gguf](https://huggingface.co/tensorblock/QwQ-32B-bf16-GGUF/blob/main/QwQ-32B-bf16-Q5_K_M.gguf) | Q5_K_M | 23.262 GB | large, very low quality loss - recommended |
+ | [QwQ-32B-bf16-Q6_K.gguf](https://huggingface.co/tensorblock/QwQ-32B-bf16-GGUF/blob/main/QwQ-32B-bf16-Q6_K.gguf) | Q6_K | 26.886 GB | very large, extremely low quality loss |
+ | [QwQ-32B-bf16-Q8_0.gguf](https://huggingface.co/tensorblock/QwQ-32B-bf16-GGUF/blob/main/QwQ-32B-bf16-Q8_0.gguf) | Q8_0 | 34.821 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/QwQ-32B-bf16-GGUF --include "QwQ-32B-bf16-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
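+
+ Optionally, verify the download against the Git LFS pointer committed in this repo; the expected digest below is the `oid` recorded above for `QwQ-32B-bf16-Q2_K.gguf`:
+
+ ```shell
+ # Should print: 4d81305bb34a5d12707dd1c25254da25a46567befcadc09c5a3d2996ddb791a6
+ sha256sum MY_LOCAL_DIR/QwQ-32B-bf16-Q2_K.gguf
+ ```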
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/QwQ-32B-bf16-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```