Update README.md
README.md
CHANGED
@@ -7,6 +7,11 @@ library_name: transformers
 license: apache-2.0
 quantized_by: mradermacher
 ---
+
+> [!Important]
+> Big thanks to [@mradermacher](https://huggingface.co/mradermacher) for helping us build this repository of GGUFs for our [Xwen-72B-Chat](https://huggingface.co/xwen-team/Xwen-72B-Chat)!
+
+
 ## About
 
 <!-- ### quantize_version: 2 -->
@@ -17,7 +22,7 @@ quantized_by: mradermacher
 static quants of https://huggingface.co/xwen-team/Xwen-72B-Chat
 
 <!-- provided-files -->
-weighted/imatrix quants are available at https://huggingface.co/
+weighted/imatrix quants are available at https://huggingface.co/xwen-team/Xwen-72B-Chat-i1-GGUF
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's
@@ -57,8 +62,6 @@ questions you might have and/or if you want some other model quantized.
 
 ## Thanks
 
-I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
-me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+Big thanks to [@mradermacher](https://huggingface.co/mradermacher) for helping us build this repository of GGUFs for our [Xwen-72B-Chat](https://huggingface.co/xwen-team/Xwen-72B-Chat)!
 
 <!-- end -->
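
The README's Usage section defers to TheBloke's guides for working with GGUF files. For readers who want a concrete starting point, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python`; the repo id `xwen-team/Xwen-72B-Chat-GGUF` and the file name `Xwen-72B-Chat.Q4_K_S.gguf` are assumptions for illustration, not taken from this commit, so check the repository's file list for the actual quant names.

```python
# Minimal sketch: download one quant from the GGUF repo and run a chat turn.
# Assumptions (hypothetical, verify against the repo): repo id and file name below.
# Note: a 72B model needs substantial RAM/VRAM even at a ~4-bit quant.
from huggingface_hub import hf_hub_download  # pip install huggingface_hub
from llama_cpp import Llama                  # pip install llama-cpp-python

# Fetch a single GGUF file; returns the local cache path.
model_path = hf_hub_download(
    repo_id="xwen-team/Xwen-72B-Chat-GGUF",   # hypothetical repo id
    filename="Xwen-72B-Chat.Q4_K_S.gguf",     # hypothetical quant file name
)

# Load the model; n_gpu_layers=-1 offloads all layers to the GPU if one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# GGUF files embed the chat template, so the chat-completion API works directly.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Who are you?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```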