mradermacher committed (verified)
Commit fdec18b · parent: 9433ad0

auto-patch README.md
Files changed (1): README.md (+6 −1)
README.md CHANGED
@@ -5,6 +5,8 @@ language:
 library_name: transformers
 license: apache-2.0
 license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 tags:
 - chat
@@ -19,6 +21,9 @@ tags:
 static quants of https://huggingface.co/Qwen/Qwen2.5-32B-Instruct
 
 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2.5-32B-Instruct-GGUF).***
+
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-32B-Instruct-i1-GGUF
 ## Usage
 
@@ -73,6 +78,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+this work in my free time.
 
 <!-- end -->
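The patch only touches README metadata and prose. As a minimal sketch of how the added front-matter key (`mradermacher.readme_rev`) could be read back from the patched README, assuming the `huggingface_hub` and `PyYAML` packages are installed; the GGUF filename in the trailing comment is a hypothetical example, not something stated in this commit:

```python
# Sketch: fetch README.md from the quant repository and parse its YAML
# front-matter, which after this commit includes `mradermacher: readme_rev: 1`.
import yaml
from huggingface_hub import hf_hub_download

REPO_ID = "mradermacher/Qwen2.5-32B-Instruct-GGUF"

# Download README.md and parse the block delimited by the leading "---" markers.
readme_path = hf_hub_download(repo_id=REPO_ID, filename="README.md")
text = open(readme_path, encoding="utf-8").read()
front_matter = yaml.safe_load(text.split("---")[1])
print(front_matter.get("mradermacher"))  # expected: {'readme_rev': 1}

# A static quant file can be fetched the same way; the exact filename must be
# taken from the repo's file list or the linked model page (placeholder here):
# gguf_path = hf_hub_download(repo_id=REPO_ID, filename="Qwen2.5-32B-Instruct.Q4_K_M.gguf")
```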