mradermacher committed (verified)

Commit 66dbe6a · 1 parent: b7b6a32

auto-patch README.md

Files changed (1): README.md (+6 -1)
README.md CHANGED
```diff
@@ -8,6 +8,8 @@ library_name: transformers
 license: other
 license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
 license_name: falcon-llm-license
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 tags:
 - unsloth
@@ -24,6 +26,9 @@ tags:
 static quants of https://huggingface.co/suayptalha/Maestro-10B
 
 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Maestro-10B-GGUF).***
+
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/Maestro-10B-i1-GGUF
 ## Usage
 
@@ -67,6 +72,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+this work in my free time.
 
 <!-- end -->
```
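For context, here is a sketch of the README front matter after this patch, reconstructed only from the first hunk above; keys outside that hunk are omitted, and the two-space nesting of `readme_rev` under `mradermacher` is inferred from the diff rather than shown verbatim in it.

```yaml
# Reconstructed front matter after the patch (first hunk only; other keys omitted).
library_name: transformers
license: other
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
license_name: falcon-llm-license
mradermacher:
  readme_rev: 1   # key added by this auto-patch
quantized_by: mradermacher
tags:
- unsloth
```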