mradermacher committed (verified)
Commit d3f4559 · 1 parent: 0c5a45e

auto-patch README.md

Files changed (1): README.md (+6 −1)
README.md CHANGED

```diff
@@ -16,6 +16,8 @@ language:
 - su
 library_name: transformers
 license: gemma
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 ---
 ## About
@@ -28,6 +30,9 @@ quantized_by: mradermacher
 static quants of https://huggingface.co/aisingapore/Gemma-SEA-LION-v3-9B-IT
 
 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Gemma-SEA-LION-v3-9B-IT-GGUF).***
+
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/Gemma-SEA-LION-v3-9B-IT-i1-GGUF
 ## Usage
 
@@ -71,6 +76,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+this work in my free time.
 
 <!-- end -->
```
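The first hunk adds a `mradermacher: readme_rev: 1` key to the model card's YAML front matter (the block Hugging Face delimits with `---` fences at the top of README.md). As a minimal stdlib-only sketch of what reads such a field back — the `sample` text below is an abridged reconstruction from the diff, not the full card:

```python
def read_front_matter(readme: str) -> str:
    """Return the YAML front-matter block between the leading '---' fences."""
    lines = readme.splitlines()
    if not lines or lines[0] != "---":
        return ""  # no front matter present
    end = lines.index("---", 1)  # closing fence
    return "\n".join(lines[1:end])


# Abridged front matter as it looks after this patch (fields taken from the diff).
sample = """---
library_name: transformers
license: gemma
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---
## About
"""

fm = read_front_matter(sample)
print("readme_rev" in fm)  # True once the patch is applied
```

A real consumer would hand the returned block to a YAML parser; the plain-string extraction here just shows where the new `readme_rev` key lives.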