mradermacher committed (verified)
Commit: ecd7b9e
Parent: 95cec8a

auto-patch README.md

Files changed (1)
  1. README.md +7 -0
README.md CHANGED
@@ -9,6 +9,8 @@ language:
 - ro
 library_name: transformers
 license: apache-2.0
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 tags:
 - text-generation
@@ -31,6 +33,9 @@ tags:
 static quants of https://huggingface.co/drwlf/Medra
 
 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Medra-GGUF).***
+
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/Medra-i1-GGUF
 ## Usage
 
@@ -44,6 +49,8 @@ more details, including on how to concatenate multi-part files.
 
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/Medra-GGUF/resolve/main/Medra.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.7 | multi-modal supplement |
+| [GGUF](https://huggingface.co/mradermacher/Medra-GGUF/resolve/main/Medra.mmproj-f16.gguf) | mmproj-f16 | 1.0 | multi-modal supplement |
 | [GGUF](https://huggingface.co/mradermacher/Medra-GGUF/resolve/main/Medra.Q2_K.gguf) | Q2_K | 1.8 | |
 | [GGUF](https://huggingface.co/mradermacher/Medra-GGUF/resolve/main/Medra.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
 | [GGUF](https://huggingface.co/mradermacher/Medra-GGUF/resolve/main/Medra.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
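
Not part of the commit itself, but as a minimal sketch of how the files listed in the table above can be fetched programmatically: the repo id and filenames are taken directly from the table, and the only assumption is that the `huggingface_hub` client library is installed.

```python
# Minimal sketch: download files from the quant table above using the
# huggingface_hub client. Repo id and filenames come from the README's
# table; the choice of Q2_K here is just an example (smallest static quant).
from huggingface_hub import hf_hub_download

# Main quantized model (~1.8 GB per the table).
model_path = hf_hub_download(
    repo_id="mradermacher/Medra-GGUF",
    filename="Medra.Q2_K.gguf",
)

# Multi-modal supplement added by this commit; it is not a standalone
# model and is loaded alongside the main GGUF by multimodal-capable runtimes.
mmproj_path = hf_hub_download(
    repo_id="mradermacher/Medra-GGUF",
    filename="Medra.mmproj-Q8_0.gguf",
)

print(model_path, mmproj_path)
```

Since the mmproj rows are marked "multi-modal supplement" in the table, a typical setup downloads one quant plus one mmproj file rather than either on its own.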