tcpipuk committed (verified) · Commit b682e5f · 1 Parent(s): 140d95a

Upload README.md with huggingface_hub

Files changed (1): README.md (+78 −0)
README.md ADDED
 
---
license: apache-2.0
library_name: gguf
base_model: TheDrummer/Big-Tiger-Gemma-27B-v3
tags:
- bartowski-method
- gguf
- q4_k_l
- q4_k_m
- q4_k_xl
- q4_k_xxl
- quantized
---

# TheDrummer-Big-Tiger-Gemma-27B-v3-GGUF

GGUF quantizations of [TheDrummer/Big-Tiger-Gemma-27B-v3](https://huggingface.co/TheDrummer/Big-Tiger-Gemma-27B-v3) using my own variation of the Bartowski method.

| Quantization | Embeddings/Output Layers | Attention Layers | Feed-Forward Layers | Status |
|--------------|--------------------------|------------------|---------------------|--------|
| Q4_K_M | Q4_K_M | Q4_K_M | Q4_K_M | 🔄 Processing... |
| Q4_K_L | Q6_K | Q6_K | Q4_K_M | ⏳ Planned |
| Q4_K_XL | Q8_0 | Q6_K | Q4_K_M | ⏳ Planned |
| Q4_K_XXL | Q8_0 | Q8_0 | Q4_K_M | ⏳ Planned |
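For illustration, here is a minimal sketch of the kind of `llama-quantize` invocation (from llama.cpp) that produces a mixed-precision file along the lines of the Q4_K_L row above, driven from Python. The file names, imatrix path, and exact recipe are assumptions rather than the actual script used for this repo, and the attention-layer overrides in the table would need additional per-tensor options beyond what is shown here.

```python
# Hypothetical sketch, not the script used for this repo: quantize to Q4_K_M
# while bumping the token embeddings and the output tensor to q6_K, roughly
# matching the Q4_K_L row of the table above (attention overrides not shown).
# Assumes a llama.cpp build that provides `llama-quantize`, an f16 GGUF
# conversion of the base model, and a precomputed importance matrix.
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "--imatrix", "imatrix.dat",            # importance matrix (assumed path)
        "--token-embedding-type", "q6_K",      # embeddings at Q6_K
        "--output-tensor-type", "q6_K",        # output tensor at Q6_K
        "Big-Tiger-Gemma-27B-v3-f16.gguf",     # input GGUF (assumed filename)
        "Big-Tiger-Gemma-27B-v3-Q4_K_L.gguf",  # output GGUF (assumed filename)
        "Q4_K_M",                              # base quantization type
    ],
    check=True,
)
```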
---
## Original Model

Quantization of [TheDrummer/Big-Tiger-Gemma-27B-v3](https://huggingface.co/TheDrummer/Big-Tiger-Gemma-27B-v3) by TheDrummer.
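Once a quant is uploaded, it can be fetched with `huggingface_hub`; a minimal sketch is below. The `repo_id` and `filename` are assumptions based on this card's title and the table above, so adjust them to the files actually listed in the repo.

```python
# Minimal download sketch; repo_id and filename are assumed from this card
# and may differ from the file names actually published in the repo.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="tcpipuk/TheDrummer-Big-Tiger-Gemma-27B-v3-GGUF",
    filename="Big-Tiger-Gemma-27B-v3-Q4_K_M.gguf",
)
print(gguf_path)  # local path to the downloaded GGUF, ready for llama.cpp
```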