Upload README.md with huggingface_hub
README.md CHANGED
@@ -64,23 +64,23 @@ inference:
   --prompt-template chatml \
   --ctx-size 32000
 ```
-
+
 ## Quantized GGUF Models
 
 | Name | Quant method | Bits | Size | Use case |
 | ---- | ---- | ---- | ---- | ----- |
-| [Bielik-4.5B-v3.0-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q2_K.gguf) | Q2_K | 2 |
-| [Bielik-4.5B-v3.0-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 |
-| [Bielik-4.5B-v3.0-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 |
-| [Bielik-4.5B-v3.0-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 |
-| [Bielik-4.5B-v3.0-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q4_0.gguf) | Q4_0 | 4 |
-| [Bielik-4.5B-v3.0-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 |
-| [Bielik-4.5B-v3.0-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 |
-| [Bielik-4.5B-v3.0-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q5_0.gguf) | Q5_0 | 5 |
-| [Bielik-4.5B-v3.0-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 |
-| [Bielik-4.5B-v3.0-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 |
-| [Bielik-4.5B-v3.0-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q6_K.gguf) | Q6_K | 6 |
-| [Bielik-4.5B-v3.0-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q8_0.gguf) | Q8_0 | 8 |
-| [Bielik-4.5B-v3.0-Instruct-f16.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-f16.gguf) | f16 | 16 |
+| [Bielik-4.5B-v3.0-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q2_K.gguf) | Q2_K | 2 | 1.77 GB| smallest, significant quality loss - not recommended for most purposes |
+| [Bielik-4.5B-v3.0-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 2.50 GB| small, substantial quality loss |
+| [Bielik-4.5B-v3.0-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 2.30 GB| very small, high quality loss |
+| [Bielik-4.5B-v3.0-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 2.08 GB| very small, high quality loss |
+| [Bielik-4.5B-v3.0-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q4_0.gguf) | Q4_0 | 4 | 2.70 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
+| [Bielik-4.5B-v3.0-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 2.88 GB| medium, balanced quality - recommended |
+| [Bielik-4.5B-v3.0-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 2.72 GB| small, greater quality loss |
+| [Bielik-4.5B-v3.0-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q5_0.gguf) | Q5_0 | 5 | 3.29 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
+| [Bielik-4.5B-v3.0-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 3.38 GB| large, very low quality loss - recommended |
+| [Bielik-4.5B-v3.0-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 3.29 GB| large, low quality loss - recommended |
+| [Bielik-4.5B-v3.0-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q6_K.gguf) | Q6_K | 6 | 3.91 GB| very large, extremely low quality loss |
+| [Bielik-4.5B-v3.0-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-Q8_0.gguf) | Q8_0 | 8 | 5.06 GB| very large, extremely low quality loss - not recommended |
+| [Bielik-4.5B-v3.0-Instruct-f16.gguf](https://huggingface.co/second-state/Bielik-4.5B-v3.0-Instruct-GGUF/blob/main/Bielik-4.5B-v3.0-Instruct-f16.gguf) | f16 | 16 | 9.52 GB| |
 
 *Quantized with llama.cpp b5201*
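
The two flags kept as context at the top of the hunk (`--prompt-template chatml`, `--ctx-size 32000`) are the tail of a LlamaEdge run command. A minimal sketch of the full invocation, assuming WasmEdge with the WASI-NN GGML plugin and the stock `llama-api-server.wasm` from LlamaEdge; the Q5_K_M file named here is illustrative, not part of this commit:

```bash
# Sketch only: the model file and server wasm are assumptions, not from this commit.
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:Bielik-4.5B-v3.0-Instruct-Q5_K_M.gguf \
  llama-api-server.wasm \
  --prompt-template chatml \
  --ctx-size 32000
```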
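
The commit itself was pushed with `huggingface_hub`, and the same library can fetch any single file from the table above. A sketch using its `huggingface-cli` front end; Q4_K_M is picked only because the table marks it recommended:

```bash
# Download one quant into the current directory; any filename from the table works.
huggingface-cli download second-state/Bielik-4.5B-v3.0-Instruct-GGUF \
  Bielik-4.5B-v3.0-Instruct-Q4_K_M.gguf \
  --local-dir .
```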