language:
- en
---

# The priestess of Athena.

Fine-tuned on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). [My team and I](https://huggingface.co/ConvexAI) reformatted many different datasets and included a small amount of private data to see how much we could improve Mistral.

I spoke to it personally for about an hour, and I believe we need to work on our format for the private dataset a bit more, but other than that, it turned out great. I will be uploading it for open LLM evaluations today.
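
The card does not document the prompt format the datasets were reformatted into, so the sketch below is purely illustrative: it assumes a simple Alpaca-style instruction template, and the field names (`instruction`, `response`) and template text are hypothetical, not the team's actual format.

```python
# Hypothetical sketch of the "reformat many datasets into one shared format"
# step described above. The template and field names are assumptions; the
# card does not say which format was actually used.
def to_shared_format(example: dict) -> dict:
    prompt = (
        "### Instruction:\n"
        f"{example['instruction']}\n\n"
        "### Response:\n"
    )
    return {"text": prompt + example["response"]}

# Usage: map this over each source dataset so every row becomes one
# plain-text training example in the same layout.
row = {"instruction": "Who tends the temple of Athena?",
       "response": "The priestess of Athena."}
print(to_shared_format(row)["text"])
```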

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Q2_K Tiny](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 2.7 GB | 4.7 GB | smallest, significant quality loss - not recommended for most purposes |
| [Q3_K_M](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 3.52 GB | 5.52 GB | very small, high quality loss |
| [Q4_0](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 4.11 GB | 6.11 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Q4_K_M](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | 4.37 GB | 6.37 GB | medium, balanced quality - recommended |
| [Q5_0](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 5 GB | 7 GB | legacy; large, balanced quality |
| [Q5_K_M](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | 5.13 GB | 7.13 GB | large, balanced quality - recommended |
| [Q6 XL](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 5.94 GB | 7.94 GB | very large, extremely low quality loss |
| [Q8 XXL](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 7.7 GB | 9.7 GB | very large, extremely low quality loss - not recommended |
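
A quick way to try one of these files is llama-cpp-python. This is a minimal sketch, not an official snippet from the card: the repo and file names come from the table above, while the context size and prompt are assumptions to adjust for your hardware. Note that each Max RAM figure in the table is roughly the file size plus 2 GB, and offloading layers to a GPU reduces it.

```python
# Minimal sketch (not from the card): download the recommended Q4_K_M quant
# and run a prompt with llama-cpp-python. Requires:
#   pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# File name taken from the "Provided files" table above.
model_path = hf_hub_download(
    repo_id="Kquant03/Medusa-7B-GGUF",
    filename="ggml-model-q4_k_m.gguf",
)

# n_ctx and n_gpu_layers are assumptions; n_gpu_layers=0 keeps everything
# on the CPU (~6.37 GB RAM per the table), raising it offloads layers to a GPU.
llm = Llama(model_path=model_path, n_ctx=2048, n_gpu_layers=0)

output = llm("Who is the priestess of Athena?", max_tokens=128)
print(output["choices"][0]["text"])
```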