Update README.md (#9), opened by MaziyarPanahi

README.md CHANGED
@@ -130,6 +130,13 @@ This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/m
 It achieves the following results on the evaluation set:
 - Loss: 1.0075
 
+## Quantized Models
+
+> I love how GGUF democratizes the use of Large Language Models (LLMs) on commodity hardware, more specifically, personal computers without any accelerated hardware. Because of this, I am committed to converting and quantizing any models I fine-tune to make them accessible to everyone!
+
+GGUF (2/3/4/5/6/8 bits): [MaziyarPanahi/phi-2-logical-sft-GGUF](https://huggingface.co/MaziyarPanahi/phi-2-logical-sft-GGUF)
+
+
 ## Prompt Template
 
 ```