# Model Card for RigoChat-7b-v2-GGUF
<div style="display: flex; align-items: flex-start;">
<div style="flex: 1;">
## Introduction
This repo contains the [IIC/RigoChat-7b-v2](https://huggingface.co/IIC/RigoChat-7b-v2) model in GGUF format, with both the original weights and quantized variants at several lower precisions.
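As a quick sanity check that a downloaded file really is GGUF, the fixed preamble can be parsed directly. This is an illustrative sketch based on the GGUF specification (magic bytes `GGUF`, little-endian `uint32` version, then `uint64` tensor and metadata-KV counts), not part of this repo; the file name and counts below are synthetic:

```python
import struct

def read_gguf_header(path):
    """Parse the fixed-size GGUF preamble: magic, version, tensor count, metadata KV count."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        # All integers are little-endian: a uint32 version, then two uint64 counts.
        (version,) = struct.unpack("<I", f.read(4))
        n_tensors, n_kv = struct.unpack("<QQ", f.read(16))
    return {"version": version, "n_tensors": n_tensors, "n_kv": n_kv}

# Demo on a synthetic header; real files continue with metadata (tokenizer,
# chat template, quantization info) and then the tensor data itself.
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<IQQ", 3, 291, 24))

print(read_gguf_header("demo.gguf"))  # → {'version': 3, 'n_tensors': 291, 'n_kv': 24}
```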
The [llama.cpp](https://github.com/ggerganov/llama.cpp) library was used to convert the weights to GGUF format and to perform the quantizations. Specifically, the following steps were used to obtain the model in full precision:
</div>
<div style="margin-left: 20px;">
<img src="./images/RigoChat.jpg" alt="RigoChat">
</div>
</div>
1. To download the weights:
```python