Update README.md

This repo contains RWKV-6-World-1.6B-GGUF, newly re-quantized with the latest llama.cpp, [b3771](https://github.com/ggerganov/llama.cpp/releases/tag/b3771).

# **Note:**

* The notebook used to convert this model is included; feel free to use it in Colab or Kaggle to quantize future models.
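
If you prefer to run the conversion by hand rather than through the notebook, the usual llama.cpp flow looks roughly like the sketch below. The checkpoint path, output filenames, and the Q4_K_M type are placeholders, not necessarily the exact settings used for this repo; the notebook remains the reference.

```bash
# Convert the Hugging Face checkpoint to GGUF (F16), then quantize it.
# Run from a llama.cpp checkout; paths and the quant type are example values.
python convert_hf_to_gguf.py /path/to/RWKV-6-World-1.6B \
    --outfile rwkv-6-world-1.6b-f16.gguf --outtype f16
./build/bin/llama-quantize rwkv-6-world-1.6b-f16.gguf \
    rwkv-6-world-1.6b-Q4_K_M.gguf Q4_K_M
```
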
## How to run the model
* Get the latest llama.cpp:
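
A minimal sketch, assuming you build from source with CMake and check out the same release tag; the prebuilt binaries on the llama.cpp releases page work as well. The GGUF filename in the last command is a placeholder for whichever quantization you download from this repo.

```bash
# Clone llama.cpp, check out the release used for this quantization, and build.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b3771
cmake -B build
cmake --build build --config Release

# Run the model (replace the GGUF filename with the file you downloaded).
./build/bin/llama-cli -m rwkv-6-world-1.6b-Q4_K_M.gguf -p "Hello"
```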