Update README.md
Added link to GGUF quants that were helpfully provided.
README.md

@@ -10,12 +10,14 @@ license: cc-by-nc-4.0
 ---
 # kukulemon-7B
 
-A merger of two similar models with strong reasoning, hopefully resulting in "dense" encoding of said reasoning, was merged with a model targeting roleplay.
+A merger of two similar Kunoichi models with strong reasoning, hopefully resulting in "dense" encoding of said reasoning, was then merged with a model targeting roleplay.
 
-I've tested with ChatML prompts with temperature=1.1 and minP=0.03. The model itself supports Alpaca format prompts. The model claims a context length of 32K,
+I've tested with ChatML prompts with temperature=1.1 and minP=0.03. The model itself supports Alpaca format prompts. The model claims a context length of 32K, but it seemed to lose coherence after 8K in my informal testing.
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
+You can also download [GGUF quants courtesy of Lewdiculous](https://huggingface.co/Lewdiculous/kukulemon-7B-GGUF-IQ-Imatrix/).
+
 ## Merge Details
 ### Merge Method
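The card says the model was built with mergekit. For orientation only, a config along these lines describes a SLERP merge of two 7B models — the model names, interpolation factor, and dtype below are placeholders, not kukulemon-7B's actual recipe:

```yaml
# Hypothetical mergekit config sketch -- all names and values are
# placeholders, not the actual kukulemon-7B merge recipe.
slices:
  - sources:
      - model: example/model-a-7B      # placeholder
        layer_range: [0, 32]
      - model: example/model-b-7B      # placeholder
        layer_range: [0, 32]
merge_method: slerp
base_model: example/model-a-7B         # placeholder
parameters:
  t: 0.5                               # interpolation factor (placeholder)
dtype: bfloat16
```

Running `mergekit-yaml config.yml ./output-dir` on a file like this produces the merged checkpoint.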
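The card above reports testing with ChatML prompts (temperature=1.1, minP=0.03 are the sampler settings it names). As a reference for what "ChatML format" means here, a minimal sketch of rendering a chat as a ChatML string — the `chatml_prompt` helper is illustrative, not part of any library or of this model card:

```python
def chatml_prompt(messages):
    """Render a list of {role, content} dicts in ChatML format.

    Each turn is wrapped in <|im_start|>role ... <|im_end|> delimiters,
    and the final assistant turn is left open for the model to complete.
    """
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Open the assistant turn so generation continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

Whatever runtime you use, pass the sampler settings the card mentions (temperature 1.1, min-p 0.03) alongside a prompt built this way.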