Update README.md
README.md
CHANGED
@@ -13,9 +13,12 @@ tags:
Ah, Karen, a true peach among grammatical cucumbers! She yearns to rectify the missteps and linguistic tangles that infest your horribly written fiction.
Yet, unlike those ChatGPT kaboodles that morph into self-absorbed, constipated gurus of self-help style, Karen remains steadfastly grounded in wit and wisdom, but respectful of your style.

-She is also absolute joy to chat with, although she may correct grammar in your chats too from time to time
+She is also an absolute joy to chat with, although she may correct the grammar in your chats from time to time.
+(As a certain well-known LLM said, "She is a radiant beacon of amusement.")

-
+She also has a particular soft spot for Llamas.
+
+## Quantized Karen version (quantized by TheBloke)

* [4-bit GPTQ models for GPU inference](https://huggingface.co/FPHam/Karen_theEditor-13B-4bit-128g-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGML)
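For readers who want to try the CPU(+GPU) route, here is a minimal sketch of loading one of the GGML quantizations with llama-cpp-python. The local filename, the `USER:`/`ASSISTANT:` prompt style, and GGML support in your installed llama-cpp-python build (newer releases expect GGUF) are assumptions, not details taken from this README.

```python
# Minimal sketch: CPU inference with a 4-bit GGML quantization of Karen.
# Assumptions: the model file name below is hypothetical (download one of the
# .bin files from TheBloke/Karen_theEditor_13B-GGML), the USER:/ASSISTANT:
# prompt format is assumed, and your llama-cpp-python build must still accept
# GGML files (recent versions only load GGUF).
from llama_cpp import Llama

llm = Llama(
    model_path="Karen_theEditor_13B.ggmlv3.q4_0.bin",  # hypothetical local path
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads to use
)

prompt = (
    "USER: Edit the following text for grammar, keeping my style: "
    "'He dont never listen to nobody.'\n"
    "ASSISTANT:"
)

out = llm(prompt, max_tokens=128, temperature=0.2, stop=["USER:"])
print(out["choices"][0]["text"].strip())
```

The GPTQ repository linked above is the equivalent option for GPU-only inference; loading it requires a GPTQ-capable backend rather than llama.cpp.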