Text Generation
GGUF
English
Chinese
MOE
Qwen 2.5 MOE
Mixture of Experts
Uncensored
2X1.5B
deepseek
reasoning
thinking
creative
128k context
general usage
problem solving
brainstorming
solve riddles
story generation
plot generation
storytelling
fiction story
story
writing
fiction
Qwen 2.5
mergekit
conversational
Update README.md
README.md
CHANGED
@@ -29,6 +29,8 @@ tags:
 pipeline_tag: text-generation
 ---
 
+(quants uploading, examples to be added)
+
 <H2>Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-gguf</H2>
 
 This is a Qwen2.5 MOE (Mixture of Experts) model comprised of TWO Qwen 2.5 Deepseek (Censored/Normal AND Uncensored) 1.5B models
@@ -107,10 +109,8 @@ SOFTWARE patch (by me) for Silly Tavern (front end to connect to multiple AI app
 
 ---
 
-Example Generation
+<h2>Example Generation:</h2>
 
 Q8_0 Quant, Temp 1.5, rep pen 1.1, topp: .95, minp: .05, topk: 40
 
-NOTE: FOUR generations below.
-
 ---