Text Generation
Transformers
Safetensors
qwen3_moe
creative
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
science fiction
romance
all genres
story
writing
vivid prose
vivid writing
Mixture of Experts
mixture of experts
64 experts
8 active experts
fiction
roleplaying
bfloat16
rp
qwen3
horror
finetune
thinking
reasoning
Merge
uncensored
abliterated
Not-For-All-Audiences
conversational
Update README.md
README.md
CHANGED
@@ -47,11 +47,11 @@ base_model:
 pipeline_tag: text-generation
 ---
 
-(
+( Quants pending, FOUR examples added )
 
 <B><font color="red">WARNING:</font> MADNESS - UN HINGED and... NSFW. Vivid prose. INTENSE. Visceral Details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun. </B>
 
-<h2>Qwen3-
+<h2>Qwen3-22B-A3B-The-Harley-Quinn-PUDDIN-Abliterated-Uncensored</h2>
 
 <img src="qwen3-harley-quinn-23b-puddin.webp" style="float:right; width:300px; height:300px; padding:10px;">
 
@@ -62,7 +62,7 @@ ABOUT:
 
 A stranger, yet radically different version of Kalmaze's "Qwen/Qwen3-16B-A3B" (that was abliterated by "huihui-ai") with the
 experts pruned to 64 (from 128, the Qwen 3 30B-A3B version) and then I added 19 layers expanding (Brainstorm 20x by DavidAU - info at the
-bottom of this page) the model to
+bottom of this page) the model to 22B total parameters.
 
 The goal: slightly alter the model, to address some odd creative thinking and output choices.
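The README text above mentions pruning the MoE from 128 experts down to 64. At the checkpoint level, one common way to do this is to drop every expert tensor whose index is past the cutoff. The sketch below is purely illustrative, not the recipe actually used for this model: the key pattern follows the usual Qwen3-MoE naming convention (e.g. `model.layers.N.mlp.experts.E.gate_proj.weight`), and the state dict here is a toy stand-in, not a real checkpoint.

```python
import re

def prune_moe_experts(state_dict, keep_experts):
    """Drop all expert tensors with index >= keep_experts.

    Illustrative sketch only: assumes expert weights are keyed as
    "...mlp.experts.<index>...." (Qwen3-MoE style). Router weights
    and non-expert tensors pass through untouched.
    """
    pattern = re.compile(r"^(.*\.experts\.)(\d+)(\..*)$")
    pruned = {}
    for key, tensor in state_dict.items():
        m = pattern.match(key)
        if m and int(m.group(2)) >= keep_experts:
            continue  # expert beyond the cutoff: drop it
        pruned[key] = tensor
    return pruned

# Toy "state dict": 2 layers x 128 experts, one weight key per expert,
# plus one router key per layer (values are placeholders, not tensors).
toy = {
    f"model.layers.{l}.mlp.experts.{e}.gate_proj.weight": None
    for l in range(2)
    for e in range(128)
}
toy["model.layers.0.mlp.gate.weight"] = None  # router, kept as-is here

pruned = prune_moe_experts(toy, keep_experts=64)
```

Note that in a real checkpoint this key filtering is not sufficient on its own: the router's gate weight (shaped `[num_experts, hidden]`) would also need its rows sliced to match, and the config's expert count updated, before the model could be reloaded.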