Text Generation
Transformers
GGUF
creative
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
science fiction
romance
all genres
story
writing
vivid prose
vivid writing
Mixture of Experts
mixture of experts
64 experts
8 active experts
fiction
roleplaying
bfloat16
rp
qwen3
horror
finetune
thinking
reasoning
qwen3_moe
Merge
uncensored
abliterated
Not-For-All-Audiences
llama-cpp
gguf-my-repo
conversational
This model was converted to GGUF format from [`DavidAU/Qwen3-22B-A3B-The-Harley-Quinn`](https://huggingface.co/DavidAU/Qwen3-22B-A3B-The-Harley-Quinn) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/DavidAU/Qwen3-22B-A3B-The-Harley-Quinn) for more details on the model.

---

A stranger, yet radically different, version of Kalmaze's "Qwen/Qwen3-16B-A3B", with the experts pruned to 64 (from 128 in the Qwen 3 30B-A3B version) and 19 layers added via Brainstorm 20x by DavidAU (info at the bottom of this page), expanding the model to 22B total parameters.

The goal: slightly alter the model to address some odd creative thinking and output choices.

Then... Harley Quinn showed up, and it was a party!

A wild, out of control (sometimes) but never boring party.

Please note that the modifications affect the entire model's operation; roughly, I adjusted the model to think a little "deeper" and "ponder" a bit - but this is a very rough description.

That being said, reasoning and output generation will be altered regardless of your use case(s).

These modifications push Qwen's model to the absolute limit for creative use cases.

Detail, vividness, and creativity all get a boost.

Prose (all of it) will also be very different from "default" Qwen3.

Likewise, regens of the same prompt - even at the same settings - will produce very different versions.

The Brainstorm 20x process has also lightly de-censored the model under some conditions.

However, this model can be prone to bouts of madness.

It will not always behave, and it will sometimes go -wildly- off script.

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
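The section is truncated here; a typical continuation of the GGUF-my-repo template looks like the sketch below. The `<user>/<repo>` id and the `.gguf` file name are placeholders - substitute this repo's actual id and quantized file name:

```shell
# Install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Run the CLI, pulling the GGUF file directly from the Hugging Face Hub
# (<user>/<repo> and the .gguf file name are placeholders)
llama-cli --hf-repo <user>/<repo> --hf-file model-q4_k_m.gguf -p "Write the opening scene of a heist story."
```

The same `--hf-repo`/`--hf-file` flags also work with `llama-server` if you want an OpenAI-compatible HTTP endpoint instead of a one-shot CLI run.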