Tags: Text Generation, Transformers, Safetensors, qwen3_moe, creative, creative writing, fiction writing, plot generation, sub-plot generation, story generation, scene continue, storytelling, fiction story, science fiction, romance, all genres, story, writing, vivid prose, vivid writing, Mixture of Experts, 128 experts, 8 active experts, fiction, roleplaying, bfloat16, rp, qwen3, horror, finetune, thinking, reasoning, conversational
Update README.md
README.md CHANGED
@@ -50,11 +50,15 @@ Qwen's excellent "Qwen3-30B-A3B" with Brainstorm 20x in a MOE at 42B parameters.
 
 This pushes Qwen's model to the absolute limit for creative use cases.
 
+Model is set with Qwen's default config:
+- 40 k context
+- 8 of 128 experts activated.
+
 ONE example generation below.
 
 USAGE GUIDE:
 
-Please refer to this model card for specific usage,
+Please refer to this model card for specific usage, changing ACTIVE EXPERTS, templates, settings and the like:
 
 https://huggingface.co/DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-GGUF
 
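The "8 of 128 experts activated" setting refers to top-k routing in the Mixture-of-Experts layers: for each token, a gating network scores all experts and only the 8 best-scoring experts are run, with their outputs blended by normalized gate weights. A minimal, self-contained sketch of that selection step (illustrative only; `route_token` and the random scores are hypothetical, not the model's actual router code):

```python
# Illustrative sketch of top-k expert routing in a Mixture-of-Experts layer.
# Simplified, hypothetical code -- the real router uses learned gate logits
# produced by a linear layer per MoE block, not random scores.
import math
import random

NUM_EXPERTS = 128    # total experts per MoE layer (from the model tags)
ACTIVE_EXPERTS = 8   # experts activated per token ("8 of 128")

def route_token(gate_logits, k=ACTIVE_EXPERTS):
    """Pick the k highest-scoring experts; softmax their logits into weights."""
    top = sorted(range(len(gate_logits)),
                 key=lambda i: gate_logits[i], reverse=True)[:k]
    # Numerically stable softmax over just the selected experts.
    m = max(gate_logits[i] for i in top)
    exps = [math.exp(gate_logits[i] - m) for i in top]
    z = sum(exps)
    return [(i, e / z) for i, e in zip(top, exps)]

# Example: fake gate scores for one token.
random.seed(0)
logits = [random.gauss(0.0, 1.0) for _ in range(NUM_EXPERTS)]
selected = route_token(logits)   # 8 (expert_index, weight) pairs
```

In the actual model the active-expert count is a config value rather than a function argument; raising it trades speed for (potentially) quality, which is why the linked model card discusses changing ACTIVE EXPERTS.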