Text Generation
Transformers
Safetensors
qwen3_moe
creative
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
science fiction
romance
all genres
story
writing
vivid prose
vivid writing
Mixture of Experts
mixture of experts
128 experts
8 active experts
fiction
roleplaying
bfloat16
rp
qwen3
horror
finetune
thinking
reasoning
conversational
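The "128 experts / 8 active experts" tags describe top-k router gating: for each token, a small router scores every expert and only the k highest-scoring experts actually run, so a 42B-parameter MoE activates only a fraction of its weights per token. A minimal NumPy sketch of that routing step (illustrative names only, not the model's actual code):

```python
import numpy as np

def moe_route(hidden, gate_weights, k=8):
    """Top-k expert routing as used in Qwen3-style MoE layers (sketch).

    hidden:       (d,) one token's hidden state
    gate_weights: (num_experts, d) router projection
    Returns indices of the k selected experts and their renormalised weights.
    """
    logits = gate_weights @ hidden            # one score per expert
    topk = np.argsort(logits)[-k:][::-1]      # k best experts, highest first
    w = np.exp(logits[topk] - logits[topk].max())
    w /= w.sum()                              # softmax over the k winners only
    return topk, w

# 128 experts, 8 active per token -- the configuration named in the tags above
rng = np.random.default_rng(0)
experts, weights = moe_route(rng.normal(size=64), rng.normal(size=(128, 64)), k=8)
```

The selected experts' outputs are then combined with these weights; the other 120 experts are skipped entirely for that token.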
Update README.md
README.md CHANGED
@@ -49,7 +49,7 @@ pipeline_tag: text-generation
 
 Qwen's excellent "Qwen3-30B-A3B" with Brainstorm 20x (tech notes at bottom of the page) in a MOE (128 experts) at 42B parameters (up from 30B).
 
-This pushes Qwen's model to the absolute limit for creative use cases.
+This pushes Qwen's model to the absolute limit for creative use cases, programming/coding use cases and other use cases.
 
 Detail, vividiness, and creativity all get a boost.
 
@@ -61,7 +61,9 @@ The Brainstrom 20x has also lightly de-censored the model under some conditions.
 
 See 4 examples below.
 
-Model retains full reasoning, and output generation of a Qwen3 MOE
+Model retains full reasoning, and output generation of a Qwen3 MOE.
+
+Model tested by third party for coding generation too (see review below).
 
 Model is set with Qwen's default config:
 - 40 k context