---
license: apache-2.0
library_name: transformers
language:
- en
- fr
- zh
- de
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- moe
- mixture of experts
- 128 experts
- 8 active experts
- fiction
- roleplaying
- bfloat16
- rp
- qwen3
- horror
- finetune
- thinking
- reasoning
- qwen3_moe
- merge
base_model:
- Qwen/Qwen3-30B-A3B
- allura-org/Q3-30B-A3B-Designant
- Ewere/Qwen3-30B-A3B-abliterated-erotic
- Gryphe/Pantheon-Proto-RP-1.8-30B-A3B
pipeline_tag: text-generation
---
(Quants pending, model card updates pending, examples to be added, uploading...)
<h2>Qwen3-33B-A3B-Quad-Mind</h2>
This repo contains the full-precision model weights in safetensors format, which can be used to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other quantized formats.
The safetensors weights can also be used directly.
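As a minimal sketch of direct use with the `transformers` library (the repo id below is assumed from this card's title; prompt and generation settings are placeholders):

```python
# A minimal sketch of loading the safetensors weights directly with
# transformers. The repo id is assumed from this card's title; adjust
# the prompt and generation settings to taste.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DavidAU/Qwen3-33B-A3B-Quad-Mind"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # weights are stored in bfloat16
    device_map="auto",
)

prompt = "Write the opening scene of a science fiction story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```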
ABOUT:
A very special blend of Qwen's 30B-A3B 128-expert MoE with Brainstorm 5X multi (by DavidAU).
Four layers are added, each containing 128 experts, drawn from four different models (listed under base_model above).
This is a truly unique model.
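One quick way to see the layer and expert counts described above is to inspect the config (a sketch, assuming the standard Qwen3-MoE config fields):

```python
# Sketch: inspect the merged model's config to see the layer and expert
# counts described above. Assumes the standard Qwen3-MoE config fields.
from transformers import AutoConfig

repo_id = "DavidAU/Qwen3-33B-A3B-Quad-Mind"  # assumed repo id

cfg = AutoConfig.from_pretrained(repo_id)
print("hidden layers:", cfg.num_hidden_layers)      # base layers + the 4 added
print("experts per MoE layer:", cfg.num_experts)    # 128
print("active experts per token:", cfg.num_experts_per_tok)  # 8 by default
```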
USAGE:
Please refer to this model card for specific usage, suggested settings, changing ACTIVE EXPERTS, templates, and the like (see also the sketch below):
https://huggingface.co/DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-GGUF
---
EXAMPLES:
Coming soon...
---