Q3-30B-A3B-Designant

She looked into His Spine, into His Heart; and she saw there the shade of His soul.

Overview

Intended as a direct upgrade to Pentiment, Q3-30B-A3B-Designant is a roleplaying model finetuned from Qwen3-30B-A3B-Base.

During testing, Designant punched well above its weight class in terms of active parameters, demonstrating the potential for well-made lightweight Mixture of Experts models in the roleplay scene. While one tester observed looping behavior, repetition in general was minimal.

Quantizations

⚠️ Warning: Quantization seems very janky with Qwen 3 MoE models. We recommend using full bf16 weights and vLLM, if possible.
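For reference, serving the full bf16 weights with vLLM's OpenAI-compatible server might look like the sketch below (the model name is taken from this card; the flags are standard vLLM options, and the parallelism value is an assumption you should adjust for your hardware):

```shell
# Serve the unquantized bf16 weights via vLLM's OpenAI-compatible API (port 8000).
# --dtype bfloat16 keeps the original precision; set --tensor-parallel-size
# to the number of GPUs you actually have.
vllm serve allura-org/Q3-30B-A3B-Designant \
    --dtype bfloat16 \
    --tensor-parallel-size 2
```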

EXL3:

MLX:

GGUF:

Some users report even more issues with low-bit GGUF quants of Qwen3 MoE models. We'd recommend trying both imatrix and static (linear) quants, and sticking to Q5 or above for acceptable quality.
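If you want to roll your own GGUF quants to compare, a minimal sketch using llama.cpp's standard `llama-imatrix` and `llama-quantize` tools might look like this (all file names are placeholders, and you'll need your own calibration text):

```shell
# Build an importance matrix from a calibration text, then make an imatrix quant.
llama-imatrix -m designant-f16.gguf -f calibration.txt -o designant.imatrix
llama-quantize --imatrix designant.imatrix \
    designant-f16.gguf designant-Q5_K_M-imat.gguf Q5_K_M

# A plain static quant of the same type, for comparison:
llama-quantize designant-f16.gguf designant-Q5_K_M-static.gguf Q5_K_M
```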

Usage

  • Format is plain-old ChatML. (Please note that, unlike regular Qwen 3, you do not need to prefill empty think tags to keep it from reasoning; see below.)

  • Settings used by testers varied, but Fizz and inflatebot used the same settings and system prompt recommended for GLM4-32B-Neon-v2.

  • The official instruction-following version of Qwen3-30B-A3B was not part of the merge; instruction following was trained in post-hoc, and "thinking" traces were not included. As a result, "thinking" will likely not function as intended.

  • As with any Q3-30B-A3B, Designant performs very adequately with few or zero layers offloaded to GPU. When using the ik_llama.cpp server, a 7950X CPU with 32GB of DDR5 RAM can run a Q4_K_M quant of this architecture at ~15 tokens/sec with no GPU involved at all.
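To make the formatting bullet concrete, here is a minimal sketch of a ChatML prompt builder (the helper name is hypothetical; note that no empty think-tag prefill is added, per the usage note above):

```python
# Minimal ChatML prompt builder for Designant. Unlike stock Qwen 3,
# no empty <think></think> prefill is needed to suppress reasoning.
def build_chatml_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, content in turns:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    # Open the assistant turn so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt(
    "You are a roleplay partner.",
    [("user", "The tavern door creaks open...")],
)
print(prompt)
```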

Training Process

  1. The base model first went through a supervised finetune on a corpus of instruction following data, roleplay conversations, and human writing based on the Ink/Bigger Body/Remnant lineage.

  2. It was then slightly merged with Pantheon-Proto-RP-1.8, to improve stability.

  3. Finally, a KTO reinforcement learning phase steered the model away from the very purple prose the initial merge had, and improved its logical+spatial reasoning and sense of overall "intelligence".

Credits

  • Fizz - Train, Merge, Data Wrangling

  • Toaster, OMGWTFBBQ, The Trashpanda Testing Crew - Testing

  • inflatebot - Model Card, Testing, Merging Consultation

  • Juahyori, Artus - Compute Funding

  • Gryphe, Alibaba - Making the original models as well as the ones used in the merge

Bot would like to thank the Allura community on Discord, especially Curse, Vagabond, Artus and Mawnipulator, for their companionship and moral support. You all mean the world to us.


There, God is not.
