---
base_model: allura-org/Q3-30B-A3B-Designant
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- exl3
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Q3-30B-A3B-Designant
[*She looked into His Spine, into His Heart; and she saw there the shade of His soul.*](https://www.youtube.com/watch?v=bautietoaBo)
# Overview
Intended as a direct upgrade to [Pentiment](https://huggingface.co/allura-org/Q3-30b-A3b-Pentiment), ***Q3-30B-A3B-Designant*** is a roleplaying model finetuned from [Qwen3-30B-A3B-Base](https://huggingface.co/Qwen/Qwen3-30B-A3B-Base).
During testing, Designant punched well above its weight class for its active parameter count, demonstrating the potential of well-made lightweight Mixture-of-Experts models in the roleplay scene. One tester observed occasional looping behavior, but repetition was otherwise minimal.
# Quantizations
GGUF:
- [bartowski imatrixed quants](https://huggingface.co/bartowski/allura-org_Q3-30B-A3B-Designant-GGUF)
MLX:
- [8bpw](https://huggingface.co/soundTeam/Q3-30B-A3B-Designant_mlx-8bpw)
# Usage
- Format is plain old ChatML. (Please note that, unlike regular Qwen 3, you do *not* need to prefill empty think tags to keep it from reasoning -- see below. A formatting sketch follows this list.)
- Settings used by testers varied, but Fizz and inflatebot used the same settings and system prompt recommended for [GLM4-32B-Neon-v2](https://huggingface.co/allura-org/GLM4-32B-Neon-v2).
- The instruction-following version of Qwen3-30B-A3B was not part of the merge. Instruction following was trained in post-hoc, and "thinking" data was not included. __As a result, "thinking" will likely not function as intended.__
- As with any Q3-30B-A3B, Designant performs very adequately with few or even zero layers offloaded to a GPU. When using the [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp) server, a 7950X CPU with 32GB of DDR5 RAM can run a Q4_K_M quant of this architecture at ~15 tokens/sec *with no GPU involved at all* (see the client sketch after this list).
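
A minimal sketch of the ChatML formatting through `transformers`, assuming the repo's tokenizer ships the chat template; the persona and messages are illustrative placeholders:

```python
# Minimal sketch: formatting a ChatML conversation with the model's own
# chat template via transformers (assumes the tokenizer ships one).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allura-org/Q3-30B-A3B-Designant")

messages = [
    {"role": "system", "content": "You are Seraphina, a knight errant."},  # placeholder persona
    {"role": "user", "content": "The gates creak open. What do you do?"},
]

# Unlike stock Qwen 3, no empty <think></think> prefill is needed here.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# Expected shape (ChatML):
# <|im_start|>system ... <|im_end|>
# <|im_start|>user ... <|im_end|>
# <|im_start|>assistant
```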
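And a hypothetical client-side sketch for the CPU-only setup above, assuming an ik_llama.cpp (or llama.cpp) server instance is already running locally with a Q4_K_M GGUF loaded; the port and sampler values are placeholders, not tested settings:

```python
# Hypothetical sketch: querying a local ik_llama.cpp (or llama.cpp) server
# through its OpenAI-compatible chat completions endpoint. Assumes the
# server is already running on port 8080 (a placeholder) with the model loaded.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "system", "content": "You are the narrator of an ongoing roleplay."},
            {"role": "user", "content": "Describe the cathedral at dusk."},
        ],
        "temperature": 0.8,  # placeholder; testers' exact samplers varied
        "max_tokens": 256,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```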
# Training Process
1. The [base model](https://huggingface.co/Qwen/Qwen3-30B-A3B-Base) first went through a supervised finetune on a corpus of instruction following data, roleplay conversations, and human writing based on the [Ink](https://huggingface.co/collections/allura-org/ink-6772fd1442308781594bbabb)/[Bigger Body](https://huggingface.co/collections/allura-org/bigger-body-67b277af0861cec33b54745d)/[Remnant](https://huggingface.co/collections/allura-org/remnant-6817c2113bbb2aed501513d0) lineage.
2. It was then lightly merged with [Pantheon-Proto-RP-1.8](https://huggingface.co/Gryphe/Pantheon-Proto-RP-1.8-30B-A3B) to improve stability.
3. Finally, a KTO reinforcement learning phase steered the model away from the initial merge's overly purple prose and improved its logical and spatial reasoning and its sense of overall "intelligence".
# Credits
- Fizz - Train, Merge, Data Wrangling
- Toaster, OMGWTFBBQ, The Trashpanda Testing Crew - Testing
- inflatebot - Model Card, Testing, Merging Consultation
- Juahyori, Artus - Compute Funding
- Gryphe, Alibaba - Making the original models as well as the ones used in the merge
Bot would like to thank the Allura community on Discord, especially Curse, Vagabond, Artus and Mawnipulator, for their companionship and moral support. You all mean the world to us.
---
*`There, God is not.`*