Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-bf16-mlx

What changed with q6 (the brainstormed variant)?

| Area | Improvement / Degradation | Score (before → after) | Comment |
|---|---|---|---|
| ARC-easy | +0.011 | 0.436 → 0.447 | better multiple-choice reasoning |
| HellaSwag | +0.032 | 0.616 → 0.648 | stronger commonsense selection |
| PiQA | +0.005 | 0.763 → 0.768 | marginal boost on physical reasoning |
| ARC-challenge | ±0.000 | 0.387 (unchanged) | no loss on harder questions |
| BoolQ | −0.003 | 0.628 → 0.625 | tiny dip on factual QA |
| OpenBookQA | −0.020 | 0.400 → 0.380 | largest drop, ~5% relative loss |
| Winogrande | +0.003 | 0.636 → 0.639 | essentially unchanged |
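
The card does not state the exact evaluation setup behind these numbers. As a rough sketch, comparable scores can be produced with mlx-lm's lm-evaluation-harness wrapper, assuming `lm-eval` is installed and the lm-eval task names below correspond to the benchmarks above (flags can vary between mlx-lm versions):

```bash
# Hedged reproduction sketch, not the exact command behind the table above.
# Requires: pip install mlx-lm lm-eval
mlx_lm.evaluate \
    --model nightmedia/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-bf16-mlx \
    --tasks arc_easy arc_challenge hellaswag piqa boolq openbookqa winogrande
```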

Overall picture

Average boost ≈ +0.0038 (~0.4%) over the baseline bf16 and q6 variants.
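
That figure can be sanity-checked against the rounded per-task deltas in the table; a minimal sketch (the mean of the rounded values lands at +0.0040, so the quoted +0.0038 presumably comes from unrounded scores):

```python
# Sanity check of the headline average, using the rounded per-task
# deltas from the table above (unrounded scores would shift it slightly).
deltas = {
    "arc_easy": +0.011,
    "hellaswag": +0.032,
    "piqa": +0.005,
    "arc_challenge": 0.000,
    "boolq": -0.003,
    "openbookqa": -0.020,
    "winogrande": +0.003,
}
mean_delta = sum(deltas.values()) / len(deltas)
print(f"mean delta: {mean_delta:+.4f}")  # ≈ +0.0040 with these rounded values
```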

Gains are concentrated on ARC‑easy (contextual understanding) and HellaSwag (commonsense).

The only real trade-off is a slight dip on BoolQ and a larger drop on OpenBookQA.

This model, Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-bf16-mlx, was converted to MLX format from DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER using mlx-lm version 0.26.3.
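
An equivalent conversion can be sketched with mlx-lm's Python API; a minimal example, assuming the `dtype` argument is available in this mlx-lm version (the `mlx_path` output directory is an arbitrary choice):

```python
# Conversion sketch: not necessarily the exact invocation used for this card.
from mlx_lm import convert

convert(
    "DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER",
    mlx_path="Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-bf16-mlx",
    dtype="bfloat16",  # keep bf16 weights; omitting quantize leaves the model unquantized
)
```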

Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("nightmedia/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-bf16-mlx")

prompt = "hello"

# Apply the model's chat template when one is bundled with the tokenizer
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
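
For interactive use, mlx-lm also ships `stream_generate`, which yields tokens as they are produced; a short sketch (the prompt and `max_tokens` value are arbitrary illustrations, not recommended settings for this model):

```python
# Streaming variant of the example above.
from mlx_lm import load, stream_generate

model, tokenizer = load("nightmedia/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-bf16-mlx")

messages = [{"role": "user", "content": "Write a binary search in Python."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk of generated text as soon as it is available
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=2048):
    print(chunk.text, end="", flush=True)
print()
```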