---
license: apache-2.0
library_name: mlx
language:
  - en
  - fr
  - zh
  - de
tags:
  - programming
  - code generation
  - code
  - codeqwen
  - moe
  - coding
  - coder
  - qwen2
  - chat
  - qwen
  - qwen-coder
  - Qwen3-30B-A3B-Thinking-2507
  - Qwen3-30B-A3B
  - mixture of experts
  - 128 experts
  - 8 active experts
  - 256k context
  - qwen3
  - finetune
  - brainstorm 20x
  - brainstorm
  - thinking
  - reasoning
  - uncensored
  - abliterated
  - qwen3_moe
  - mlx
base_model: >-
  DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER
pipeline_tag: text-generation
---

# Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-q6-mlx

## Performance Evaluation

The model was evaluated on seven standard NLP benchmarks:

| Benchmark     | brainstormed‑q6 | bf16   | q6     |
|---------------|-----------------|--------|--------|
| ARC‑challenge | 0.387           | 0.387  | 0.378  |
| ARC‑easy      | 0.447           | 0.436  | 0.434  |
| BoolQ         | 0.625           | 0.628  | 0.636  |
| HellaSwag     | 0.648           | 0.616  | 0.618  |
| OpenBookQA    | 0.380           | 0.400  | 0.400  |
| PiQA          | 0.768           | 0.763  | 0.765  |
| Winogrande    | 0.636           | 0.639  | 0.634  |
| Avg (7)       | 0.5559          | 0.5527 | 0.5521 |

The brainstormed variant consistently improves on ARC‑easy, HellaSwag, and PiQA, while matching or slightly trailing the baselines on the remaining tasks. Its seven‑task average of 0.5559 is +0.0032 over bf16 and +0.0038 (+0.7 %) over the non‑brainstormed q6 baseline.
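
As a quick sanity check, the averages in the table can be reproduced with a few lines of plain Python (scores transcribed from the table above, in benchmark order):

```python
# Per-benchmark scores, in the row order of the table above:
# ARC-challenge, ARC-easy, BoolQ, HellaSwag, OpenBookQA, PiQA, Winogrande.
scores = {
    "brainstormed-q6": [0.387, 0.447, 0.625, 0.648, 0.380, 0.768, 0.636],
    "bf16":            [0.387, 0.436, 0.628, 0.616, 0.400, 0.763, 0.639],
    "q6":              [0.378, 0.434, 0.636, 0.618, 0.400, 0.765, 0.634],
}

for name, vals in scores.items():
    print(f"{name}: {sum(vals) / len(vals):.4f}")
# brainstormed-q6: 0.5559
# bf16:            0.5527
# q6:              0.5521
```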

This model `Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-q6-mlx` was converted to MLX format from [`DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER`](https://huggingface.co/DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER) using mlx-lm version **0.26.3**.
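
For reference, a conversion like this one can be produced with mlx-lm's own convert utility. A minimal sketch, assuming the Python API of mlx-lm 0.26.x, where the `q6` suffix corresponds to 6-bit quantization and `mlx_path` is an arbitrary output directory:

```python
from mlx_lm import convert

# Download the original HF checkpoint and quantize it to 6-bit MLX weights.
convert(
    "DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER",
    mlx_path="Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-q6-mlx",
    quantize=True,
    q_bits=6,  # 6-bit quantization, matching the q6 naming
)
```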

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-q6-mlx")

prompt = "hello"

# Wrap the raw prompt in the model's chat template, if one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
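```

Since this is a thinking model that emits a reasoning trace before its final answer, the default generation budget may cut it off. One option is to raise `max_tokens` in the `generate` call; the value below is just an illustrative budget:

```python
# Leave room for the reasoning trace; 4096 is an assumed budget, not a requirement.
response = generate(model, tokenizer, prompt=prompt, max_tokens=4096, verbose=True)
```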