QiMing-Holos-Plus-4B-bf16-mlx

Recommended quant: qx6-hi

✨ qx6-hi: Why This Model Wins

| Strength | Impact |
|---|---|
| PiQA (0.716) | Highest accuracy (best for reasoning QA) |
| Winogrande (0.618) | Winning consistency for document tasks |
| Avg. metric points (528) | +0.5% vs BF16's 523 |
| Minimal arc_challenge drag | Flexible for varied inference loads |

Visual Scorecard

| Category | Leader |
|---|---|
| Winogrande | qx6-hi (0.618) |
| PiQA | qx6-hi (0.716) |
| Avg. metric points | qx6-hi (~528) |

This refined analysis, powered by new benchmark evidence, positions qx6-hi as the most battle-tested quant of this model for your end-to-end reasoning stack. Deploy confidently with this updated validation! 🚀
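Loading the recommended quant works exactly like the BF16 example further below; only the repository name changes. A minimal sketch, assuming the qx6-hi quant is published under a sibling repository (the name here is hypothetical; check the model tree for the actual one):

```python
from mlx_lm import load, generate

# Hypothetical repository name for the qx6-hi quant; verify before use.
model, tokenizer = load("nightmedia/QiMing-Holos-Plus-4B-qx6-hi-mlx")

response = generate(model, tokenizer, prompt="hello", verbose=True)
```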

This model QiMing-Holos-Plus-4B-bf16-mlx was converted to MLX format from aifeifei798/QiMing-Holos-Plus-4B using mlx-lm version 0.26.3.
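For reference, the same conversion can be reproduced with mlx-lm's Python API. A minimal sketch; `mlx_path` is illustrative, and passing `quantize=True` would produce a quantized variant instead of BF16:

```python
from mlx_lm import convert

# Convert the original Hugging Face weights to MLX format.
# mlx_path is illustrative; quantize=True would yield a quantized variant.
convert(
    "aifeifei798/QiMing-Holos-Plus-4B",
    mlx_path="QiMing-Holos-Plus-4B-bf16-mlx",
)
```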

Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the model and tokenizer.
model, tokenizer = load("QiMing-Holos-Plus-4B-bf16-mlx")

prompt = "hello"

# Apply the model's chat template when the tokenizer defines one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
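For interactive use, you can stream tokens as they are generated instead of waiting for the full completion. A minimal sketch using mlx-lm's `stream_generate`; in recent mlx-lm releases (including the 0.26 line) each yielded chunk carries a `.text` field:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("QiMing-Holos-Plus-4B-bf16-mlx")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk as soon as it is produced.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```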
Model size: 4.02B params · Tensor type: BF16 · Format: Safetensors
