# gpt-oss-120b — MLX bf16 (non-quantized)

**Summary.** This is a non-quantized MLX conversion of gpt-oss-120b in bfloat16 (bf16). Built for Apple Silicon with Metal acceleration.
- Base model: openai/gpt-oss-120b (Apache-2.0)
- Precision: bfloat16 (no quantization)
- Files: MLX weight shards + `config.json`; tokenizer files included for drop-in use
- Intended use: local inference / research on M-series Macs
- Not intended for: safety-critical decisions; outputs may be inaccurate or biased
## Requirements

Runs on Apple Silicon (M1 or newer) with macOS ≥ 13.5 via MLX (Metal).

- Not supported: Intel macOS / Linux / Windows (consider a GGUF build + llama.cpp instead).
- Memory guidance: large unified memory is recommended (e.g., 64–96 GB). The effective GPU working set is capped by Metal's budget; keep 5–10% headroom (see the sketch below for one way to check the budget).
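A minimal sketch of checking the Metal working-set budget on your machine via MLX's device info; the key names follow `mlx.core.metal.device_info()` and may vary across MLX versions:

```python
import mlx.core as mx

# Query Metal device limits (key names per mlx.core.metal.device_info();
# availability may vary across MLX versions).
info = mx.metal.device_info()
budget_gb = info["max_recommended_working_set_size"] / 1024**3
print(f"Metal working-set budget: {budget_gb:.1f} GB")
# Keep ~5-10% headroom: model weights + KV cache should stay below this.
print(f"Suggested ceiling with 10% headroom: {0.9 * budget_gb:.1f} GB")
```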
## How to use (MLX)

```bash
pip install mlx-lm
```
```python
# Python API (uses the tokenizer bundled with this repo)
from mlx_lm import load, generate

model, tokenizer = load("halley-ai/gpt-oss-120b-MLX-bf16")
print(generate(
    model, tokenizer,
    prompt="Explain the Chudnovsky algorithm to compute π.",
    max_tokens=256, max_kv_size=512,
))
```
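gpt-oss is a chat-tuned model, so conversational prompts usually benefit from the bundled chat template. A hedged sketch, continuing from the snippet above and assuming the tokenizer exposes the Hugging Face `apply_chat_template` API:

```python
# Illustrative chat-style usage; assumes the bundled tokenizer wraps a
# Hugging Face tokenizer with a chat template.
messages = [{"role": "user", "content": "Explain the Chudnovsky algorithm."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```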
```bash
# CLI
python -m mlx_lm generate --model halley-ai/gpt-oss-120b-MLX-bf16 \
  --prompt "Explain the Chudnovsky algorithm to compute pi." \
  --max-kv-size 512 --max-tokens 256
```
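If you prefer an HTTP interface, mlx-lm also ships an OpenAI-compatible server. A sketch, assuming a recent mlx-lm (the exact invocation and flags may differ across versions):

```bash
# Launch a local OpenAI-compatible server (invocation assumes recent mlx-lm).
python -m mlx_lm server --model halley-ai/gpt-oss-120b-MLX-bf16 --port 8080

# Query it from another shell:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}], "max_tokens": 64}'
```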
## Evaluation

Perplexity (PPL) streaming evaluation on WikiText-2 (raw, test); fast preset with window = stride = 4096, ~100k tokens, EOS inserted between documents.

| Variant | PPL (ctx=4096, fast) |
|---|---|
| MLX bf16 (non-quantized) | 7.38 |
| MLX 8-bit (gs=32) | 7.39 |
| MLX 6-bit (gs=64) | 7.40 |
Notes:

- Results are from local runs on Apple Silicon using MLX; numbers vary slightly with tokenizer details, logits dtype, and the token subset evaluated.
- For more sensitive comparisons, use overlapping windows (e.g., `--stride 512`) and evaluate the full split. A sketch of the streaming computation follows below.
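For reference, a minimal sketch of the non-overlapping (window = stride) streaming PPL computation described above; the function name and details (e.g., casting logits to float32) are illustrative, not the exact evaluation harness used here:

```python
import math
import mlx.core as mx
from mlx_lm import load

def streaming_ppl(model, tokenizer, text, window=4096, stride=4096):
    """Streaming perplexity with non-overlapping windows (window == stride)."""
    tokens = tokenizer.encode(text)
    total_nll, total_tokens = 0.0, 0
    for start in range(0, len(tokens) - 1, stride):
        chunk = mx.array([tokens[start : start + window + 1]])
        inputs, targets = chunk[:, :-1], chunk[:, 1:]
        logits = model(inputs).astype(mx.float32)  # logits dtype matters (see notes)
        # Log-softmax gathered at the target token ids.
        logprobs = mx.take_along_axis(
            logits - mx.logsumexp(logits, axis=-1, keepdims=True),
            mx.expand_dims(targets, -1),
            axis=-1,
        )
        total_nll -= logprobs.sum().item()
        total_tokens += targets.shape[1]
    return math.exp(total_nll / total_tokens)

model, tokenizer = load("halley-ai/gpt-oss-120b-MLX-bf16")
# text = "...WikiText-2 (raw, test), documents joined with EOS..."
# print(streaming_ppl(model, tokenizer, text))
```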
## Conversion details (provenance)

```bash
python -m mlx_lm convert \
  --hf-path openai/gpt-oss-120b \
  --mlx-path gpt-oss-120b-MLX-bf16 \
  --dtype bfloat16
```
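To sanity-check a conversion, one can load the result and inspect the parameter dtypes. A sketch using MLX's tree utilities (the local path is illustrative):

```python
from mlx.utils import tree_flatten
from mlx_lm import load

# Load the local conversion output (path is illustrative).
model, _ = load("gpt-oss-120b-MLX-bf16")

# Collect the set of parameter dtypes; a bf16 conversion should be
# dominated by mlx.core.bfloat16.
dtypes = {p.dtype for _, p in tree_flatten(model.parameters())}
print(dtypes)
```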
## Sibling & reference models

- halley-ai/gpt-oss-120b-MLX-8bit-gs32 (int8, group size 32)
- halley-ai/gpt-oss-120b-MLX-6bit-gs64 (int6, group size 64)
## Limitations & biases

Outputs may be factually wrong or unsafe. Do not use for medical, legal, or financial decisions without human review. Large models can be sensitive to prompts; prefer explicit instructions and structure.
## License & credits

- License: Apache-2.0 (inherited from the base model)
- Base model: OpenAI gpt-oss-120b
- Conversion: Halley AI Lab (MLX bf16)
- Please cite both the base model and this repository when you use these weights.