---
language:
  - en
  - zh
license: apache-2.0
library_name: mlx
tags:
  - text-generation
  - mlx
  - apple-silicon
  - gpt
  - quantized
  - 4bit-quantization
pipeline_tag: text-generation
base_model: openai/gpt-oss-20b
model-index:
  - name: gpt-oss-20b-MLX-4bit
    results:
      - task:
          type: text-generation
        dataset:
          name: GPT-OSS-20B Evaluation
          type: openai/gpt-oss-20b
        metrics:
          - type: bits_per_weight
            value: 4.276
            name: Bits per weight (4-bit)
---

# Jackrong/gpt-oss-20b-MLX-4bit

This model, [Jackrong/gpt-oss-20b-MLX-4bit](https://huggingface.co/Jackrong/gpt-oss-20b-MLX-4bit), was converted to MLX format from [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) using mlx-lm version 0.27.0.
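For reference, a conversion of this kind can typically be reproduced with the `mlx_lm.convert` command-line tool. The exact flags used for this repository are not recorded here, so the invocation below is an assumed sketch based on standard mlx-lm usage:

```shell
# Sketch of a 4-bit MLX conversion (assumed flags; requires Apple Silicon).
# -q enables quantization and --q-bits 4 selects 4-bit weights.
pip install mlx-lm==0.27.0
mlx_lm.convert \
    --hf-path openai/gpt-oss-20b \
    --mlx-path gpt-oss-20b-MLX-4bit \
    -q --q-bits 4
```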

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized model and its tokenizer.
model, tokenizer = load("Jackrong/gpt-oss-20b-MLX-4bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
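For quick experiments, the model can also be run without writing any Python, via the `mlx_lm.generate` command-line entry point (a minimal sketch; the flag values shown are illustrative):

```shell
# One-off generation from the command line (requires Apple Silicon).
mlx_lm.generate \
    --model Jackrong/gpt-oss-20b-MLX-4bit \
    --prompt "hello" \
    --max-tokens 256
```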