# cs2764/Kimi-K2-Instruct-0905-mlx-3Bit-gs32

This model was converted to MLX format from [moonshotai/Kimi-K2-Instruct-0905](https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905) using mlx-lm version 0.28.0.

## Quantization Details

This model was converted with the following quantization settings (a reproduction sketch follows the list):

  • Quantization Strategy: 3-bit quantization
  • Group Size: 32
  • Average bits per weight: 4.002
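
With group size 32, MLX's affine quantization stores a per-group scale and bias alongside the 3-bit values, which is why the average lands near 4 bits per weight rather than 3. The conversion can be reproduced with the `mlx_lm.convert` tool shipped with mlx-lm; the command below is a sketch using the standard CLI flags (the output path is illustrative, and the exact invocation used for this repo is not recorded here):

```bash
# Sketch: reproduce the 3-bit, group-size-32 conversion with mlx-lm.
# --mlx-path is an illustrative output directory name.
mlx_lm.convert \
    --hf-path moonshotai/Kimi-K2-Instruct-0905 \
    --mlx-path Kimi-K2-Instruct-0905-mlx-3Bit-gs32 \
    -q --q-bits 3 --q-group-size 32
```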

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized model and tokenizer.
model, tokenizer = load("cs2764/Kimi-K2-Instruct-0905-mlx-3Bit-gs32")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
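
The same generation is also available from the command line via the `mlx_lm.generate` entry point that mlx-lm installs; the invocation below is a sketch (the prompt and `--max-tokens` budget are illustrative):

```bash
# Sketch: one-off generation with the mlx-lm CLI.
mlx_lm.generate \
    --model cs2764/Kimi-K2-Instruct-0905-mlx-3Bit-gs32 \
    --prompt "hello" \
    --max-tokens 256
```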
