Kimi-Dev-72B-GPTQ-Int4

Base model: moonshotai/Kimi-Dev-72B

Calibrated with the https://huggingface.co/datasets/timdettmers/openassistant-guanaco/blob/main/openassistant_best_replies_eval.jsonl dataset.
The quantization configuration is as follows:

quant_config = QuantizeConfig(bits=4, group_size=128, desc_act=False)
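This matches the QuantizeConfig API from the GPTQModel package. As a rough sketch (not the exact script used to build this repo), the quantization could be reproduced along the following lines, assuming gptqmodel and datasets are installed and that the calibration texts come from the JSONL file linked above:

from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

# Calibration texts: the OpenAssistant-Guanaco eval split linked above
# (the column name "text" is assumed)
calibration_texts = load_dataset(
    "timdettmers/openassistant-guanaco",
    data_files="openassistant_best_replies_eval.jsonl",
    split="train",
)["text"]

quant_config = QuantizeConfig(bits=4, group_size=128, desc_act=False)

model = GPTQModel.load("moonshotai/Kimi-Dev-72B", quant_config)
model.quantize(calibration_texts, batch_size=1)   # run GPTQ calibration and quantize to 4-bit
model.save("Kimi-Dev-72B-GPTQ-Int4")              # write the quantized checkpoint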

【vLLM Startup Command】

vllm serve JunHowie/Kimi-Dev-72B-GPTQ-Int4 
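For a 72B model, the serve command typically also needs a tensor-parallel setting matched to your GPU count (e.g. --tensor-parallel-size 4). Once the server is running it exposes an OpenAI-compatible API, by default at http://localhost:8000/v1. A minimal query sketch using the openai client (the port and api_key value are assumptions of this example):

from openai import OpenAI

# vLLM's OpenAI-compatible server; the key is a placeholder since no auth is configured by default
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="JunHowie/Kimi-Dev-72B-GPTQ-Int4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give me a short introduction to large language models."},
    ],
    max_tokens=512,
)
print(resp.choices[0].message.content)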

【Model Download】

from huggingface_hub import snapshot_download
# Download the quantized checkpoint into a local directory
snapshot_download('JunHowie/Kimi-Dev-72B-GPTQ-Int4', cache_dir="your_local_path")

【Overview】

We introduce Kimi-Dev-72B, our new open-source coding LLM for software engineering tasks. Kimi-Dev-72B achieves a new state-of-the-art on SWE-bench Verified among open-source models.

  • Kimi-Dev-72B achieves 60.4% performance on SWE-bench Verified. It surpasses the runner-up, setting a new state-of-the-art result among open-source models.

  • Kimi-Dev-72B is optimized via large-scale reinforcement learning. It autonomously patches real repositories in Docker and gains rewards only when the entire test suite passes. This ensures correct and robust solutions, aligning with real-world development standards.

  • Kimi-Dev-72B is available for download and deployment on Hugging Face and GitHub. We welcome developers and researchers to explore its capabilities and contribute to development.

Figure: Performance of Open-source Models on SWE-bench Verified.

Quick Start

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "moonshotai/Kimi-Dev-72B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate up to 512 new tokens
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
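The snippet above loads the original BF16 checkpoint; the GPTQ-Int4 repo should work with the same code path, provided a GPTQ backend (e.g. gptqmodel, or auto-gptq via optimum) is installed so that transformers can run the Int4 weights:

# Point the same Quick Start code at the 4-bit checkpoint
# (assumes a GPTQ kernel backend such as gptqmodel is installed)
model_name = "JunHowie/Kimi-Dev-72B-GPTQ-Int4"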

Citation

@misc{kimi_dev_72b_2025,
  title        = {Introducing Kimi-Dev: A Strong and Open-source Coding LLM for Issue Resolution},
  author       = {{Kimi-Dev Team}},
  year         = {2025},
  month        = {June},
  url          = {https://www.moonshot.cn/Kimi-Dev}
}