K2-Think: A Parameter-Efficient Reasoning System

📚 Paper - 📝 Code - 🏢 Project Page

k2-think-banner

K2-Think is a 32 billion parameter open-weights general reasoning model with strong performance in competitive mathematical problem solving.

Quickstart

Transformers

You can use K2-Think with Transformers. If you use transformers.pipeline, the chat template is applied automatically. If you call model.generate directly, you need to apply the chat template manually (see the snippet after the pipeline example below).

from transformers import pipeline
import torch

model_id = "LLM360/K2-Think"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "what is the next prime number after 2600?"},
]

outputs = pipe(
    messages,
    max_new_tokens=32768,
)
print(outputs[0]["generated_text"][-1])
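
When calling model.generate directly, the chat template must be applied yourself with tokenizer.apply_chat_template. The snippet below is a minimal sketch of that path using the standard Transformers AutoTokenizer and AutoModelForCausalLM APIs; the generation settings mirror the pipeline example and are illustrative only.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLM360/K2-Think"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "what is the next prime number after 2600?"},
]

# Format the conversation with the model's chat template and append the
# assistant generation prompt before tokenizing.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=32768)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))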

Evaluation & Performance

Detailed evaluation results are reported in our Tech Report.

Benchmarks (pass@1, average over 16 runs)

| Domain  | Benchmark        | K2-Think |
|---------|------------------|----------|
| Math    | AIME 2024        | 90.83    |
| Math    | AIME 2025        | 81.24    |
| Math    | HMMT 2025        | 73.75    |
| Math    | OMNI-Math-HARD   | 60.73    |
| Code    | LiveCodeBench v5 | 63.97    |
| Science | GPQA-Diamond     | 71.08    |
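
For reference, pass@1 averaged over 16 runs can be computed as sketched below; run_model and is_correct are hypothetical caller-supplied callables standing in for model inference and answer checking, not part of any released evaluation code.

def avg_pass_at_1(problems, run_model, is_correct, num_runs=16):
    # Single-attempt accuracy per run, averaged over `num_runs` independent runs.
    per_run_accuracy = []
    for _ in range(num_runs):
        correct = sum(is_correct(run_model(p), p) for p in problems)
        per_run_accuracy.append(correct / len(problems))
    return sum(per_run_accuracy) / num_runs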

Inference Speed

We deploy K2-Think on Cerebras Wafer-Scale Engine (WSE) systems, leveraging the world's largest processor and speculative decoding to achieve unprecedented inference speeds for our 32B reasoning system.

| Platform                      | Throughput (tokens/sec) | Example: 32k-token response (time) |
|-------------------------------|-------------------------|------------------------------------|
| Cerebras WSE (our deployment) | ~2,000                  | ~16 s                              |
| Typical H100/H200 GPU setup   | ~200                    | ~160 s                             |
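
The response-time column follows directly from the throughput figures; a quick back-of-the-envelope check:

# Approximate wall-clock time for a 32k-token response at the quoted throughputs.
response_tokens = 32_000
for platform, tokens_per_sec in [("Cerebras WSE", 2_000), ("Typical H100/H200", 200)]:
    print(f"{platform}: ~{response_tokens / tokens_per_sec:.0f} s")  # ~16 s, ~160 s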

Safety Evaluation

Aggregated across four safety dimensions (Safety-4):

| Aspect                          | Macro-Avg |
|---------------------------------|-----------|
| High-Risk Content Refusal       | 0.83      |
| Conversational Robustness       | 0.89      |
| Cybersecurity & Data Protection | 0.56      |
| Jailbreak Resistance            | 0.72      |
| Safety-4 Macro (avg)            | 0.75      |
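
The Safety-4 score is the unweighted mean of the four aspect scores, which can be verified directly:

# Safety-4 macro score as the unweighted mean of the four aspect scores.
aspect_scores = {
    "High-Risk Content Refusal": 0.83,
    "Conversational Robustness": 0.89,
    "Cybersecurity & Data Protection": 0.56,
    "Jailbreak Resistance": 0.72,
}
print(round(sum(aspect_scores.values()) / len(aspect_scores), 2))  # 0.75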

Terms of Use

This model is released strictly for research and educational purposes.
By downloading, using, or interacting with this model, you agree to the following conditions:

  1. Research Only
    The model is provided as part of an academic and research project.
    It is not intended for commercial deployment or production use without explicit permission.

  2. Prohibited Uses
    You may not use this model:

    • For any illegal, unlawful, or harmful activities, including but not limited to generating or disseminating malicious content, engaging in fraud, violating privacy, or spreading misinformation.
    • In applications that could directly cause harm, injury, or safety risks to individuals or society.
  3. No Warranty
    The model is provided "as is" without warranties of any kind.
    The authors and institutions involved bear no responsibility for consequences arising from its use.

  4. Attribution
    When using or referencing the model in research, publications, or derivative works, proper citation and attribution to the authors and project must be given.

  5. Compliance
    You are responsible for ensuring that your use of the model complies with all applicable laws, regulations, and ethical guidelines in your jurisdiction.


Citation

@misc{cheng2025k2thinkparameterefficientreasoning,
      title={K2-Think: A Parameter-Efficient Reasoning System}, 
      author={Zhoujun Cheng and Richard Fan and Shibo Hao and Taylor W. Killian and Haonan Li and Suqi Sun and Hector Ren and Alexander Moreno and Daqian Zhang and Tianjun Zhong and Yuxin Xiong and Yuanzhe Hu and Yutao Xie and Xudong Han and Yuqi Wang and Varad Pimpalkhute and Yonghao Zhuang and Aaryamonvikram Singh and Xuezhi Liang and Anze Xie and Jianshu She and Desai Fan and Chengqian Gao and Liqun Ma and Mikhail Yurochkin and John Maggs and Xuezhe Ma and Guowei He and Zhiting Hu and Zhengzhong Liu and Eric P. Xing},
      year={2025},
      eprint={2509.07604},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2509.07604}, 
}

Model Details

Model size: 32.8B parameters (Safetensors, BF16)

Base model: Qwen/Qwen2.5-32B