---
language:
  - en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
base_model: Qwen/Qwen2.5-32B
---


Try K2-Think · Tech Report · Code


K2-Think is a 32-billion-parameter open-weights general reasoning model with strong performance in competitive mathematical problem solving. Built on a Qwen2.5-32B base, K2-Think combines long chain-of-thought supervised fine-tuning (SFT), reinforcement learning with verifiable rewards, and a test-time scaling scaffold to match or exceed much larger models on public math benchmarks while keeping latency low.

## Highlights

- **Math specialist at 32B:** state-of-the-art results among open models on AIME-style olympiad math and other hard math sets.
- **Fast generation:** ~2,000 tokens/sec on our Cerebras WSE deployment; roughly 10× faster than typical H100/H200 setups in our tests.
- **Token-efficient reasoning:** planning reduces average response length by up to ~14% at equal or higher accuracy.

## Quickstart

### Transformers

You can use K2-Think with Transformers. If you use `transformers.pipeline`, the chat template is applied automatically. If you call `model.generate` directly, you need to apply the chat template manually (see the sketch after the example below).

```python
from transformers import pipeline

model_id = "LLM360/K2-Think"

# pipeline() loads the model and tokenizer and applies the chat template
# automatically.
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "what is the next prime number after 2600?"},
]

# A large max_new_tokens budget leaves room for the full reasoning trace.
outputs = pipe(
    messages,
    max_new_tokens=32768,
)

# The pipeline returns the whole chat; the last message is the model's reply.
print(outputs[0]["generated_text"][-1])
```
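
For the `model.generate` path, here is a minimal sketch; it mirrors the pipeline example above, and the generation settings are illustrative rather than tuned:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLM360/K2-Think"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "what is the next prime number after 2600?"},
]

# apply_chat_template inserts the role markers the model was trained with
# and appends the prompt for the assistant turn.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=32768)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```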

## Evaluation & Performance

Detailed evaluation results are reported in our Tech Report.

### Benchmarks (pass@1, average over 16 runs)

| Domain  | Benchmark        | K2-Think |
|---------|------------------|----------|
| Math    | AIME 2024        | 90.83    |
| Math    | AIME 2025        | 81.24    |
| Math    | HMMT 2025        | 73.75    |
| Math    | OMNI-Math-HARD   | 60.73    |
| Code    | LiveCodeBench v5 | 63.97    |
| Science | GPQA-Diamond     | 71.08    |
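
As a reminder of the metric, pass@1 is the fraction of problems solved on the first attempt, and each reported score is the mean over 16 independent runs. A toy sketch with hypothetical numbers:

```python
# Toy illustration (hypothetical data): each run records per-problem
# first-attempt correctness; pass@1 is averaged across runs.
runs = [
    [1, 0, 1, 1],  # run 1
    [1, 1, 0, 1],  # run 2
    [0, 1, 1, 1],  # run 3 (the report averages 16 such runs)
]
pass_at_1 = sum(sum(r) / len(r) for r in runs) / len(runs)
print(f"pass@1 = {100 * pass_at_1:.2f}")  # 75.00
```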

### Inference Speed

We deploy K2-Think on Cerebras Wafer-Scale Engine (WSE) systems, using the world's largest processor together with speculative decoding to reach ~2,000 tokens/sec for our 32B reasoning system.

| Platform                      | Throughput (tokens/sec) | Example: 32k-token response |
|-------------------------------|-------------------------|-----------------------------|
| Cerebras WSE (our deployment) | ~2,000                  | ~16 s                       |
| Typical H100/H200 GPU setup   | ~200                    | ~160 s                      |
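
The response-time column is simple throughput arithmetic: 32,000 tokens ÷ 2,000 tokens/sec ≈ 16 s on the WSE, versus 32,000 ÷ 200 ≈ 160 s on a typical GPU setup.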

### Token Efficiency

K2-Think's Plan-Before-You-Think methodology combined with Best-of-N sampling produces more concise reasoning chains while maintaining or improving accuracy. Our test-time scaffold reduces average response length by up to 14% across mathematical benchmarks.
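
For intuition, a minimal, hypothetical sketch of the Best-of-N idea; the actual scaffold also plans before generating and uses its own candidate scoring (see the Tech Report):

```python
# Hypothetical sketch of Best-of-N selection: sample n candidate
# responses and keep the one the scorer prefers. `generate` and `score`
# stand in for the model call and the verifier/reward model.
def best_of_n(generate, score, prompt, n=3):
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)
```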

Token reduction per completed answer (SFT+RL checkpoint vs K2-Think):

| Domain  | Benchmark      | SFT+RL Checkpoint | K2-Think | Δ      |
|---------|----------------|-------------------|----------|--------|
| Math    | AIME24         | 23,324            | 20,058   | −14.0% |
| Math    | AIME25         | 25,869            | 24,218   | −6.38% |
| Math    | HMMT25         | 31,475            | 26,977   | −14.3% |
| Math    | OMNI-Math-HARD | 35,266            | 30,032   | −14.8% |
| Code    | LiveCodeBench  | 13,552            | 12,166   | −10.2% |
| Science | GPQA-Diamond   | 15,271            | 14,661   | −3.99% |
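
Δ is the relative reduction in average tokens per completed answer, (SFT+RL − K2-Think) / SFT+RL; on AIME24, for example, (23,324 − 20,058) / 23,324 ≈ 14.0%.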

## Safety Evaluation

Aggregated across four safety dimensions (Safety-4):

| Aspect                          | Macro-Avg |
|---------------------------------|-----------|
| High-Risk Content Refusal       | 0.830     |
| Conversational Robustness       | 0.890     |
| Cybersecurity & Data Protection | 0.560     |
| Jailbreak Resistance            | 0.705     |
| **Safety-4 Macro (avg)**        | **0.746** |

## Citation

```bibtex
@techreport{k2think2025,
  title       = {K2-Think: A Parameter-Efficient Reasoning System},
  author      = {Zhoujun Cheng* and Richard Fan* and Shibo Hao* and Taylor W. Killian* and Haonan Li* and Suqi Sun* and Hector Ren and Alexander Moreno and Daqian Zhang and Tianjun Zhong and Yuxin Xiong and Yuanzhe Hu and Yutao Xie and Xudong Han and Yuqi Wang and Varad Pimpalkhute and Yonghao Zhuang and Aaryamonvikram Singh and Xuezhi Liang and Anze Xie and Jianshu She and Desai Fan and Chengqian Gao and Liqun Ma and Mikhail Yurochkin and John Maggs and Xuezhe Ma and Guowei He and Zhiting Hu and Zhengzhong Liu and Eric P. Xing},
  year        = {2025},
  institution = {Institute of Foundation Models, Mohamed bin Zayed University of Artificial Intelligence},
  url         = {https://k2think.ai}
}
```