# Kimina-Prover-72B


Kimina-Prover-72B is a theorem-proving model developed by the Project Numina and Kimi teams, focused on competition-style problem solving in Lean 4. Fine-tuned from Qwen/Qwen2.5-72B and trained via large-scale reinforcement learning, it achieves 84.0% accuracy with Pass@32 on MiniF2F-test.
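
Pass@32 means a problem counts as solved if at least one of 32 sampled proofs is accepted by the Lean 4 checker. A minimal sketch of how such a score is tallied (illustrative only; the function name and data layout are assumptions, not Numina's evaluation harness):

```python
def pass_at_k(results: list[list[bool]]) -> float:
    """results[i] holds the verifier verdicts for the samples drawn
    for problem i; the problem counts as solved if any proof checks."""
    solved = sum(1 for verdicts in results if any(verdicts))
    return solved / len(results)

# Two problems, 32 samples each: the first has one verified proof,
# the second has none, so Pass@32 = 0.5.
print(pass_at_k([[False] * 31 + [True], [False] * 32]))
```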

For advanced usage examples, see https://github.com/MoonshotAI/Kimina-Prover-Preview/tree/master/kimina_prover_demo

## Quick Start with vLLM

You can run inference with vLLM:

````python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_name = "AI-MO/Kimina-Prover-72B"
model = LLM(
    model=model_name,
    tensor_parallel_size=8,  # requires 8 GPUs on this node
    max_model_len=131072,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# Raw strings keep the LaTeX backslashes (e.g. \frac) intact.
problem = r"The volume of a cone is given by the formula $V = \frac{1}{3}Bh$, where $B$ is the area of the base and $h$ is the height. The area of the base of a cone is 30 square units, and its height is 6.5 units. What is the number of cubic units in its volume?"
formal_statement = """import Mathlib
import Aesop
set_option maxHeartbeats 0
open BigOperators Real Nat Topology Rat
/-- The volume of a cone is given by the formula $V = \frac{1}{3}Bh$, where $B$ is the area of the base and $h$ is the height. The area of the base of a cone is 30 square units, and its height is 6.5 units. What is the number of cubic units in its volume? Show that it is 65.-/
theorem mathd_algebra_478 (b h v : ℝ) (h₀ : 0 < b ∧ 0 < h ∧ 0 < v) (h₁ : v = 1 / 3 * (b * h))
    (h₂ : b = 30) (h₃ : h = 13 / 2) : v = 65 := by
"""
prompt = "Think about and solve the following problem step by step in Lean 4."
prompt += f"\n# Problem:{problem}"""
prompt += f"\n# Formal statement:\n```lean4\n{formal_statement}\n```\n"
messages = [
    {"role": "system", "content": "You are an expert in mathematics and proving theorems in Lean 4."},
    {"role": "user", "content": prompt}
]
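# Render the conversation with the tokenizer's chat template so the
# model receives the prompt in the format it was trained on.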
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
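# A generous max_tokens budget leaves room for the model's long
# reasoning trace before the final Lean proof.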
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8096)
output = model.generate(text, sampling_params=sampling_params)
output_text = output[0].outputs[0].text
print(output_text)
````
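
The response interleaves a reasoning trace with the final proof, which the model emits inside a `lean4` fenced code block. A minimal sketch for recovering the Lean code from the generated text (the regex and the fallback to `None` are assumptions about the output format, not part of the official demo):

````python
import re

def extract_lean_block(response: str) -> str | None:
    """Return the contents of the last ```lean4 fenced block, if any."""
    blocks = re.findall(r"```lean4\n(.*?)\n```", response, re.DOTALL)
    return blocks[-1] if blocks else None

proof = extract_lean_block(output_text)
if proof is not None:
    print(proof)  # candidate proof, ready to hand to a Lean 4 checker
````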