Walk Before You Run!
Concise LLM Reasoning via Reinforcement Learning


🎉News

Usage

import vllm


# Wrap the question in the exact prompt template the model was trained with.
def apply_template(question: str):
    return ("""<|startoftext|>A conversation between User and Assistant. The User asks a question, and the Assistant solves it. \
The Assistant first thinks about the reasoning process in the mind and then provides the User with the answer. \
The reasoning process is enclosed within <think> </think> and answer is enclosed within <answer> </answer> tags, respectively, \
i.e., <think> reasoning process here </think> <answer> answer here </answer>. \
Please reason step by step, and put your final answer within \\boxed{}.

User:
{query}

Assistant:
""".replace("{query}", question))

model_name = "Nickyang/ConciseR-Zero-7B-Preview"

# Sample 32 completions per prompt at temperature 0.6, up to 3072 new tokens each.
sampling_params = vllm.SamplingParams(
    n=32,
    temperature=0.6,
    top_p=1.0,
    max_tokens=3072,
)

# Load the model in bfloat16 with a 4096-token context window.
model = vllm.LLM(
    model_name,
    max_model_len=4096,
    dtype="bfloat16",
    enable_prefix_caching=True,
)

prompts = [
    "How many positive whole-number divisors does 196 have?"
]
prompts = list(map(apply_template, prompts))
outputs = model.generate(prompts, sampling_params)

# Each RequestOutput holds the sampled completions; print their generated text.
for output in outputs:
    for completion in output.outputs:
        print(completion.text)
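
Because the prompt instructs the model to put its final answer inside \boxed{}, a small post-processing step can recover it from each sampled completion. The snippet below is a minimal sketch of one way to do this; extract_boxed_answer is a hypothetical helper (not part of the released code), and the majority vote is only an example of aggregating the 32 samples.

import re
from collections import Counter

def extract_boxed_answer(text: str):
    # Hypothetical helper: return the contents of the last \boxed{...} in the
    # completion, or None if no box is found. Nested braces are not handled.
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

# Example aggregation: majority vote over the completions for the first prompt.
answers = [extract_boxed_answer(c.text) for c in outputs[0].outputs]
answers = [a for a in answers if a is not None]
if answers:
    answer, votes = Counter(answers).most_common(1)[0]
    print(f"Majority answer: {answer} ({votes}/{len(answers)} samples)")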

Citation

@misc{song2025conciser,
      title={Walk Before You Run! Concise LLM Reasoning via Reinforcement Learning}, 
      author={Mingyang Song and Mao Zheng},
      year={2025},
      eprint={2505.21178},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.21178}, 
}

Base model: Qwen/Qwen2.5-7B
