# Model Card for R1-Distill-0.6B
This model is a fine-tuned version of Qwen/Qwen3-0.6B-Base on the open-r1/Mixture-of-Thoughts dataset. It has been trained using TRL.
## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alphadl/R1-Distill-0.6B", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure

This model was trained with supervised fine-tuning (SFT).
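For orientation, here is the general shape of a TRL/open-r1-style SFT recipe. Only the base model, dataset, and output path come from this card; every hyperparameter value below is an illustrative assumption, not the actual training configuration:

```yaml
# Hypothetical SFT recipe sketch -- all numeric values are assumptions,
# not the settings used for the real run.
model_name_or_path: Qwen/Qwen3-0.6B-Base
dataset_name: open-r1/Mixture-of-Thoughts
learning_rate: 2.0e-05            # assumed
num_train_epochs: 1               # assumed
per_device_train_batch_size: 4    # assumed
gradient_accumulation_steps: 8    # assumed
bf16: true
packing: true
output_dir: data/R1-Distill-0.6B
```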
## Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- PyTorch: 2.6.0
- Datasets: 3.2.0
- Tokenizers: 0.21.1
## Evaluation

- lighteval: 0.10.0

| Benchmark | Qwen3-0.6B-Base | R1-Distill-0.6B (ours) |
|---|---|---|
| Math-500 | 38.2 | 41.0 (+2.8) |
| GPQA Diamond | 24.2 | 28.3 (+4.1) |
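The deltas in the table follow directly from the raw scores; a quick sanity check:

```python
# Scores from the evaluation table above (accuracy, %).
base = {"Math-500": 38.2, "GPQA Diamond": 24.2}
ours = {"Math-500": 41.0, "GPQA Diamond": 28.3}

# Per-benchmark improvement of the distilled model over the base model.
deltas = {name: round(ours[name] - base[name], 1) for name in base}
print(deltas)  # {'Math-500': 2.8, 'GPQA Diamond': 4.1}
```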
The scores above were produced with the following lighteval commands:

```shell
export VLLM_WORKER_MULTIPROC_METHOD=spawn  # required for vLLM
export NUMEXPR_MAX_THREADS=128             # use all 128 cores for numerical computations

MODEL=data/R1-Distill-0.6B
# To evaluate the base model instead:
# MODEL=Qwen/Qwen3-0.6B-Base
MODEL_ARGS="model_name=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,generation_parameters={max_new_tokens:8192,temperature:0.6,top_p:0.95}"
OUTPUT_DIR=data/evals/$MODEL

# Math-500
TASK=math_500
lighteval vllm $MODEL_ARGS "lighteval|$TASK|0|0" \
    --use-chat-template \
    --output-dir $OUTPUT_DIR

# GPQA Diamond
TASK=gpqa:diamond
lighteval vllm $MODEL_ARGS "lighteval|$TASK|0|0" \
    --use-chat-template \
    --output-dir $OUTPUT_DIR
```