---
license: apache-2.0
library_name: transformers
language:
- en
- zh
base_model:
- Amu/t1-3B
---
*Tao image*
# Train

![res](res.png)

# Eval

## qwen-2.5-3B-Instruct

| task_type | metric        | dataset_name | subset_name  | average_score | count | average_score |
| --------- | ------------- | ------------ | ------------ | ------------- | ----- | ------------- |
| math      | AveragePass@1 | math_500     | default      | 0.648         | 500   |               |
| math      | AveragePass@1 | gpqa         | gpqa_diamond | 0.2677        | 198   | 0.5192        |
| math      | AveragePass@1 | aime24       | default      | 0.0333        | 30    |               |

## t1-3B

| task_type | metric        | dataset_name | subset_name  | average_score | count | average_score |
| --------- | ------------- | ------------ | ------------ | ------------- | ----- | ------------- |
| math      | AveragePass@1 | math_500     | default      | 0.698         | 500   |               |
| math      | AveragePass@1 | gpqa         | gpqa_diamond | 0.3182        | 198   | 0.5714        |
| math      | AveragePass@1 | aime24       | default      | 0.1333        | 30    |               |

## t1-3B-grpo

| task_type | metric        | dataset_name | subset_name  | average_score | count | average_score |
| --------- | ------------- | ------------ | ------------ | ------------- | ----- | ------------- |
| math      | AveragePass@1 | math_500     | default      | 0.77          | 500   |               |
| math      | AveragePass@1 | gpqa         | gpqa_diamond | 0.2879        | 198   | 0.6113        |
| math      | AveragePass@1 | aime24       | default      | 0.1           | 30    |               |

# Reproduce

I use [t1-3B](https://huggingface.co/Amu/t1-3B) as the base model, which was trained on [t1-101K](https://huggingface.co/datasets/Amu/t1-101K). On top of it I run the GRPO algorithm from [deepscaler](https://github.com/agentica-project/deepscaler). Training proceeds in three stages with increasing response length (8k → 16k → 32k), and each stage resumes from the previous stage's actor checkpoint. The stage scripts are listed after the sketch below.
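For orientation, here is a minimal, self-contained sketch of the group-relative advantage that GRPO uses in place of a learned critic. This is an illustration only, not verl's actual implementation; the function name `grpo_advantages` and the toy rewards are made up for the example. For each prompt, a group of rollouts is sampled (`actor_rollout_ref.rollout.n=12` in the 8k/16k stages below), and every rollout's reward is normalized by its own group's mean and standard deviation:

```python
import numpy as np

def grpo_advantages(group_rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Group-relative advantage: normalize each rollout's reward by the
    mean/std of the group sampled for the same prompt.

    group_rewards has shape [num_prompts, rollouts_per_prompt].
    """
    mean = group_rewards.mean(axis=-1, keepdims=True)
    std = group_rewards.std(axis=-1, keepdims=True)
    return (group_rewards - mean) / (std + eps)

# Toy example: one prompt, n=12 rollouts, binary correctness rewards
# from a math answer checker; 3 of 12 rollouts are correct.
rewards = np.array([[1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0]], dtype=np.float32)
print(grpo_advantages(rewards))  # correct rollouts get positive advantage
```

Rollouts that beat their group's average get a positive advantage and are reinforced, while the KL terms in the scripts (`kl_loss_coef=0.001`, `kl_ctrl.kl_coef=0.001`) keep the policy close to the reference model.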
- t1-3b-grpo-8k

```bash
# Stage 1: 8k response length, starting from the t1-3B base model.
CUDA_VISIBLE_DEVICES=2,3,4,5,6,7 python3 -m verl.trainer.main_ppo \
algorithm.adv_estimator=grpo \
data.train_files=/root/deepscaler/data/train.parquet \
data.val_files=/root/deepscaler/data/aime.parquet \
data.train_batch_size=96 \
data.val_batch_size=288 \
data.max_prompt_length=1024 \
data.max_response_length=8192 \
actor_rollout_ref.model.path=/workspace/R1/model/t1-3B \
actor_rollout_ref.actor.optim.lr=1e-6 \
actor_rollout_ref.model.use_remove_padding=True \
actor_rollout_ref.actor.ppo_mini_batch_size=48 \
actor_rollout_ref.actor.ppo_micro_batch_size=48 \
actor_rollout_ref.actor.use_dynamic_bsz=True \
actor_rollout_ref.actor.ppo_max_token_len_per_gpu=32768 \
actor_rollout_ref.actor.use_kl_loss=True \
actor_rollout_ref.actor.kl_loss_coef=0.001 \
actor_rollout_ref.actor.kl_loss_type=low_var_kl \
actor_rollout_ref.actor.ulysses_sequence_parallel_size=1 \
actor_rollout_ref.model.enable_gradient_checkpointing=True \
actor_rollout_ref.actor.fsdp_config.param_offload=False \
actor_rollout_ref.actor.fsdp_config.grad_offload=False \
actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \
actor_rollout_ref.rollout.tensor_model_parallel_size=1 \
actor_rollout_ref.rollout.name=vllm \
actor_rollout_ref.rollout.temperature=0.6 \
actor_rollout_ref.rollout.val_temperature=0.6 \
actor_rollout_ref.rollout.gpu_memory_utilization=0.85 \
actor_rollout_ref.rollout.n=12 \
actor_rollout_ref.rollout.n_val=6 \
actor_rollout_ref.ref.fsdp_config.param_offload=True \
algorithm.kl_ctrl.kl_coef=0.001 \
trainer.critic_warmup=0 \
trainer.logger=['console','wandb'] \
trainer.project_name='t1-3b' \
trainer.experiment_name='t1-3b-grpo-8k' \
+trainer.val_before_train=True \
trainer.n_gpus_per_node=6 \
trainer.nnodes=1 \
trainer.save_freq=50 \
trainer.test_freq=50 \
trainer.default_hdfs_dir=null \
trainer.total_epochs=5
```

- t1-3b-grpo-16k

```bash
# Stage 2: extend response length to 16k, resuming from the
# 8k stage's step-450 actor checkpoint.
CUDA_VISIBLE_DEVICES=2,3,4,5,6,7 python3 -m verl.trainer.main_ppo \
algorithm.adv_estimator=grpo \
data.train_files=/root/deepscaler/data/train.parquet \
data.val_files=/root/deepscaler/data/aime.parquet \
data.train_batch_size=48 \
data.val_batch_size=144 \
data.max_prompt_length=1024 \
data.max_response_length=16384 \
actor_rollout_ref.model.path=/workspace/R1/deepscaler/checkpoints/t1-3b/t1-3b-grpo-8k/actor/global_step_450 \
actor_rollout_ref.actor.optim.lr=1e-6 \
actor_rollout_ref.model.use_remove_padding=True \
actor_rollout_ref.actor.ppo_mini_batch_size=48 \
actor_rollout_ref.actor.ppo_micro_batch_size=48 \
actor_rollout_ref.actor.use_dynamic_bsz=True \
actor_rollout_ref.actor.ppo_max_token_len_per_gpu=32768 \
actor_rollout_ref.actor.use_kl_loss=True \
actor_rollout_ref.actor.kl_loss_coef=0.001 \
actor_rollout_ref.actor.kl_loss_type=low_var_kl \
actor_rollout_ref.actor.ulysses_sequence_parallel_size=1 \
actor_rollout_ref.model.enable_gradient_checkpointing=True \
actor_rollout_ref.actor.fsdp_config.param_offload=False \
actor_rollout_ref.actor.fsdp_config.grad_offload=False \
actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \
actor_rollout_ref.rollout.tensor_model_parallel_size=1 \
actor_rollout_ref.rollout.name=vllm \
actor_rollout_ref.rollout.temperature=0.6 \
actor_rollout_ref.rollout.val_temperature=0.6 \
actor_rollout_ref.rollout.gpu_memory_utilization=0.85 \
actor_rollout_ref.rollout.n=12 \
actor_rollout_ref.rollout.n_val=6 \
actor_rollout_ref.ref.fsdp_config.param_offload=True \
algorithm.kl_ctrl.kl_coef=0.001 \
trainer.critic_warmup=0 \
trainer.logger=['console','wandb'] \
trainer.project_name='t1-3b' \
trainer.experiment_name='t1-3b-grpo-16k' \
+trainer.val_before_train=True \
trainer.n_gpus_per_node=6 \
trainer.nnodes=1 \
trainer.save_freq=20 \
trainer.test_freq=20 \
trainer.default_hdfs_dir=null \
trainer.total_epochs=5
```

- t1-3b-grpo-32k

```bash
# Stage 3: extend response length to ~32k, resuming from the
# 16k stage's step-460 actor checkpoint; smaller batches and lower
# vLLM memory utilization to fit the longer rollouts.
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 -m verl.trainer.main_ppo \
algorithm.adv_estimator=grpo \
data.train_files=/root/deepscaler/data/train.parquet \
data.val_files=/root/deepscaler/data/aime.parquet \
data.train_batch_size=16 \
data.val_batch_size=16 \
data.max_prompt_length=1024 \
data.max_response_length=31744 \
actor_rollout_ref.model.path=/workspace/R1/deepscaler/checkpoints/t1-3b/t1-3b-grpo-16k/actor/global_step_460 \
actor_rollout_ref.actor.optim.lr=1e-6 \
actor_rollout_ref.model.use_remove_padding=True \
actor_rollout_ref.actor.ppo_mini_batch_size=16 \
actor_rollout_ref.actor.ppo_micro_batch_size=16 \
actor_rollout_ref.actor.use_dynamic_bsz=True \
actor_rollout_ref.actor.ppo_max_token_len_per_gpu=32768 \
actor_rollout_ref.actor.use_kl_loss=True \
actor_rollout_ref.actor.kl_loss_coef=0.001 \
actor_rollout_ref.actor.kl_loss_type=low_var_kl \
actor_rollout_ref.actor.ulysses_sequence_parallel_size=1 \
actor_rollout_ref.model.enable_gradient_checkpointing=True \
actor_rollout_ref.actor.fsdp_config.param_offload=False \
actor_rollout_ref.actor.fsdp_config.grad_offload=False \
actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \
actor_rollout_ref.rollout.tensor_model_parallel_size=1 \
actor_rollout_ref.rollout.name=vllm \
actor_rollout_ref.rollout.temperature=0.6 \
actor_rollout_ref.rollout.val_temperature=0.6 \
actor_rollout_ref.rollout.gpu_memory_utilization=0.6 \
actor_rollout_ref.rollout.n=8 \
actor_rollout_ref.rollout.n_val=8 \
actor_rollout_ref.ref.fsdp_config.param_offload=True \
algorithm.kl_ctrl.kl_coef=0.001 \
trainer.critic_warmup=0 \
trainer.logger=['console','wandb'] \
trainer.project_name='t1-3b' \
trainer.experiment_name='t1-3b-grpo-32k' \
+trainer.val_before_train=True \
trainer.n_gpus_per_node=8 \
trainer.nnodes=1 \
trainer.save_freq=20 \
trainer.test_freq=20 \
trainer.default_hdfs_dir=null \
trainer.total_epochs=5
```
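To smoke-test a trained model, here is a minimal inference sketch using the standard transformers API. The model id below is a placeholder (the base model's repo id); point it at whichever released or locally converted checkpoint you want to try:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: swap in the GRPO checkpoint's repo id or a local path
# converted to Hugging Face format.
model_id = "Amu/t1-3B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What is 17 * 23?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# temperature=0.6 matches the rollout temperature used during training.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```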
# What's next

I will release a tiny VLM (t1-vl-grpo).