Dataset Viewer (auto-converted to Parquet)
Schema (ranges show the observed min and max per column across the 8 rows):

| column | dtype | min | max |
|---|---|---|---|
| dataset_name | string (8 classes) | – | – |
| pass_k [sample_size] | int64 | 4 | 64 |
| pass_0_final | float64 | 0.25 | 0.81 |
| pass_acc_final | float64 | 0.13 | 0.83 |
| pass_k_final | float64 | 0.46 | 0.98 |
| pass_0_math_verify | float64 | 0.25 | 0.81 |
| pass_acc_math_verify | float64 | 0.13 | 0.83 |
| pass_k_math_verify | float64 | 0.44 | 0.98 |
| pass_0_deepscaler | float64 | 0.25 | 0.8 |
| pass_acc_deepscaler | float64 | 0.13 | 0.82 |
| pass_k_deepscaler | float64 | 0.38 | 0.98 |
| path | string (8 classes) | – | – |
| config | string (8 classes) | – | – |
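The metric columns are not documented on this page, so their exact definitions are an assumption; pass_k is most plausibly a pass@k-style score over the sampled completions (sample_size is 64 or 4 per row), and pass_acc a mean per-sample accuracy. A minimal sketch of the standard unbiased pass@k estimator over hypothetical per-sample rewards:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k draws
    (without replacement) from n samples, c of which are correct,
    is correct: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples, so every draw of k hits a correct one
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical per-sample correctness for one problem (4 samples, 3 correct)
rewards = [1, 0, 1, 1]
n, c = len(rewards), sum(rewards)

mean_acc = c / n              # average per-sample accuracy -> 0.75
p_at_n = pass_at_k(n, c, n)   # pass@n: 1.0 iff any sample is correct
```

If the rewards stored in each row's results_rewards.ds file are per-sample 0/1 scores, the table's pass columns could be aggregates of this kind; treat that mapping as a guess, not a documented fact.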
Results (one row per evaluation dataset; pass_0 / pass_acc / pass_k reported under three scoring schemes: final, math_verify, deepscaler):

| dataset_name | sample_size | pass_0_final | pass_acc_final | pass_k_final | pass_0_math_verify | pass_acc_math_verify | pass_k_math_verify | pass_0_deepscaler | pass_acc_deepscaler | pass_k_deepscaler |
|---|---|---|---|---|---|---|---|---|---|---|
| amc23 | 64 | 0.711328 | 0.625 | 0.975 | 0.711328 | 0.625 | 0.975 | 0.711328 | 0.625 | 0.975 |
| aime_2024 | 64 | 0.326042 | 0.266667 | 0.833333 | 0.326042 | 0.266667 | 0.833333 | 0.326042 | 0.266667 | 0.833333 |
| aime_2025 | 64 | 0.245313 | 0.133333 | 0.6 | 0.245313 | 0.133333 | 0.6 | 0.245313 | 0.133333 | 0.6 |
| olympiadbench | 4 | 0.47 | 0.468148 | 0.608889 | 0.469259 | 0.468148 | 0.607407 | 0.444444 | 0.445926 | 0.574815 |
| minerva_math | 4 | 0.3125 | 0.319853 | 0.455882 | 0.303309 | 0.3125 | 0.4375 | 0.245404 | 0.257353 | 0.375 |
| math_500 | 4 | 0.812 | 0.832 | 0.92 | 0.81 | 0.83 | 0.918 | 0.803 | 0.824 | 0.91 |
| Math-Bench-final | 4 | 0.392857 | 0.390977 | 0.601504 | 0.390038 | 0.387218 | 0.597744 | 0.359962 | 0.353383 | 0.56391 |
| math_gaokao2023en | 4 | 0.684416 | 0.683117 | 0.828571 | 0.679221 | 0.677922 | 0.820779 | 0.67013 | 0.667532 | 0.81039 |

Every row's path column follows the same pattern:

/home/slurm/tuenv2/open_i1_project/llm_utils/r1-1.5b/results_final/{sample_size}/{dataset_name}/results_rewards.ds

Every row's config column shares the same generation settings:

{"model_name_or_path": "/home/slurm/tuenv2/open_i1_project/llm_utils/r1-1.5b", "temperature": 0.6, "top_p": 0.95, "max_tokens": 32768, "format_str": "{problem}\\n\\n.Please reason step by step, and put your final answer within \\boxed{{}}.", "system_prompt": null}

and the configs differ only in dataset_name_or_path, output_dir (the path above minus the file name), sample_size (64 for amc23, aime_2024, and aime_2025; 4 for the rest), and the column mapping: Math-Bench-final uses answer_row "final_answer" and generation_row "problem_translation", while all other rows use "answer" and "problem".

Per-row dataset_name_or_path:

| dataset_name | dataset_name_or_path |
|---|---|
| amc23 | tuenguyen/eval_math_amc23 |
| aime_2024 | HuggingFaceH4/aime_2024 |
| aime_2025 | yentinglin/aime_2025 |
| olympiadbench | tuenguyen/eval_math_olympiadbench |
| minerva_math | tuenguyen/eval_math_minerva_math |
| math_500 | HuggingFaceH4/MATH-500 |
| Math-Bench-final | Intelligent-Internet/Math-Bench-final |
| math_gaokao2023en | tuenguyen/eval_math_gaokao2023en |
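The stored format_str suggests how each prompt was built. A small sketch of applying it (assuming Python str.format semantics, which matches the {problem} placeholder and the {{}} escaping in the JSON; after JSON decoding, "\\n" is a newline):

```python
# format_str copied verbatim from the config column; under str.format,
# "{problem}" is substituted and "{{}}" becomes a literal "{}".
format_str = "{problem}\n\n.Please reason step by step, and put your final answer within \\boxed{{}}."

prompt = format_str.format(problem="What is 2 + 2?")
# -> "What is 2 + 2?\n\n.Please reason step by step, and put your final answer within \boxed{}."
```

Note the stray "." before "Please" appears verbatim in every stored config; it looks like a typo preserved from the original runs rather than an intentional part of the prompt.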
README.md exists but content is empty.
Collection including tuenguyen/r1_eval