SynLogic
Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond
SynLogic-7B is a logical reasoning model built on Qwen2.5-7B-Base and trained with reinforcement learning on our SynLogic dataset. Despite its modest 7B-parameter scale, the model demonstrates strong logical reasoning capabilities and generalizes effectively to mathematical domains.
Logical reasoning benchmarks:

| Model | KOR-Bench | BBH | BBEH |
|---|---|---|---|
| Qwen2.5-7B-Instruct | 38.6 | 62.7 | 12.4 |
| SynLogic-7B | 48.1 | 66.5 | 8.0 |
Mathematical reasoning benchmarks:

| Model | AIME 2024 | MATH 500 | AMC 2023 |
|---|---|---|---|
| Qwen2.5-7B-Base | 0.3 | 64.6 | 30.0 |
| Qwen2.5-7B-Instruct | 6.3 | 76.4 | 52.5 |
| SynLogic-7B | 10.0 | 71.8 | 55.0 |
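A minimal inference sketch with Hugging Face transformers, assuming the checkpoint is published on the Hub. The repo id below is a placeholder, and the puzzle prompt is only illustrative; the exact prompt template used during RL training is not shown in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: replace with the actual SynLogic-7B checkpoint path on the Hub.
model_id = "SynLogic/SynLogic-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a bf16-capable GPU
    device_map="auto",
)

# A simple knights-and-knaves style puzzle; treat this formatting as illustrative.
prompt = (
    "Solve the following puzzle step by step.\n"
    "A says: 'B is lying.' B says: 'C is lying.' C says: 'A and B are both lying.'\n"
    "Each person either always lies or always tells the truth. Who tells the truth?"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)

# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```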
Key Achievements:
- Improves logical reasoning over Qwen2.5-7B-Instruct: 48.1 vs. 38.6 on KOR-Bench and 66.5 vs. 62.7 on BBH.
- The logical-reasoning RL training generalizes to mathematics: SynLogic-7B outperforms Qwen2.5-7B-Instruct on AIME 2024 (10.0 vs. 6.3) and AMC 2023 (55.0 vs. 52.5).

Citation:
```bibtex
@misc{liu2025synlogic,
      title={SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond},
      author={Junteng Liu and Yuanxiang Fan and Zhuo Jiang and Han Ding and Yongyi Hu and Chi Zhang and Yiqi Shi and Shitong Weng and Aili Chen and Shiqi Chen and Yunan Huang and Mozhi Zhang and Pengyu Zhao and Junjie Yan and Junxian He},
      year={2025},
      eprint={2505.19641},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2505.19641},
}
```