|
--- |
|
library_name: transformers |
|
license: apache-2.0 |
|
base_model: Qwen/Qwen2.5-32B-Instruct |
|
tags: |
|
- llama-factory |
|
- full |
|
- generated_from_trainer |
|
model-index: |
|
- name: OpenThinker2-32B |
|
results: [] |
|
datasets: |
|
- open-thoughts/OpenThoughts2-1M |
|
--- |
|
|
|
<p align="center"> |
|
<img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%"> |
|
</p> |
|
|
|
# OpenThinker2-32B |
|
|
|
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the |
|
[OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) dataset. |
|
|
|
The [OpenThinker2-32B](https://huggingface.co/open-thoughts/OpenThinker2-32B) model is the highest-performing open-data reasoning model. |
|
This model improves upon our previous [OpenThinker-32B](https://huggingface.co/open-thoughts/OpenThinker-32B) model, which was trained on 114k examples from [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k). |
|
All numbers reported in the table below were evaluated with our open-source tool [Evalchemy](https://github.com/mlfoundations/Evalchemy). |
|
|
|
| Model | Data | AIME24 | AIME25 | AMC23 | MATH500 | GPQA-D | LCBv2 |
| ----------------------------------------------------------------------------------------------- | ---- | ------ | ------ | ----- | ------- | ------ | ----- |
| [OpenThinker2-32B](https://huggingface.co/open-thoughts/OpenThinker2-32B)                           | ✅   | 76.7   | 58.7   | 94.0  | 90.8    | 64.1   | 72.5  |
| [OpenThinker-32B](https://huggingface.co/open-thoughts/OpenThinker-32B)                             | ✅   | 68.0   | 49.3   | 95.5  | 90.6    | 63.5   | 68.6  |
| [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)     | ❌   | 74.7   | 50.0   | 96.5  | 90.0    | 65.8   | 72.3  |
| [Light-R1-32B](https://huggingface.co/qihoo360/Light-R1-32B)                                        | ✅   | 74.7   | 58.0   | 96.0  | 90.4    | 62.0   | 56.0  |
| [S1.1-32B](https://huggingface.co/simplescaling/s1.1-32B)                                           | ✅   | 59.3   | 42.7   | 91.5  | 87.4    | 62.0   | 58.7  |
|
|
|
|
|
## Data |
|
|
|
This model was trained on the [OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) dataset. |
|
|
|
The [OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) dataset was constructed by augmenting [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k) with existing datasets like [OpenR1](https://huggingface.co/open-r1), as well as additional math and code reasoning data. |
|
We generated the additional math and code data by ablating over 26 different question-generation methodologies and sampling from the highest-performing ones. |
|
|
|
See the [OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) dataset page or our [blog post](https://www.open-thoughts.ai/blog/thinkagain) for additional information. |
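
For quick inspection, the dataset can be loaded with the `datasets` library. The sketch below streams examples to avoid downloading the full 1M-example dump up front; field names are not assumed here and should be checked against the dataset card.

```python
# Minimal sketch for browsing OpenThoughts2-1M; streaming avoids a full local download.
from datasets import load_dataset

ds = load_dataset("open-thoughts/OpenThoughts2-1M", split="train", streaming=True)

example = next(iter(ds))
print(example.keys())  # inspect the available fields before filtering or training
```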
|
|
|
|
|
## Intended uses & limitations |
|
|
|
This model is released under the Apache 2.0 license. |
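
As a rough illustration of the intended use (chat-style reasoning over math and code problems), the model can be loaded with the `transformers` library. The snippet below is a minimal sketch; the example prompt and generation settings are illustrative assumptions, not recommendations from the training team.

```python
# Minimal inference sketch; the prompt and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker2-32B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=4096)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Note that a 32B-parameter model in bf16 needs on the order of 64 GB of accelerator memory for the weights alone, so multi-GPU inference (handled above by `device_map="auto"`) or a dedicated serving stack is usually required.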
|
|
|
|
|
## Training procedure |
|
|
|
We trained the model for 50 hours on 128 nodes with 4 A100 GPUs each (512 GPUs in total, roughly 25,600 A100-hours). |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 8e-05 |
|
- seed: 42 |
|
- distributed_type: multi-GPU |
|
- num_devices: 512 |
|
- gradient_accumulation_steps: 1 |
|
- total_train_batch_size: 512 |
|
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments |
|
- lr_scheduler_type: cosine |
|
- lr_scheduler_warmup_ratio: 0.1 |
|
- num_epochs: 5.0 |
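
For orientation, these settings map onto the standard `transformers` `TrainingArguments` roughly as sketched below. The per-device batch size of 1 is inferred from the numbers above (512 devices × 1 per device × 1 accumulation step = total batch size 512); anything not listed on this card, such as the output path, precision, or sequence length, is an assumption and marked as illustrative.

```python
# Sketch of the reported hyperparameters expressed as transformers TrainingArguments.
# per_device_train_batch_size is inferred: 512 devices * 1 * 1 accumulation step = 512 total.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="openthinker2-32b-sft",  # illustrative path, not from the card
    learning_rate=8e-5,
    seed=42,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    optim="adamw_torch",                # AdamW with betas=(0.9, 0.999), eps=1e-8 (torch defaults)
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
)
```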
|
|
|
### Framework versions |
|
|
|
- Transformers 4.46.1 |
|
- Pytorch 2.3.0 |
|
- Datasets 3.1.0 |
|
- Tokenizers 0.20.3 |
|
|
|
More info can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts). |
|
|
|
# Citation |
|
``` |
|
@misc{openthoughts,
  author       = {Team, OpenThoughts},
  month        = jan,
  title        = {{Open Thoughts}},
  howpublished = {https://open-thoughts.ai},
  year         = {2025}
}
|
``` |
|
|
|
# Links |
|
- [OpenThoughts2 and OpenThinker2 Blog Post](https://www.open-thoughts.ai/blog/thinkagain)
- [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
- [OpenThoughts2-1M dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M)
- [OpenThinker2-7B model](https://huggingface.co/open-thoughts/OpenThinker2-7B)
- [OpenThinker2-32B model](https://huggingface.co/open-thoughts/OpenThinker2-32B) - this model.
|
|