---
base_model: unsloth/Qwen2.5-3B-Instruct
datasets:
- open-r1/OpenR1-Math-220k
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2.5
- trl
- sft
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
language:
- en
---
<picture>
<img alt="image" src="https://huggingface.co/tensopolis/assets/resolve/main/logo_512.png">
</picture>
## qwen2.5-3b-or1-tensopolis
This model is a **reasoning** fine-tune of **unsloth/Qwen2.5-3B-Instruct**, trained on **1xA100** for about **50 hours**. Please refer to the base model and dataset for more information about the license, prompt format, etc.
Base model: [**Qwen/Qwen2.5-3B-Instruct**](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)
Dataset: [**open-r1/OpenR1-Math-220k**](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k)
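A minimal usage sketch with Hugging Face `transformers` is shown below. The repo id `tensopolis/qwen2.5-3b-or1-tensopolis` is assumed from this card's heading, and the chat format follows the base Qwen2.5-Instruct model; adjust either if they differ.

```python
def build_messages(problem: str) -> list[dict]:
    """Wrap a math problem in the chat format the base model expects."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": problem},
    ]


if __name__ == "__main__":
    # Heavy model load kept behind the main guard; requires `transformers`
    # and enough memory for a 3B-parameter model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumed repo id, taken from the heading of this model card.
    model_id = "tensopolis/qwen2.5-3b-or1-tensopolis"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = build_messages("What is the integral of x^2 from 0 to 1?")
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens.
    print(
        tokenizer.decode(
            out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
        )
    )
```

Since this is a reasoning fine-tune on math data, prompts phrased as self-contained math problems are the expected input.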
This Qwen2.5 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)