---
base_model:
- gz987/qwen2.5-7b-cabs-v0.3
- bunnycore/Qwen-2.5-7b-s1k-lora_model
- simplescaling/s1.1-7B
- gz987/qwen2.5-7b-cabs-v0.3
- bunnycore/Qwen-2.5-7b-rp-lora
- marcuscedricridia/pre-cursa-o1-v1.2
- Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
- Krystalan/DRT-7B
- Qwen/Qwen2.5-7B-Instruct
- open-r1/OlympicCoder-7B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) as the base.
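For readers unfamiliar with the method: Model Stock averages the fine-tuned checkpoints and then interpolates back toward the base model, deriving the interpolation ratio from the geometry of the fine-tuned weights rather than from hand-set coefficients. A sketch of the core formula from the paper, where $k$ is the number of fine-tuned models, $\theta$ the angle the paper estimates between the fine-tuned weight perturbations, $\bar{w}$ the average of the fine-tuned weights, and $w_0$ the base weights (here Qwen/Qwen2.5-7B-Instruct):

$$
t = \frac{k\cos\theta}{1 + (k - 1)\cos\theta},
\qquad
w_{\text{merged}} = t\,\bar{w} + (1 - t)\,w_{0}
$$

Because the ratio is derived this way, the per-model `weight: 0.3` entries in the configuration below follow a linear-merge convention and are likely ignored by mergekit's `model_stock` implementation.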
### Models Merged

The following models were included in the merge:
- [gz987/qwen2.5-7b-cabs-v0.3](https://huggingface.co/gz987/qwen2.5-7b-cabs-v0.3) + [bunnycore/Qwen-2.5-7b-s1k-lora_model](https://huggingface.co/bunnycore/Qwen-2.5-7b-s1k-lora_model)
- [simplescaling/s1.1-7B](https://huggingface.co/simplescaling/s1.1-7B)
- [gz987/qwen2.5-7b-cabs-v0.3](https://huggingface.co/gz987/qwen2.5-7b-cabs-v0.3) + [bunnycore/Qwen-2.5-7b-rp-lora](https://huggingface.co/bunnycore/Qwen-2.5-7b-rp-lora)
- [marcuscedricridia/pre-cursa-o1-v1.2](https://huggingface.co/marcuscedricridia/pre-cursa-o1-v1.2)
- [Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview](https://huggingface.co/Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview)
- [Krystalan/DRT-7B](https://huggingface.co/Krystalan/DRT-7B)
- [open-r1/OlympicCoder-7B](https://huggingface.co/open-r1/OlympicCoder-7B)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Krystalan/DRT-7B
    parameters:
      weight: 0.3
  - model: simplescaling/s1.1-7B
    parameters:
      weight: 0.3
  - model: Krystalan/DRT-7B
    parameters:
      weight: 0.3
  - model: open-r1/OlympicCoder-7B
    parameters:
      weight: 0.3
  - model: marcuscedricridia/pre-cursa-o1-v1.2
    parameters:
      weight: 0.3
  - model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
  - model: gz987/qwen2.5-7b-cabs-v0.3+bunnycore/Qwen-2.5-7b-s1k-lora_model
  - model: gz987/qwen2.5-7b-cabs-v0.3+bunnycore/Qwen-2.5-7b-rp-lora
base_model: Qwen/Qwen2.5-7B-Instruct
merge_method: model_stock
parameters:
dtype: bfloat16
tokenizer_source: Qwen/Qwen2.5-7B-Instruct
```
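To reproduce the merge, this recipe can be fed to mergekit either through its `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./merged`) or through its Python entry points. Below is a minimal sketch, assuming mergekit is installed (`pip install mergekit`) and the YAML above is saved as `config.yaml`; the output and cache paths are placeholders:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe shown above (assumed saved as config.yaml).
with open("config.yaml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged",                       # placeholder output directory
    options=MergeOptions(
        cuda=False,                   # set True to merge on GPU
        copy_tokenizer=True,          # write a tokenizer into the output
        lazy_unpickle=True,           # lower peak RAM while reading shards
        lora_merge_cache="/tmp/lora", # cache for the two base+LoRA entries
    ),
)
```

Note that the two `base+lora` entries in the config are resolved by mergekit itself: it merges each LoRA adapter into its base model on the fly (caching the result under `lora_merge_cache`) before running the Model Stock merge.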