---
base_model:
- bunnycore/Qwen2.5-7B-RRP-1M
- bunnycore/Qwen-2.5-7B-Deep-Stock-v4
- bunnycore/Qwen2.5-7B-CyberRombos
- bunnycore/Qwen-2.1-7b-Persona-lora_model
- bunnycore/Qwen2.5-7B-RRP-1M-Thinker
- bunnycore/QandoraExp-7B
- bunnycore/Qwen-2.5-7b-rp-lora
library_name: transformers
tags:
- mergekit
- merge
---

# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method

This model was merged using the Model Stock merge method, with bunnycore/Qwen2.5-7B-RRP-1M as the base model.
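Model Stock picks the interpolation ratio between the base model and the average of the fine-tuned models from the geometry of their weight deltas: the more the task vectors agree in direction, the more weight the average receives. A simplified per-tensor sketch (the function name and the averaged pairwise cosine are illustrative assumptions, not mergekit's exact implementation):

```python
import numpy as np

def model_stock_merge(base, finetuned):
    """Simplified per-tensor Model Stock merge (sketch, not mergekit's code).

    base: np.ndarray of base-model weights.
    finetuned: list of np.ndarray, same shape as base.
    """
    k = len(finetuned)
    deltas = [w - base for w in finetuned]  # task vectors

    # Average pairwise cosine similarity between task vectors.
    cos_vals = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cos_vals.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    cos_t = float(np.mean(cos_vals)) if cos_vals else 1.0

    # Interpolation ratio from the Model Stock paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_t / (1 + (k - 1) * cos_t)

    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base
```

When the fine-tuned models agree (cosine near 1), t approaches 1 and the merge is close to their plain average; when their task vectors are near-orthogonal, t shrinks and the result stays near the base weights.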
### Models Merged
The following models were included in the merge:
- bunnycore/Qwen-2.5-7B-Deep-Stock-v4
- bunnycore/Qwen2.5-7B-CyberRombos + bunnycore/Qwen-2.1-7b-Persona-lora_model
- bunnycore/Qwen2.5-7B-RRP-1M-Thinker
- bunnycore/QandoraExp-7B + bunnycore/Qwen-2.5-7b-rp-lora
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: bunnycore/Qwen2.5-7B-RRP-1M-Thinker
    parameters:
      weight: 0.5
  - model: bunnycore/QandoraExp-7B+bunnycore/Qwen-2.5-7b-rp-lora
  - model: bunnycore/Qwen-2.5-7B-Deep-Stock-v4
  - model: bunnycore/Qwen2.5-7B-CyberRombos+bunnycore/Qwen-2.1-7b-Persona-lora_model
base_model: bunnycore/Qwen2.5-7B-RRP-1M
merge_method: model_stock
parameters:
  dtype: bfloat16
tokenizer_source: bunnycore/Qwen2.5-7B-RRP-1M
```
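To reproduce the merge, this configuration can be saved to a file (e.g. `merge.yaml`, a hypothetical name) and passed to mergekit's `mergekit-yaml` CLI. A quick structural sanity check of the config with PyYAML before launching the merge:

```python
import yaml

CONFIG = """\
models:
  - model: bunnycore/Qwen2.5-7B-RRP-1M-Thinker
    parameters:
      weight: 0.5
  - model: bunnycore/QandoraExp-7B+bunnycore/Qwen-2.5-7b-rp-lora
  - model: bunnycore/Qwen-2.5-7B-Deep-Stock-v4
  - model: bunnycore/Qwen2.5-7B-CyberRombos+bunnycore/Qwen-2.1-7b-Persona-lora_model
base_model: bunnycore/Qwen2.5-7B-RRP-1M
merge_method: model_stock
parameters:
  dtype: bfloat16
tokenizer_source: bunnycore/Qwen2.5-7B-RRP-1M
"""

config = yaml.safe_load(CONFIG)

# Basic checks: the method, the base model, and all four merge entries are present.
assert config["merge_method"] == "model_stock"
assert config["base_model"] == "bunnycore/Qwen2.5-7B-RRP-1M"
assert len(config["models"]) == 4
```

Note that the `model-name+lora-name` entries apply a LoRA adapter on top of the named model before merging, so each `+` line contributes a single merged input.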