---
base_model:
- Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
- Qwen/Qwen2.5-7B-Instruct
- gz987/qwen2.5-7b-cabs-v0.3
library_name: transformers
tags:
- mergekit
- merge
---
# dare_ties

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) as the base.

### Models Merged

The following models were included in the merge:
* [Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview](https://huggingface.co/Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview)
* [gz987/qwen2.5-7b-cabs-v0.3](https://huggingface.co/gz987/qwen2.5-7b-cabs-v0.3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: Qwen/Qwen2.5-7B-Instruct
dtype: float16
merge_method: dare_ties
modules:
  default:
    slices:
    - sources:
      - layer_range: [0, 28]
        model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
        parameters:
          density: 0.8773638560987191
          weight: 1.0832441422096908
      - layer_range: [0, 28]
        model: gz987/qwen2.5-7b-cabs-v0.3
        parameters:
          density: 0.8773638560987191
          weight: 1.0832441422096908
      - layer_range: [0, 28]
        model: Qwen/Qwen2.5-7B-Instruct
parameters:
  normalize: 0.0
```
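In this config, `density` is the fraction of each fine-tune's delta (its parameter difference from the base model) that DARE retains after random dropping, `weight` scales that model's contribution before the TIES sign-election step, and `normalize: 0.0` leaves the combined deltas unnormalized rather than dividing by the sum of weights. The following single-tensor sketch illustrates the method; it is a simplified illustration, not mergekit's actual implementation:

```python
import torch

def dare_ties(base, tuned, densities, weights, seed=0):
    """Merge one tensor from several fine-tunes into the base (simplified sketch)."""
    torch.manual_seed(seed)
    deltas = []
    for ft, density, weight in zip(tuned, densities, weights):
        delta = ft - base                                    # task vector vs. the base
        mask = torch.bernoulli(torch.full_like(delta, density))
        deltas.append(weight * delta * mask / density)       # DARE: random drop, then rescale
    stacked = torch.stack(deltas)
    elected = torch.sign(stacked.sum(dim=0))                 # TIES: per-parameter sign election
    agree = (torch.sign(stacked) == elected).to(stacked.dtype)
    merged_delta = (stacked * agree).sum(dim=0)              # keep only sign-agreeing deltas
    return base + merged_delta                               # normalize: 0.0 -> no weight renorm

# Toy demonstration on random tensors standing in for two fine-tunes of one base.
base = torch.randn(4, 4)
tuned = [base + 0.1 * torch.randn(4, 4) for _ in range(2)]
merged = dare_ties(base, tuned, densities=[0.877, 0.877], weights=[1.083, 1.083])
```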
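## Usage

To reproduce the merge, save the configuration above to a file (e.g. `config.yaml`, a hypothetical filename) and run mergekit's CLI: `mergekit-yaml config.yaml ./output-model-directory`. The merged model loads like any other `transformers` causal LM. A minimal example follows; the model id is a placeholder, so replace it with this repository's id or a local path:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/dare_ties"  # hypothetical id; replace with this repo or a local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "In one sentence, what is model merging?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```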