---
base_model:
- SvalTek/Qwen2.5-ColdBrew-Erratik-test
- fblgit/cybertron-v4-qw7B-UNAMGS
- FuseAI/FuseChat-Qwen-2.5-7B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [SvalTek/Qwen2.5-ColdBrew-Erratik-test](https://huggingface.co/SvalTek/Qwen2.5-ColdBrew-Erratik-test) as the base.

### Models Merged

The following models were included in the merge:

* [fblgit/cybertron-v4-qw7B-UNAMGS](https://huggingface.co/fblgit/cybertron-v4-qw7B-UNAMGS)
* [FuseAI/FuseChat-Qwen-2.5-7B-Instruct](https://huggingface.co/FuseAI/FuseChat-Qwen-2.5-7B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
name: Qwen2.5-ColdBrew
models:
  - model: SvalTek/Qwen2.5-ColdBrew-Erratik-test
    parameters:
      density: 0.8
      weight: 0.5
  - model: fblgit/cybertron-v4-qw7B-UNAMGS
    parameters:
      density: 0.5
      weight: 0.3
  - model: FuseAI/FuseChat-Qwen-2.5-7B-Instruct
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: SvalTek/Qwen2.5-ColdBrew-Erratik-test
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
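
### Usage

The merge itself can be reproduced by saving the YAML above to a file and running it through mergekit (for example with the `mergekit-yaml config.yaml ./output-dir` entry point). Because the result is a standard Qwen2.5-architecture checkpoint (`library_name: transformers`), it loads like any other causal LM. The snippet below is a minimal sketch; `your-username/Qwen2.5-ColdBrew` is a placeholder for the repository id where this merge is published, or a local path to the mergekit output directory.

```python
# Minimal sketch of loading and prompting the merged model with transformers.
# "your-username/Qwen2.5-ColdBrew" is a placeholder; substitute the actual
# repository id or a local path to the mergekit output directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Qwen2.5-ColdBrew"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short haiku about coffee."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```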