---
base_model:
- Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
- AlexCuadron/dpo_roleplay
library_name: transformers
tags:
- mergekit
- merge
---
# output

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.

### Models Merged

The following models were included in the merge:
* [Nitral-AI/Captain-Eris_Violet-GRPO-v0.420](https://huggingface.co/Nitral-AI/Captain-Eris_Violet-GRPO-v0.420)
* [AlexCuadron/dpo_roleplay](https://huggingface.co/AlexCuadron/dpo_roleplay)

### Configuration

The following YAML configuration was used to produce this model. The `t` values set the SLERP interpolation factor for each tensor group: `t=0` keeps the `base_model` weights, `t=1` takes the other model's, and list values are interpolated across the layer range.

```yaml
merge_method: slerp
slices:
- sources:
  - model: Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
    layer_range: [0, 40]
  - model: AlexCuadron/dpo_roleplay
    layer_range: [0, 40]
base_model: Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
parameters:
  t:
  - filter: self_attn
    value: [0.7, 0.6, 0.6, 0.5, 0.4]
  - filter: mlp
    value: [0.3, 0.4, 0.5, 0.6, 0.7]
  - filter: "layer=[0:10]"
    value: 0.5
  - filter: "layer=[30:40]"
    value: 0.6
  - value: 0.55
dtype: bfloat16
```
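
## Usage

A minimal usage sketch for loading the merged checkpoint with `transformers`, assuming the weights are published to the Hugging Face Hub; the repo id `your-username/output` is a placeholder, not the actual upload. The merge itself can be reproduced by saving the configuration above to a file and running mergekit's `mergekit-yaml` command on it.

```python
# Minimal sketch: load the merged model and generate text with transformers.
# NOTE: "your-username/output" is a placeholder repo id; replace it with the
# repository where this merge is actually hosted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/output"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",           # requires accelerate; remove to load on CPU
)

prompt = "Write a short greeting in character as a ship captain."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```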