technicolor consists of the following SLERP merge, which was then combined with the LoRA-augmented models listed below to produce rainbow:
slices:
  - sources:
      - model: paulml/OGNO-7B
        layer_range: [0, 32]
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
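In a SLERP merge, each pair of weight tensors is interpolated along the arc between them on a hypersphere rather than along a straight line; the t lists above define a gradient of interpolation strengths across the 32 layers for the self_attn and mlp tensors, with 0.5 used everywhere else. A minimal sketch of the interpolation step itself, operating on flattened tensors; this is illustrative only, not mergekit's actual implementation:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns a, t=1 returns b; intermediate t follows the arc between
    the two (normalized) vectors on the hypersphere.
    """
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(a_dir @ b_dir, -1.0, 1.0)
    theta = torch.arccos(dot)
    if theta < 1e-4:  # nearly parallel vectors: plain lerp is numerically safer
        return (1 - t) * a + t * b
    s = torch.sin(theta)
    mixed = (torch.sin((1 - t) * theta) / s) * a_flat + (torch.sin(t * theta) / s) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```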
rainbow
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the task arithmetic merge method, with technicolor as the base.
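Task arithmetic computes a "task vector" (the parameter-wise difference between each fine-tuned model and the base) and adds a weighted sum of those vectors back onto the base. A minimal sketch of the idea, assuming state-dict-style tensor maps and that normalize: true rescales the weights to sum to 1 (both assumptions; this is not mergekit's actual code). With the config below, all four weights are 1, so under that assumption each task vector contributes equally:

```python
import torch

def task_arithmetic(base: dict[str, torch.Tensor],
                    models: list[dict[str, torch.Tensor]],
                    weights: list[float],
                    normalize: bool = True) -> dict[str, torch.Tensor]:
    """Illustrative task arithmetic: base + sum_i w_i * (model_i - base)."""
    total = sum(weights)
    scaled = [w / total for w in weights] if normalize and total != 0 else weights
    merged = {}
    for name, base_w in base.items():
        delta = torch.zeros_like(base_w)
        for w, model in zip(scaled, models):
            delta += w * (model[name] - base_w)  # weighted task vector
        merged[name] = base_w + delta
    return merged
```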
Models Merged
The following models were included in the merge; the + syntax applies each LoRA adapter to technicolor before merging (see the sketch after the configuration):
- technicolor + jeiku/Theory_of_Mind_Mistral
- technicolor + jeiku/Gnosis_Reformatted_Mistral
- technicolor + Undi95/Mistral-7B-small_pippa_limaRP-v3-lora
- technicolor + jeiku/Theory_of_Mind_Roleplay_Mistral
Configuration
The following YAML configuration was used to produce this model:
merge_method: task_arithmetic
base_model: technicolor
parameters:
  normalize: true
models:
  - model: technicolor+jeiku/Theory_of_Mind_Roleplay_Mistral
    parameters:
      weight: 1
  - model: technicolor+jeiku/Theory_of_Mind_Mistral
    parameters:
      weight: 1
  - model: technicolor+jeiku/Gnosis_Reformatted_Mistral
    parameters:
      weight: 1
  - model: technicolor+Undi95/Mistral-7B-small_pippa_limaRP-v3-lora
    parameters:
      weight: 1
dtype: float16
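The model+lora syntax above tells mergekit to apply each LoRA adapter to technicolor before the task vectors are computed. Applying a LoRA folds a low-rank update into each targeted weight matrix; a minimal sketch using the standard LoRA scaling convention (assumed here, since the adapters' actual ranks and alphas are not shown):

```python
import torch

def apply_lora(weight: torch.Tensor, lora_a: torch.Tensor, lora_b: torch.Tensor,
               alpha: float, rank: int) -> torch.Tensor:
    """Fold a LoRA adapter into a base weight: W' = W + (alpha / r) * (B @ A).

    lora_a has shape (r, in_features) and lora_b has shape (out_features, r),
    following the standard LoRA parameterization.
    """
    return weight + (alpha / rank) * (lora_b @ lora_a)
```

Note that technicolor in this config refers to the intermediate merge above, which must exist as a local model directory when the config is run.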