# Emilia-Multislerp-12B

This is probably the first multislerp model on Hugging Face.
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the Multi-SLERP merge method, with yamatazen/Orihime-12B as the base model.
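For intuition, multislerp generalizes two-model SLERP to several checkpoints: each tensor is interpolated on the hypersphere around the base model's corresponding tensor rather than averaged linearly, which preserves weight magnitudes better than a plain weighted mean. The sketch below is a minimal illustration of one such construction (a weighted tangent-space mean via log/exp maps around the base point); it is not mergekit's actual implementation, and the function name, per-tensor flattening, and final norm rescaling are assumptions for illustration.

```python
import torch

def multislerp_sketch(base: torch.Tensor,
                      tensors: list[torch.Tensor],
                      weights: list[float],
                      eps: float = 1e-8) -> torch.Tensor:
    """Weighted spherical interpolation of several tensors around `base`.

    Illustrative only: maps each tensor into the tangent space at the
    base direction (log map), averages there, and maps back (exp map).
    """
    w = torch.tensor(weights, dtype=base.dtype)
    w = w / w.sum()                       # mirrors `normalize: true` below
    b = base.flatten()
    b_unit = b / (b.norm() + eps)
    tangent = torch.zeros_like(b)
    for wi, t in zip(w, tensors):
        v_unit = t.flatten() / (t.flatten().norm() + eps)
        cos = torch.clamp(torch.dot(v_unit, b_unit), -1.0, 1.0)
        theta = torch.arccos(cos)         # angle from the base direction
        direction = v_unit - cos * b_unit
        norm = direction.norm()
        if norm > eps:
            tangent += wi * theta * direction / norm
    angle = tangent.norm()
    if angle < eps:                       # all inputs aligned with base
        return base.clone()
    merged_unit = torch.cos(angle) * b_unit + torch.sin(angle) * tangent / angle
    # Assumed rescaling: blend the input magnitudes with the same weights.
    scale = sum(wi * t.norm() for wi, t in zip(w, tensors))
    return (merged_unit * scale).reshape(base.shape)
```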
### Models Merged

The following models were included in the merge:

* nbeerbower/mistral-nemo-gutenberg-12B-v4
* natong19/Mistral-Nemo-Instruct-2407-abliterated
### Configuration

The following YAML configuration was used to produce this model:
```yaml
base_model: yamatazen/Orihime-12B
models:
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
    parameters:
      weight: [0.25, 0.3, 0.5, 0.6, 0.75]
  - model: natong19/Mistral-Nemo-Instruct-2407-abliterated
    parameters:
      weight: [0.25, 0.3, 0.5, 0.3, 0.25]
merge_method: multislerp
dtype: bfloat16
out_dtype: bfloat16
parameters:
  normalize: true
tokenizer:
  source: union
```
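The bracketed weight lists act as gradients: mergekit interpolates a short list of values across the model's layers, so here the gutenberg model's influence ramps from 0.25 in the early layers to 0.75 near the output, while the abliterated Instruct model peaks at 0.5 in the middle layers. The helper below is a hypothetical illustration of that expansion, assuming simple linear interpolation and 40 decoder layers for Mistral-Nemo-12B; mergekit's exact layer segmentation may differ.

```python
import numpy as np

def expand_gradient(gradient: list[float], num_layers: int) -> list[float]:
    """Linearly interpolate a short weight gradient across all layers."""
    anchors = np.linspace(0.0, 1.0, num=len(gradient))
    layers = np.linspace(0.0, 1.0, num=num_layers)
    return np.interp(layers, anchors, gradient).tolist()

# Assumed 40 decoder layers for Mistral-Nemo-12B (illustration only).
print(expand_gradient([0.25, 0.3, 0.5, 0.6, 0.75], 40))
```

To reproduce the merge, save the configuration above as `config.yaml` and run `mergekit-yaml config.yaml ./Emilia-Multislerp-12B`.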