I enjoyed SicariusSicariiStuff/Negative_LLAMA_70B, but its prose was too dry for my taste, so I merged it with TheDrummer/Anubis-70B-v1 for verbosity. Anubis has a positivity bias, so Negative should balance things out.
This is a merge of pre-trained language models created using mergekit.
GGUF Quants:
This model was merged using the SLERP merge method.
The following models were included in the merge:

- SicariusSicariiStuff/Negative_LLAMA_70B
- TheDrummer/Anubis-70B-v1
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
  - model: TheDrummer/Anubis-70B-v1
merge_method: slerp
base_model: TheDrummer/Anubis-70B-v1
parameters:
  t: [0.1, 0.55, 1, 0.55, 0.1]
dtype: bfloat16
```
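For readers curious what the `t` list above does: SLERP interpolates between the two models' weights along the surface of a hypersphere rather than in a straight line, and a list of `t` values is stretched across the layer stack as a gradient. The sketch below is an illustrative NumPy implementation, not mergekit's actual code; the 80-layer count is an assumption based on Llama-70B-class architectures.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight vectors."""
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = float(np.clip(np.dot(v0n, v1n), -1.0, 1.0))
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: plain linear interpolation is numerically safer.
        return (1 - t) * v0 + t * v1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1

# The t gradient from the config, stretched across the layer stack
# (80 layers assumed here, typical for Llama-70B-class models):
t_schedule = [0.1, 0.55, 1, 0.55, 0.1]
num_layers = 80
anchors = np.linspace(0, 1, len(t_schedule))
per_layer_t = np.interp(np.linspace(0, 1, num_layers), anchors, t_schedule)
# t near 0 keeps a layer close to one endpoint model; t near 1 moves it
# almost entirely to the other. With this schedule, the first and last
# layers stay near t = 0.1 while the middle layers peak at t = 1.
```

In other words, the blend is strongest in the middle of the network and lightest at the ends, which is a common way to mix a model's style while preserving its input and output behavior.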