---
base_model:
- SicariusSicariiStuff/Negative_LLAMA_70B
- TheDrummer/Anubis-70B-v1
library_name: transformers
license: llama3.3
tags:
- mergekit
- merge
---
![Enthralling Creatures](Negative-Anubis.png)

# Negative Drummer

I enjoyed [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B), but its prose was too dry for my taste, so I merged it with [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1) for verbosity. Anubis has a positivity bias, so Negative can balance things out.

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

GGUF quants:

- GGUF (static): [bartowski/Negative-Anubis-70B-v1-GGUF](https://huggingface.co/bartowski/Negative-Anubis-70B-v1-GGUF)
- ~~GGUF (static), alternative: [mradermacher/Negative-Anubis-70B-v1-GGUF](https://huggingface.co/mradermacher/Negative-Anubis-70B-v1-GGUF)~~
- ~~GGUF (weighted/imatrix): [mradermacher/Negative-Anubis-70B-v1-i1-GGUF](https://huggingface.co/mradermacher/Negative-Anubis-70B-v1-i1-GGUF)~~

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:

* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
* [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
  - model: TheDrummer/Anubis-70B-v1
merge_method: slerp
base_model: TheDrummer/Anubis-70B-v1
parameters:
  t: [0.1, 0.55, 1, 0.55, 0.1]
dtype: bfloat16
```
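For intuition: SLERP (spherical linear interpolation) blends two weight tensors along the arc between them on the hypersphere rather than along a straight line, which preserves the norm of the result better than plain averaging. The `t` list above is a gradient over layer depth, so layers near the ends stay close to the base model (`t` = 0.1) while the middle layers pull fully toward the other model (`t` = 1). Below is a minimal, stdlib-only sketch of the SLERP formula itself, not mergekit's actual implementation; the fallback to linear interpolation for nearly parallel vectors is my own assumption about how degenerate cases are typically handled:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherically interpolate between vectors v0 (t=0) and v1 (t=1)."""
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    # Angle between the two vectors, clamped for numerical safety.
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# At the endpoints, SLERP returns one model's weights unchanged:
print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))  # [1.0, 0.0]
print(slerp(1.0, [1.0, 0.0], [0.0, 1.0]))  # [0.0, 1.0]
```

With the schedule above, a layer where `t = 0.55` lands slightly past the midpoint of the arc toward Negative_LLAMA, which is how the merge keeps Anubis's verbosity at the ends while letting Negative dominate the middle of the stack.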