---
base_model:
- cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
- ReadyArt/Omega-Darker_The-Final-Directive-24B
- aixonlab/Eurydice-24b-v3
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
- ReadyArt/Forgotten-Safeword-24B-v4.0
library_name: transformers
tags:
- mergekit
- merge
---

# Erotophobia-24B-v1.1

![Model Banner](banner.png)

My ~~first~~ second merge and model ~~ever~~! Literally depraved.

I simplified the configuration and replaced the personality model. I think it fits together nicely now and produces an actual working model. Since this was my first model, I kept thinking it was very good. Don't get me wrong, it is "working as intended", but try it out for yourself!

It is very, very **VERY** obedient; it will do what you tell it to do! It works in both assistant mode and roleplay mode. It has a deep understanding of personality and is quite visceral when describing organs; the narrative feels wholesome, but explicit when you need it. I personally like it very much and it fits my use case.

Heavily inspired by [FlareRebellion/DarkHazard-v1.3-24b](https://huggingface.co/FlareRebellion/DarkHazard-v1.3-24b) and [ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B](https://huggingface.co/ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B).

I would personally like to thank sleepdeprived3 for the amazing finetunes, DoppelReflEx for giving me the dream of making a merge model someday, and the people in the BeaverAI Club for the great inspiration. Luv <3

### Addendum

I tested this model both personally and on the [UGI Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard). The reason the model doesn't follow your system prompt is that the model itself is bad at following orders (see the direct willingness score). This theory is also supported by my testing on Venice.ai chat, where I saw that the model was able to output its system prompt even when the content itself said otherwise.
It is very bad at following orders, but perfect at adhering. After blindly merging, I now have a clear goal for the next generation of this model: a good storyteller that understands personality and organs. I am doing internal research right now; as of writing, the most promising base is [TroyDoesAI/BlackSheep-24B](https://huggingface.co/TroyDoesAI/BlackSheep-24B).

Thank you again to everyone who helped!

- DontPlanToEnd for the test and leaderboard.
- mradermacher for the GGUFs.
- Artus for being a daddy (?).
- DarkHazard for the [BRAND NEW MODEL](https://huggingface.co/FlareRebellion/DarkHazard-v2.0-24b)? (Inspired by this repo... but I'm just a newbie.)
- YOU! For trying this model out. No, thank YOU!

## Recommended Usage

For roleplay mode, use [Mistral-V7-Tekken-T4](https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T4)! I personally use `nsigma` 1.5 and `temp` 4 with the rest neutralized. A bit silly, yeah, but add just a tiny bit of `min_p` (if you want) and then turn up XTC and DRY. Also try [Mistral-V7-Tekken-T5-XML](https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T5-XML); the system prompt is very nice.

For assistant mode, regular Mistral V7 with a `` at the very beginning and a blank system prompt should do the trick. (Thanks to Dolphin!)

## Quants

Thanks to Artus for providing the Q8 GGUF quants here:

Thanks to mradermacher for providing the static and imatrix quants here:

## Safety

erm... `:3`

## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) as the base.
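As a rough intuition for what DARE TIES does per tensor, here is a toy numpy sketch of the idea from the paper: sparsify each model's task vector (fine-tune minus base) by random drop-and-rescale (DARE), then resolve sign conflicts by majority election and average only the agreeing deltas (TIES). This is a simplified illustration, not mergekit's actual implementation, and the function names are made up for this sketch.

```python
import numpy as np

def dare(delta, density, rng):
    # DARE: randomly drop entries of the task vector, keeping each with
    # probability `density`, and rescale survivors by 1/density so the
    # expected value of the tensor is preserved.
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_ties(base, finetunes, weights, density, rng):
    # Toy DARE-TIES merge of several fine-tuned tensors back onto a base tensor.
    deltas = np.stack([
        w * dare(ft - base, density, rng)
        for ft, w in zip(finetunes, weights)
    ])
    # TIES sign election: per parameter, the sign of the summed weighted
    # deltas wins; deltas that disagree with it are discarded.
    elected = np.sign(deltas.sum(axis=0))
    agree = np.sign(deltas) == elected
    kept = np.where(agree, deltas, 0.0)
    # Average the surviving deltas (avoid division by zero where all dropped).
    counts = np.maximum(agree.sum(axis=0), 1)
    return base + kept.sum(axis=0) / counts
```

A handy sanity check: with `density=1.0` and a single fine-tune at weight 1.0, nothing is dropped and no signs conflict, so the merge reduces to that fine-tune exactly.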
### Models Merged

The following models were included in the merge:

* [ReadyArt/Omega-Darker_The-Final-Directive-24B](https://huggingface.co/ReadyArt/Omega-Darker_The-Final-Directive-24B)
* [aixonlab/Eurydice-24b-v3](https://huggingface.co/aixonlab/Eurydice-24b-v3)
* [PocketDoc/Dans-PersonalityEngine-V1.2.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b)
* [ReadyArt/Forgotten-Safeword-24B-v4.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-v4.0)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
tokenizer:
  source: union
chat_template: auto
models:
  - model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition # uncensored
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b # personality
    parameters:
      weight: 0.3
  - model: aixonlab/Eurydice-24b-v3 # creativity & storytelling
    parameters:
      weight: 0.3
  - model: ReadyArt/Omega-Darker_The-Final-Directive-24B # unhinged
    parameters:
      weight: 0.2
  - model: ReadyArt/Forgotten-Safeword-24B-v4.0 # lube
    parameters:
      weight: 0.2
parameters:
  density: 0.3
```