---
license: other
language:
- en
---

An experiment with gradient merges using [the following script](https://github.com/TehVenomm/LM_Transformers_BlockMerge), with [Chronos](https://huggingface.co/elinas/chronos-13b) as its primary model, augmented by [Hermes](https://huggingface.co/NousResearch/Nous-Hermes-13b) and [Wizard-Vicuna Uncensored](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-HF).

Chronos is a wonderfully verbose model, though it definitely seems to lack in the logic department. Hermes and Wizard-Vicuna have been merged in gradually, primarily into the higher layers (10+), in an attempt to rectify some of this behaviour. I'd say the end product is about 65% Chronos, with 15% Hermes and 20% Wizard-Vicuna blended in at gradually increasing ratios. The result feels surprisingly robust, though I'll let you be the final judge of that!

This model primarily uses Alpaca formatting, so for optimal performance, use:
```
### Instruction:
Your instruction or question here.

### Response:
```

For chat/roleplay conversations, I suggest the following template instead:
```
You are , a friendly and supportive AI language model. We discuss all subjects freely and without bias or restrictions. I am , the user interacting with you through a chat conversation. Start with greeting me.

### Instruction:
Write 's next reply in a chat between  and . Write a single reply only.

### Response:
```
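The layer-wise blending described above can be sketched as follows. This is a minimal illustration of a gradient merge, not the actual BlockMerge script: the function names, the starting layer, and the ratio schedule are assumptions chosen to mirror the "gradually increasing amounts in layers 10+" description.

```python
def gradient_ratios(n_layers, start_layer=10, max_ratio=0.35):
    """Per-layer weight given to the secondary model: zero below
    start_layer, then rising linearly to max_ratio at the top layer.
    (start_layer and max_ratio are illustrative values.)"""
    ratios = []
    for i in range(n_layers):
        if i < start_layer:
            ratios.append(0.0)
        else:
            frac = (i - start_layer) / max(1, n_layers - 1 - start_layer)
            ratios.append(frac * max_ratio)
    return ratios

def merge_layer(primary_weights, secondary_weights, ratio):
    """Element-wise weighted average of two same-shaped weight lists."""
    return [(1 - ratio) * p + ratio * s
            for p, s in zip(primary_weights, secondary_weights)]

# A 13B LLaMA model has 40 transformer layers: the lower ten stay
# pure Chronos, and the secondary model's share grows toward the top.
schedule = gradient_ratios(40)
print(schedule[0], schedule[10], schedule[39])
```

The same schedule can be applied twice (once per secondary model) to arrive at an overall mix in the ballpark of the 65/15/20 split mentioned above.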