---
base_model:
- ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
library_name: transformers
tags:
- mergekit
- merge
- general-purpose
- roleplay
- storywriting
- chemistry
- biology
- code
- climate
- axolotl
- instruct
- chatml
license: apache-2.0
language:
- en
- ru
---
# DXP-Zero-V1.0-24b-Small-Instruct

Notice:
- The model might lack the necessary evil for twisty plots or dark adventures, but it makes up for it by producing coherent stories over long contexts. Perfect for romance, adventure, sci-fi, and even general-purpose use.

So I was browsing for a Mistral finetune and found this base [model](https://huggingface.co/ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf) by [ZeroAgency](https://huggingface.co/ZeroAgency), and oh boy... it was perfect! Here are a few notable improvements I observed.

Pros:
- Longer outputs for storytelling or roleplay.
- Dynamic output length: shorter prompts get shorter replies, and longer prompts get longer ones.
- Less repetitive (though this depends on your own prompt and settings).
- Tested up to 49,444 of 65,536 tokens with no degradation. If anything, it learns the context well, and that strongly shapes the output. (What I don't like: it picks up patterns from previous turns too quickly and sets them as the new standard.)

Tested genres:
- Romance/Bromance

Added note: I tested using my own i1-Q5-K-M quantization. Download the i1-GGUF [here](https://huggingface.co/h34v7/DXP-Zero-V1.0-24b-Small-Instruct-i1-GGUF).

## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf](https://huggingface.co/ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf) as the base.

### Models Merged

The following models were included in the merge:
* [PocketDoc/Dans-PersonalityEngine-V1.2.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b)
* [Gryphe/Pantheon-RP-1.8-24b-Small-3.1](https://huggingface.co/Gryphe/Pantheon-RP-1.8-24b-Small-3.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
    parameters:
      density: 0.7
      weight: 0.7
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
tokenizer:
  source: ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
```
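
If you want to reproduce the merge yourself, the configuration above can be passed directly to mergekit's `mergekit-yaml` entry point. A minimal sketch, assuming mergekit is installed and the config is saved as `dxp-zero.yaml` (a filename chosen here for illustration):

```bash
# Install mergekit, then run the merge.
# --cuda runs tensor operations on GPU; --lazy-unpickle lowers peak RAM
# while the three source models are loaded.
pip install mergekit
mergekit-yaml dxp-zero.yaml ./DXP-Zero-V1.0-24b-Small-Instruct \
  --cuda --lazy-unpickle
```

The merged weights land in the output directory, ready for quantization (e.g., the i1-Q5-K-M GGUF linked above).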