---
base_model:
- InferenceIllusionist/SorcererLM-22B
- TheDrummer/Cydonia-22B-v1.3
- unsloth/Mistral-Small-Instruct-2409
- Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
- anthracite-org/magnum-v4-22b
- ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
- spow12/ChatWaifu_v2.0_22B
- unsloth/Mistral-Small-Instruct-2409
- rAIfle/Acolyte-LORA
- byroneverson/Mistral-Small-Instruct-2409-abliterated
- Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
- allura-org/MS-Meadowlark-22B
- crestf411/MS-sunfall-v0.7.0
- TheDrummer/Cydonia-22B-v1.1
- TheDrummer/Cydonia-22B-v1.2
- unsloth/Mistral-Small-Instruct-2409
- Kaoeiri/Moingooistrial-22B-V1-Lora
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [unsloth/Mistral-Small-Instruct-2409](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) as the base model.
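
For intuition, here is a rough, single-tensor sketch of what DARE-TIES does: each model's delta against the base is randomly sparsified and rescaled (DARE), then only contributions that agree with the elected per-parameter sign are summed back onto the base (TIES). This is an illustration only, not mergekit's implementation; the `dare_ties_merge` helper and its arguments are invented for the example and correspond only loosely to the `weight`, `density`, and `lambda` fields in the configuration below.

```python
import torch

def dare_ties_merge(base: torch.Tensor,
                    finetuned: list[torch.Tensor],
                    weights: list[float],
                    densities: list[float],
                    lam: float = 1.0) -> torch.Tensor:
    """Simplified per-tensor DARE-TIES merge (illustrative only)."""
    deltas = []
    for ft, w, d in zip(finetuned, weights, densities):
        delta = ft - base                   # task vector relative to the base model
        keep = torch.rand_like(delta) < d   # DARE: keep each element with probability = density
        delta = torch.where(keep, delta / d, torch.zeros_like(delta))  # rescale survivors
        deltas.append(w * delta)            # apply the per-model weight
    stacked = torch.stack(deltas)
    sign = torch.sign(stacked.sum(dim=0))   # TIES: elect a sign for each parameter
    agree = torch.where(torch.sign(stacked) == sign, stacked, torch.zeros_like(stacked))
    return base + lam * agree.sum(dim=0)    # lambda scales the merged task vector
```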

### Models Merged

The following models were included in the merge:
* [InferenceIllusionist/SorcererLM-22B](https://huggingface.co/InferenceIllusionist/SorcererLM-22B)
* [TheDrummer/Cydonia-22B-v1.3](https://huggingface.co/TheDrummer/Cydonia-22B-v1.3)
* [Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B)
* [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b)
* [ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1)
* [spow12/ChatWaifu_v2.0_22B](https://huggingface.co/spow12/ChatWaifu_v2.0_22B)
* [unsloth/Mistral-Small-Instruct-2409](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) + [rAIfle/Acolyte-LORA](https://huggingface.co/rAIfle/Acolyte-LORA)
* [byroneverson/Mistral-Small-Instruct-2409-abliterated](https://huggingface.co/byroneverson/Mistral-Small-Instruct-2409-abliterated)
* [Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small](https://huggingface.co/Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small)
* [allura-org/MS-Meadowlark-22B](https://huggingface.co/allura-org/MS-Meadowlark-22B)
* [crestf411/MS-sunfall-v0.7.0](https://huggingface.co/crestf411/MS-sunfall-v0.7.0)
* [TheDrummer/Cydonia-22B-v1.1](https://huggingface.co/TheDrummer/Cydonia-22B-v1.1)
* [TheDrummer/Cydonia-22B-v1.2](https://huggingface.co/TheDrummer/Cydonia-22B-v1.2)
* [unsloth/Mistral-Small-Instruct-2409](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) + [Kaoeiri/Moingooistrial-22B-V1-Lora](https://huggingface.co/Kaoeiri/Moingooistrial-22B-V1-Lora)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 1.0    # Primary model for human-like writing
      density: 0.88  # Strong foundation with room for blending
  - model: TheDrummer/Cydonia-22B-v1.3
    parameters:
      weight: 0.27   # Balanced for creative flair
      density: 0.71  # Subtle creativity with strong coherence
  - model: TheDrummer/Cydonia-22B-v1.2
    parameters:
      weight: 0.17   # Light creativity for nuanced diversity
      density: 0.68  # Maintains alignment with overarching structure
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.2    # Adds depth to accurate and specific nuances
      density: 0.69  # Smoothly integrates details without overwhelming
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
    parameters:
      weight: 0.3    # Refined for deeper storytelling and RP focus
      density: 0.78  # Supports narrative without clashing
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      weight: 0.29   # Balances creativity with structured fluency
      density: 0.72  # Enhances clarity and descriptive depth
  - model: spow12/ChatWaifu_v2.0_22B
    parameters:
      weight: 0.27   # Maintains anime-style RP and conversational tone
      density: 0.7   # Intact for balanced integration
  - model: Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
    parameters:
      weight: 0.19   # Specialized for Japanese contexts
      density: 0.58  # Ensures contextual accuracy without overlap
  - model: crestf411/MS-sunfall-v0.7.0
    parameters:
      weight: 0.26   # Enhanced for impactful dramatic storytelling
      density: 0.74  # Balances spicy narratives with other models
  - model: unsloth/Mistral-Small-Instruct-2409+rAIfle/Acolyte-LORA
    parameters:
      weight: 0.25   # Balanced for varied structured content
      density: 0.71  # Ensures seamless alignment with base
  - model: InferenceIllusionist/SorcererLM-22B
    parameters:
      weight: 0.22   # Stylized refinement for cohesive outputs
      density: 0.73  # Keeps stylistic diversity in balance
  - model: unsloth/Mistral-Small-Instruct-2409+Kaoeiri/Moingooistrial-22B-V1-Lora
    parameters:
      weight: 0.24   # Mythical storytelling integration
      density: 0.71  # Balanced for smooth interaction
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.1    # Light touch to avoid excessive RP influence
      density: 0.64  # Fine-tuned for roleplay-specific elements
  - model: byroneverson/Mistral-Small-Instruct-2409-abliterated
    parameters:
      weight: 0.16   # Adds raw and unfiltered context nuance
      density: 0.69  # Supports diverse content without overpowering

merge_method: dare_ties  # Best for diverse and complex model blending
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  density: 0.85  # Overall density ensures logical and creative balance
  epsilon: 0.08  # Reduced for smoother model interpolation
  lambda: 1.23   # Balanced scaling for crisp and coherent outputs
dtype: bfloat16
```
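
To reproduce the merge, this configuration can be fed to mergekit (for example via its `mergekit-yaml` command). The snippet below is a minimal sketch of loading and prompting the resulting model with transformers; the local path is a placeholder, and the sampling settings are illustrative rather than recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./merged-model"  # placeholder: path or Hub id of this merge

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set in a rain-soaked harbor town."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```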