
This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the DARE TIES merge method, with unsloth/Mistral-Small-Instruct-2409 as the base model.
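The core of DARE is a drop-and-rescale step applied to each model's "task vector" (its parameter delta from the base model): a random fraction of the delta's entries is zeroed out according to the `density` parameter, and the survivors are rescaled by `1/density` so the expected contribution is preserved before TIES-style sign election and merging. The following is a minimal illustrative sketch of just that step, using toy lists and a hypothetical `dare_sparsify` helper (this is not mergekit's actual API):

```python
import random

def dare_sparsify(task_vector, density, seed=0):
    """Keep roughly a `density` fraction of task-vector entries at random,
    rescaling survivors by 1/density so the expected sum is preserved."""
    rng = random.Random(seed)
    return [v / density if rng.random() < density else 0.0
            for v in task_vector]

# Toy example: base weights and fine-tuned weights (illustrative values only).
base = [1.0, 2.0, 3.0, 4.0]
tuned = [1.5, 1.8, 3.4, 4.2]

# Task vector = fine-tuned minus base.
delta = [t - b for t, b in zip(tuned, base)]

# Drop-and-rescale at density 0.88 (as used for the primary model above).
sparse = dare_sparsify(delta, density=0.88)

# Re-applying the sparsified delta to the base approximates the merge's
# per-model contribution before weighting and sign election.
merged = [b + d for b, d in zip(base, sparse)]
```

In the real method this happens per-tensor across billions of parameters, and the surviving deltas from all models are then combined with the per-model `weight` values and the global `lambda` scaling.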

Models Merged

The following models were included in the merge:

- anthracite-org/magnum-v4-22b
- TheDrummer/Cydonia-22B-v1.3
- TheDrummer/Cydonia-22B-v1.2
- TheDrummer/Cydonia-22B-v1.1
- Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
- allura-org/MS-Meadowlark-22B
- spow12/ChatWaifu_v2.0_22B
- Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
- crestf411/MS-sunfall-v0.7.0
- unsloth/Mistral-Small-Instruct-2409 + rAIfle/Acolyte-LORA
- InferenceIllusionist/SorcererLM-22B
- unsloth/Mistral-Small-Instruct-2409 + Kaoeiri/Moingooistrial-22B-V1-Lora
- ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
- byroneverson/Mistral-Small-Instruct-2409-abliterated

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 1.0         # Primary model for human-like writing
      density: 0.88       # Solid foundation for clear, balanced text generation
  - model: TheDrummer/Cydonia-22B-v1.3
    parameters:
      weight: 0.26        # Slightly reduced weight for nuanced creativity
      density: 0.7        # Maintains subtle creative influence
  - model: TheDrummer/Cydonia-22B-v1.2
    parameters:
      weight: 0.16        # Adjusted for balanced creativity without interference
      density: 0.68       # Harmonized with other storytelling contributions
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.18        # Refined for precision in roleplay and nuanced content
      density: 0.68       # Ensures stability without overwhelming integration
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
    parameters:
      weight: 0.28        # Balanced for storytelling depth without dominance
      density: 0.77       # Smooth integration for narrative-driven content
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      weight: 0.3         # Retains balanced creativity and descriptive clarity
      density: 0.72       # Enhances fluency and narrative cohesion
  - model: spow12/ChatWaifu_v2.0_22B
    parameters:
      weight: 0.27        # Maintains anime-style RP and conversational tone
      density: 0.7        # Preserved for compatibility with other models
  - model: Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
    parameters:
      weight: 0.2         # Specialized for Japanese linguistic contexts
      density: 0.58       # Fine-tuned for focused coherence
  - model: crestf411/MS-sunfall-v0.7.0
    parameters:
      weight: 0.25        # Subtle tone for dramatic storytelling
      density: 0.74       # Balanced for integration with other narrative styles
  - model: unsloth/Mistral-Small-Instruct-2409+rAIfle/Acolyte-LORA
    parameters:
      weight: 0.24        # Subtle addition for structured content variation
      density: 0.7        # Aligns seamlessly with the overall blend
  - model: InferenceIllusionist/SorcererLM-22B
    parameters:
      weight: 0.23        # Provides stylistic coherence
      density: 0.74       # Supports expressive and balanced outputs
  - model: unsloth/Mistral-Small-Instruct-2409+Kaoeiri/Moingooistrial-22B-V1-Lora
    parameters:
      weight: 0.26        # Mythical and monster storytelling
      density: 0.72       # Balanced for integration with core models
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.12        # Light influence, kept low so roleplay traits don't dominate
      density: 0.65       # Keeps roleplay-heavy elements in check
  - model: byroneverson/Mistral-Small-Instruct-2409-abliterated
    parameters:
      weight: 0.15        # Provides raw and unfiltered context
      density: 0.68       # Harmonizes with primary base model

merge_method: dare_ties  # Optimal for diverse and complex model blending
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  density: 0.85          # Overall density ensures logical and creative balance
  epsilon: 0.09          # Small step size for smooth blending
  lambda: 1.22           # Adjusted scaling for refined sharpness and coherence
dtype: bfloat16
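Assuming the YAML above is saved as `config.yaml`, the merge can typically be reproduced with mergekit's command-line tool (the `mergekit-yaml` entry point is from the mergekit README; exact flags may vary by version):

```shell
# Install mergekit (version and extras may differ from what was used here)
pip install mergekit

# Run the merge described by config.yaml, writing the merged model
# to ./merged-model; --cuda performs the merge on GPU if available.
mergekit-yaml config.yaml ./merged-model --cuda
```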
Model size: 22.2B params (Safetensors, BF16)

Model tree: Kaoeiri/MS-Magpantheonsel-lark-v4x1.6.2RP-Cydonia-vXXX-22B-7.2