# The-True-Abomination-24B

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Description

This model is a merge created with the same idea as Casual-Autopsy/L3-Super-Nova-RP-8B in mind: compatibility with as many SillyTavern features and extensions as possible, while still being able to handle itself in roleplay.

Reasoning isn't perfect, but it certainly helps boost the model's capability with legacy reasoning (status block/thinking box CoT), which I honestly prefer over modern reasoning; it's less immersion-breaking IMO. Setting max reasoning prompts to 3 or more and/or injecting a CoT formatting prompt is recommended.

### Merge Method

This model was merged using the SCE, DELLA, and CABS merge methods, with TheDrummer/Cydonia-24B-v2 as the base.

### Models Merged

The following models were included in the merge:

- ReadyArt/Forgotten-Safeword-24B-v4.0
- ReadyArt/Gaslit-Transgression-24B-v1.0
- ReadyArt/The-Omega-Directive-M-24B-v1.1
- ReadyArt/Omega-Darker_The-Final-Directive-24B
- Mawdistical/Mawdistic-NightLife-24b
- Undi95/MistralThinker-v1.1
- cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
- AlexBefest/CardProjector-24B-v3
- arcee-ai/Arcee-Blitz
- TroyDoesAI/BlackSheep-24B

### Configuration

The following YAML configurations were used to produce this model:

#### Gaslit-Safeword

```yaml
models:
  - model: TheDrummer/Cydonia-24B-v2

  - model: ReadyArt/Forgotten-Safeword-24B-v4.0
    parameters:
      weight: 0.4
      density: 0.35
      epsilon: 0.3
  - model: ReadyArt/Gaslit-Transgression-24B-v1.0
    parameters:
      weight: 0.4
      density: 0.35
      epsilon: 0.3
merge_method: della
base_model: TheDrummer/Cydonia-24B-v2
parameters:
  normalize: true
dtype: bfloat16
```
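To illustrate what the `della` stages above are doing conceptually: DELLA computes each fine-tune's "task vector" (its parameter deltas from the base), stochastically drops a fraction of each delta controlled by `density`, rescales the survivors, and adds the weighted result back onto the base. The sketch below is a toy over flat parameter lists, not mergekit's actual implementation; in particular, real DELLA ranks deltas by magnitude and uses `epsilon` to spread the drop probabilities, which this sketch omits. The function name and signature are illustrative only.

```python
import random

def della_merge(base, finetunes, weights, density, seed=0):
    """Toy DELLA-style delta merge over flat parameter lists.

    For each fine-tune: compute deltas (finetune - base), keep each
    delta with probability `density`, rescale kept deltas by 1/density
    to preserve their expected contribution, then add the weighted
    deltas onto the base parameters.
    """
    rng = random.Random(seed)
    merged = list(base)
    for finetune, weight in zip(finetunes, weights):
        for i, (b, f) in enumerate(zip(base, finetune)):
            if rng.random() < density:                   # keep with prob = density
                merged[i] += weight * (f - b) / density  # rescale survivors
    return merged
```

With `density: 0.35` as in the config, only about a third of each model's deltas survive per merge, which is what keeps the two ReadyArt fine-tunes from clobbering each other.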

#### Omega-Duo

```yaml
models:
  - model: TheDrummer/Cydonia-24B-v2

  - model: ReadyArt/The-Omega-Directive-M-24B-v1.1
    parameters:
      weight: 0.4
      density: 0.35
      epsilon: 0.3
  - model: ReadyArt/Omega-Darker_The-Final-Directive-24B
    parameters:
      weight: 0.4
      density: 0.35
      epsilon: 0.3
merge_method: della
base_model: TheDrummer/Cydonia-24B-v2
parameters:
  normalize: true
dtype: bfloat16
```

#### SCE-Abomination

```yaml
models:
  - model: TheDrummer/Cydonia-24B-v2

  - model: Mawdistical/Mawdistic-NightLife-24b
  - model: Gaslit-Safeword
  - model: Omega-Duo
merge_method: sce
base_model: TheDrummer/Cydonia-24B-v2
parameters:
  select_topk: 0.8
dtype: bfloat16
```
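The `select_topk` knob in the `sce` stages controls how much of the parameter space actually gets merged: SCE keeps only the positions whose deltas vary most across the candidate models and discards the rest. The sketch below shows just that variance-based selection step over flat lists; real SCE additionally calculates per-model fusion coefficients and erases sign-conflicting components, which this toy omits. The function name and flat-list interface are illustrative, not mergekit API.

```python
def sce_merge(base, candidates, select_topk):
    """Toy SCE-style merge: rank parameter positions by the variance of
    their deltas across candidate models, keep the top `select_topk`
    fraction, and average the surviving deltas onto the base."""
    deltas = [[c - b for c, b in zip(cand, base)] for cand in candidates]

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    variances = [var([d[i] for d in deltas]) for i in range(len(base))]
    k = max(1, int(select_topk * len(base)))
    keep = set(sorted(range(len(base)),
                      key=lambda i: variances[i], reverse=True)[:k])
    return [b + (sum(d[i] for d in deltas) / len(deltas) if i in keep else 0.0)
            for i, b in enumerate(base)]
```

So `select_topk: 0.8` here merges the 80% of positions where Mawdistic-NightLife, Gaslit-Safeword, and Omega-Duo disagree most, leaving the quietest 20% at the Cydonia base values.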

#### UNC-Reasoning

```yaml
models:
  - model: SCE-Abomination

  - model: Undi95/MistralThinker-v1.1
    parameters:
      weight: 0.6
      n_val: 16
      m_val: 32
  - model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
    parameters:
      weight: 0.4
      n_val: 11
      m_val: 33
merge_method: cabs
default_n_val: 8
default_m_val: 32
pruning_order:
  - Undi95/MistralThinker-v1.1
  - cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
base_model: SCE-Abomination
dtype: bfloat16
```
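The `cabs` stages use n:m sparsification with conflict avoidance: models are processed in `pruning_order`, and within each block of `m_val` positions a model keeps only its `n_val` largest-magnitude deltas among positions no earlier model has already claimed, so the sparse masks never overlap. The sketch below is a toy over flat delta lists under those assumptions; the function name and interface are illustrative, not mergekit's actual implementation.

```python
def cabs_merge(base, deltas_by_model, weights, n_val, m_val):
    """Toy CABS-style merge: models are pruned in order; per block of
    m_val positions, each model keeps its n_val largest-magnitude
    deltas among still-unclaimed positions, so no two models write
    to the same parameter."""
    merged = list(base)
    claimed = set()
    for deltas, weight in zip(deltas_by_model, weights):
        for start in range(0, len(base), m_val):
            block = [i for i in range(start, min(start + m_val, len(base)))
                     if i not in claimed]          # skip earlier models' picks
            block.sort(key=lambda i: abs(deltas[i]), reverse=True)
            for i in block[:n_val]:                # keep the n largest deltas
                claimed.add(i)
                merged[i] += weight * deltas[i]
    return merged
```

This is why `pruning_order` matters: MistralThinker picks its 16-of-32 positions first, and Dolphin-Venice can only sparsify what's left.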

#### INT-Multitasks

```yaml
models:
  - model: SCE-Abomination

  - model: AlexBefest/CardProjector-24B-v3
    parameters:
      weight: 0.6
      n_val: 16
      m_val: 32
  - model: arcee-ai/Arcee-Blitz
    parameters:
      weight: 0.4
      n_val: 11
      m_val: 33
merge_method: cabs
default_n_val: 8
default_m_val: 32
pruning_order:
  - AlexBefest/CardProjector-24B-v3
  - arcee-ai/Arcee-Blitz
base_model: SCE-Abomination
dtype: bfloat16
```

#### The-True-Abomination-24B

```yaml
models:
  # Pivot model
  - model: SCE-Abomination
  # Target models
  - model: TroyDoesAI/BlackSheep-24B
  - model: UNC-Reasoning
  - model: INT-Multitasks
merge_method: sce
base_model: SCE-Abomination
parameters:
  select_topk: 0.45
dtype: bfloat16
```