Cthulhu 24B v1.3
Prepare to delve into the depths of language model fusion with Cthulhu, a monumental model merge based on Mistral Small v3.2 (2506) and Mistral Small v3.1 (2503). This ambitious project aims to synthesize the collective intelligence of cutting-edge finetunes of Mistral Small, creating a "supermerge" that transcends the capabilities of any single iteration.
This is an experimental update to the Cthulhu series. I have not yet compared 1.3 against previous versions, but it aims to be more RP-oriented.
```yaml
base_model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
merge_method: dare_ties
architecture: MistralForCausalLM
dtype: bfloat16
models:
  - model: aixonlab/Eurydice-24b-v3.5
    parameters:
      density: 0.4
      weight: 0.05
  - model: allura-forge/ms32-final-TEXTONLY
    parameters:
      density: 0.5
      weight: 0.1
  - model: Darkhn/M3.2-24B-Animus-V7.1
    parameters:
      density: 0.5
      weight: 0.1
  - model: Delta-Vector/Austral-24B-Winton
    parameters:
      density: 0.4
      weight: 0.05
  - model: Delta-Vector/MS3.2-Austral-Winton
    parameters:
      density: 0.5
      weight: 0.1
  - model: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
    parameters:
      density: 0.5
      weight: 0.1
  - model: PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
    parameters:
      density: 0.4
      weight: 0.05
  - model: ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1
    parameters:
      density: 0.5
      weight: 0.1
  - model: SicariusSicariiStuff/Impish_Magic_24B
    parameters:
      density: 0.5
      weight: 0.1
  - model: TheDrummer/Cydonia-24B-v4.1
    parameters:
      density: 0.5
      weight: 0.1
  - model: trashpanda-org/MS3.2-24B-Mullein-v2
    parameters:
      density: 0.4
      weight: 0.05
  - model: zerofata/MS3.2-PaintedFantasy-v2-24B
    parameters:
      density: 0.5
      weight: 0.1
tokenizer:
  source: union
  chat_template: auto
```
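For readers unfamiliar with `dare_ties`, here is a simplified numpy sketch of what the `density` and `weight` parameters above control. It is an illustrative approximation of the DARE-TIES idea (randomly prune each finetune's delta from the base, rescale survivors, resolve sign conflicts, then sum), not mergekit's actual implementation; the function names `dare_prune` and `dare_ties_merge` are hypothetical.

```python
import numpy as np

def dare_prune(delta, density, rng):
    # DARE step: randomly drop a (1 - density) fraction of the delta's
    # entries, then rescale survivors by 1/density so the expected
    # magnitude of the task vector is preserved.
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_ties_merge(base, finetunes, densities, weights, seed=0):
    # Each finetune contributes a pruned, weighted task vector
    # (its delta from the base model's parameters).
    rng = np.random.default_rng(seed)
    deltas = [dare_prune(ft - base, d, rng) * w
              for ft, d, w in zip(finetunes, densities, weights)]
    # TIES-style sign election: keep only components whose sign
    # agrees with the sign of the weighted delta sum.
    total = np.sum(deltas, axis=0)
    elected = np.sign(total)
    merged_delta = sum(np.where(np.sign(d) == elected, d, 0.0)
                       for d in deltas)
    return base + merged_delta
```

In the config above, models with `weight: 0.1` contribute twice as strongly as those with `weight: 0.05`, and `density: 0.5` means roughly half of each model's delta entries survive pruning.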
Model tree for Fentible/Cthulhu-24B-v1.3