V2.2 of Dungeonmaster (very good at following prompts, and quite unhinged). I decided to move away from the R1 base here because I feel its pros don't necessarily outweigh its cons. For the V2.X series I went with a custom uncensored base instead.
Shoutout to Thana Alt from the Beaver AI Discord, who thoroughly tested this model and got some interesting results (spoiler below).
(Thana's SillyTavern advanced formatting settings are included in the model's files.)
The sweet spot for the important sampler settings seems to be around:
Temp: 0.8
Min P: 0.02
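If you serve the model through an OpenAI-compatible backend, the settings map roughly as in the sketch below. Note that Min P is a backend extension (supported by e.g. vLLM and llama.cpp servers), not part of the core OpenAI API, and the endpoint URL and model name here are placeholders.

```python
# Minimal sketch: applying the recommended samplers through an
# OpenAI-compatible endpoint. base_url and model are placeholders;
# min_p is passed via extra_body to backends that support it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

response = client.chat.completions.create(
    model="Dungeonmaster-V2.2",  # placeholder model name
    messages=[{"role": "user", "content": "The dungeon door creaks open..."}],
    temperature=0.8,
    extra_body={"min_p": 0.02},  # ignored by backends without Min P support
)
print(response.choices[0].message.content)
```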
Dungeonmaster is designed specifically for creative roleplay with stakes and consequences, built from the following curated models:
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3 - A fine-tuned model specifically designed for this very application.
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4 - Another fine-tune trained on RP datasets.
- Sao10K/70B-L3.3-mhnnn-x1 - For some extra unhinged creativity.
- TheDrummer/Anubis-70B-v1 - Another excellent RP fine-tune to help balance things out.
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 - For its strong descriptive writing.
- SicariusSicariiStuff/Negative_LLAMA_70B - To assist with the darker undertones.
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1 - The secret sauce, a completely unhinged thinking model that turns things up to 11.
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the Linear DELLA merge method, with TareksLab/L3.3-TRP-BASE-80-70B as the base.
Models Merged
The following models were included in the merge:
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- TheDrummer/Anubis-70B-v1
- SicariusSicariiStuff/Negative_LLAMA_70B
- Sao10K/70B-L3.3-mhnnn-x1
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
    parameters:
      weight: 0.12
      density: 0.7
  - model: ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
    parameters:
      weight: 0.12
      density: 0.7
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
    parameters:
      weight: 0.12
      density: 0.7
  - model: TheDrummer/Anubis-70B-v1
    parameters:
      weight: 0.12
      density: 0.7
  - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
    parameters:
      weight: 0.13
      density: 0.7
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      weight: 0.13
      density: 0.7
  - model: Sao10K/70B-L3.3-mhnnn-x1
    parameters:
      weight: 0.13
      density: 0.7
  - model: TareksLab/L3.3-TRP-BASE-80-70B
    parameters:
      weight: 0.13
      density: 0.7
merge_method: della_linear
base_model: TareksLab/L3.3-TRP-BASE-80-70B
parameters:
  epsilon: 0.2
  lambda: 1.1
  normalize: false
  int8_mask: true
dtype: bfloat16
chat_template: llama3
tokenizer:
  source: base
```
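To reproduce the merge, something like the following should work with mergekit's Python API. This is a sketch: it assumes mergekit is installed, the config above is saved as config.yaml, and the exact MergeOptions fields may vary between mergekit versions.

```python
# Sketch of running the merge with mergekit's Python API; assumes
# `pip install mergekit` and the YAML above saved as config.yaml.
# The CLI equivalent is roughly `mergekit-yaml config.yaml ./output-model`.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./output-model",           # where the merged weights land
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # a GPU speeds up the merge considerably
        copy_tokenizer=True,             # honors `tokenizer: source: base`
        lazy_unpickle=True,              # keeps peak RAM usage down
    ),
)
```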