MERGE2

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the SCE merge method, with Mawdistical/Vulpine-Seduction-70B as the base.
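
Roughly speaking, SCE builds task vectors (each source model's difference from the base), keeps only a fraction of their elements per model (the select_topk values in the configuration below), and fuses the result back into the base. The snippet below is a minimal, hypothetical sketch of that selection step, assuming simple per-model magnitude-based top-k; the actual mergekit SCE implementation selects by cross-model variance, computes fusion coefficients, and resolves sign conflicts, so this is illustrative only.

import torch

# Hypothetical, simplified sketch of select_topk-style sparsification of task vectors.
# Not mergekit's actual SCE code: real SCE selects by cross-model variance, computes
# per-model fusion coefficients, and erases minority-sign elements.
def sce_topk_sketch(base: torch.Tensor, finetuned: list, topk_fracs: list) -> torch.Tensor:
    deltas = []
    for weights, frac in zip(finetuned, topk_fracs):
        delta = weights - base                              # task vector relative to the base
        k = max(1, int(frac * delta.numel()))               # number of entries to keep
        cutoff = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        deltas.append(torch.where(delta.abs() >= cutoff, delta, torch.zeros_like(delta)))
    return base + torch.stack(deltas).mean(dim=0)           # fuse sparsified task vectors

# Toy usage: random 4x4 tensors standing in for a single weight matrix.
base = torch.randn(4, 4)
tuned = [base + 0.1 * torch.randn(4, 4) for _ in range(3)]
merged = sce_topk_sketch(base, tuned, [0.70, 0.55, 0.60])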

Models Merged

The following models were included in the merge:

- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- Sao10K/L3-70B-Euryale-v2.1
- ReadyArt/The-Omega-Directive-L-70B-v1.0
- SicariusSicariiStuff/Negative_LLAMA_70B

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: Mawdistical/Vulpine-Seduction-70B
    parameters:
      select_topk: 0.70
  - model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
    parameters:
      select_topk: 0.55
  - model: Sao10K/L3-70B-Euryale-v2.1
    parameters:
      select_topk: 0.60
  - model: ReadyArt/The-Omega-Directive-L-70B-v1.0
    parameters:
      select_topk: 0.50
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      select_topk: 0.65
base_model: Mawdistical/Vulpine-Seduction-70B
merge_method: sce
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: Mawdistical/Vulpine-Seduction-70B
  pad_to_multiple_of: 8
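
Because the merge is exported in bfloat16 with a Llama 3 chat template and a tokenizer copied from the base model, it can be loaded like any other Llama-3-style causal LM. Below is a minimal usage sketch with Hugging Face transformers; it assumes the repository id shown for this merge (TareksLab/DarkDesires-LLaMa-70B) and that accelerate is installed for device_map="auto". Adjust to your own setup.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TareksLab/DarkDesires-LLaMa-70B"  # repository name for this merge

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,   # matches out_dtype: bfloat16 above
    device_map="auto",            # requires accelerate; shards across available GPUs
)

# chat_template: llama3 means the tokenizer ships with a Llama 3 chat template.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))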