# ZEUS-8B-V28

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with unsloth/Meta-Llama-3.1-8B-Instruct as the base model.
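
In DARE TIES, each contributing model is reduced to its delta from the base; DARE randomly drops a fraction of each delta's entries (keeping roughly `density` of them) and rescales the survivors, and TIES-style sign election then resolves conflicts before the weighted deltas are added back onto the base. The following is a minimal illustration of the drop-and-rescale step only, as a sketch; the function name is ours and this is not mergekit's actual implementation:

```python
import torch

def dare_drop_and_rescale(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep ~`density` of the delta's entries at random and rescale by
    1/density, so the sparsified delta matches the original in expectation.
    Illustrative sketch only, not mergekit's implementation."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# Toy example using the density/weight given for Llama-3.1-Storm-8B
# in the configuration below.
base = torch.randn(16, 16)
finetuned = base + 0.01 * torch.randn(16, 16)
sparse_delta = dare_drop_and_rescale(finetuned - base, density=0.94)
merged = base + 0.35 * sparse_delta  # weight: 0.35
```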

### Models Merged

The following models were included in the merge:

* Skywork/Skywork-o1-Open-Llama-3.1-8B
* FreedomIntelligence/HuatuoGPT-o1-8B
* unsloth/Llama-3.1-Storm-8B
* arcee-ai/Llama-3.1-SuperNova-Lite
* VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
* Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2

### Configuration

The following YAML configuration was used to produce this model. It defines a two-stage merge: the first document SLERPs (t = 0.5) Skywork/Skywork-o1-Open-Llama-3.1-8B and FreedomIntelligence/HuatuoGPT-o1-8B into an intermediate model named `strawberry-patch`, which the second document then folds into the DARE TIES merge alongside the other models:

```yaml
base_model: Skywork/Skywork-o1-Open-Llama-3.1-8B
dtype: bfloat16
merge_method: slerp
name: strawberry-patch
parameters:
  t:
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 32]
    model: Skywork/Skywork-o1-Open-Llama-3.1-8B
  - layer_range: [0, 32]
    model: FreedomIntelligence/HuatuoGPT-o1-8B
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
  random_seed: 145.0
slices:
- sources:
  - layer_range: [0, 32]
    model: unsloth/Llama-3.1-Storm-8B
    parameters:
      density: 0.94
      weight: 0.35
  - layer_range: [0, 32]
    model: arcee-ai/Llama-3.1-SuperNova-Lite
    parameters:
      density: 0.92
      weight: 0.26
  - layer_range: [0, 32]
    model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
    parameters:
      density: 0.91
      weight:
      - filter: layers.21.
        value: 0.0
      - filter: layers.22.
        value: 0.0
      - filter: layers.23.
        value: 0.0
      - filter: layers.24.
        value: 0.0
      - filter: layers.25.
        value: 0.0
      - filter: layers.26.
        value: 0.0
      - filter: layers.27.
        value: 0.0
      - filter: layers.28.
        value: 0.0
      - value: 0.2
  - layer_range: [0, 32]
    model: strawberry-patch
    parameters:
      density: 0.92
      weight:
      - filter: layers.21.
        value: 0.2
      - filter: layers.22.
        value: 0.2
      - filter: layers.23.
        value: 0.2
      - filter: layers.24.
        value: 0.2
      - filter: layers.25.
        value: 0.2
      - filter: layers.26.
        value: 0.2
      - filter: layers.27.
        value: 0.2
      - filter: layers.28.
        value: 0.2
      - value: 0.0
  - layer_range: [0, 32]
    model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      density: 0.93
      weight: 0.19
  - layer_range: [0, 32]
    model: unsloth/Meta-Llama-3.1-8B-Instruct
tokenizer:
  tokens:
    <|begin_of_text|>:
      force: true
      source: unsloth/Meta-Llama-3.1-8B-Instruct
    <|eot_id|>:
      force: true
      source: unsloth/Meta-Llama-3.1-8B-Instruct
    <|finetune_right_pad_id|>:
      force: true
      source: unsloth/Meta-Llama-3.1-8B-Instruct
```
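
Multi-document configs like this one, where the first document's `name:` defines an intermediate merge, are intended for mergekit's multi-merge workflow; depending on your mergekit version, something like `mergekit-multi config.yml ./zeus-8b-v28` should rebuild both stages. Once merged (or when pulling this repo from the Hub), a minimal inference sketch with transformers might look like the following; the prompt and generation settings are purely illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "T145/ZEUS-8B-V28"  # this repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)

# The merge forces the base model's Llama 3.1 special tokens, so the
# tokenizer's built-in chat template can be used as-is.
messages = [{"role": "user", "content": "In one sentence, what is a model merge?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the `tokenizer` section forces `<|begin_of_text|>`, `<|eot_id|>`, and `<|finetune_right_pad_id|>` from the base instruct model, the standard Llama 3.1 chat template applies unchanged.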

## Open LLM Leaderboard Evaluation Results

Detailed and summarized results can be found on the Open LLM Leaderboard.

| Metric              | Value (%) |
|---------------------|-----------|
| Average             | 26.12     |
| IFEval (0-Shot)     | 63.53     |
| BBH (3-Shot)        | 32.62     |
| MATH Lvl 5 (4-Shot) | 12.31     |
| GPQA (0-Shot)       | 7.16      |
| MuSR (0-Shot)       | 8.84      |
| MMLU-PRO (5-Shot)   | 32.25     |