Merged Model

This is a merge of pre-trained language models created using mergekit.

[HomerCreativeAnvita logo]

This model is currently ranked #3 on the Open LLM Leaderboard among models up to 8B parameters and #5 among models up to 13B parameters!

Merge Details

Merge Method

This model was merged using the SLERP merge method.
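SLERP (spherical linear interpolation) blends the two checkpoints along the arc between their weight vectors rather than along a straight line, which roughly preserves the norm of the interpolated tensors. The following is a minimal sketch of the idea, assuming the standard SLERP formulation; mergekit's internal implementation additionally handles details such as per-filter t schedules and edge cases.

import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between tensors a and b with mixing weight t."""
    a_dir = a.ravel() / (np.linalg.norm(a) + eps)   # unit direction of tensor a
    b_dir = b.ravel() / (np.linalg.norm(b) + eps)   # unit direction of tensor b
    dot = float(np.clip(np.dot(a_dir, b_dir), -1.0, 1.0))
    theta = np.arccos(dot)                          # angle between the two weight vectors
    if theta < 1e-4:                                # nearly parallel: fall back to plain LERP
        return (1.0 - t) * a + t * b
    coeff_a = np.sin((1.0 - t) * theta) / np.sin(theta)
    coeff_b = np.sin(t * theta) / np.sin(theta)
    return coeff_a * a + coeff_b * b                # t=0 returns a, t=1 returns b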

Models Merged

The following models were included in the merge:

- ZeroXClem/Qwen2.5-7B-HomerCreative-Mix (base model)
- ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix

Configuration

The following YAML configuration was used to produce this model:

base_model: ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - filter: self_attn
    value: [0.0, 0.5, 0.3, 0.7, 1.0]
  - filter: mlp
    value: [1.0, 0.5, 0.7, 0.3, 0.0]
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 28]
    model: ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
  - layer_range: [0, 28]
    model: ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix
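Here t is the SLERP mixing weight (roughly, 0 keeps the base model's tensor and 1 takes the other model's): the bracketed lists define a gradient of t across the 28-layer range for self_attn and mlp tensors, while the final value of 0.5 applies to everything else. As a sketch of how this merge could be reproduced, the snippet below assumes mergekit's Python entry points (MergeConfiguration, MergeOptions, run_merge) as shown in the mergekit README; the simpler route is the CLI, e.g. `mergekit-yaml config.yaml ./merged-model`. The output path is illustrative.

import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration above, saved to disk as config.yaml.
with open("config.yaml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./HomerCreativeAnvita-Mix-Qw7B",  # hypothetical output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is available
        copy_tokenizer=True,             # carry the base model's tokenizer over
        lazy_unpickle=True,              # stream weights to limit RAM usage
    ),
)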

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

Metric               Value
Avg.                 34.62
IFEval (0-Shot)      78.08
BBH (3-Shot)         36.98
MATH Lvl 5 (4-Shot)  31.04
GPQA (0-shot)         8.61
MuSR (0-shot)        14.73
MMLU-PRO (5-shot)    38.28
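
To try the merged model locally, here is a minimal inference sketch with transformers, assuming the standard Qwen2.5 chat template ships with the merged tokenizer; the prompt and generation settings are illustrative, not tuned recommendations.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "suayptalha/HomerCreativeAnvita-Mix-Qw7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a short, whimsical story about a robot barista."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))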

Buy Me A Coffee
