I FUCKED UP, THIS MODEL IS MEANT TO BE A BFLOAT16 MODEL, I'M CURRENTLY REDOING IT THE CORRECT WAY (look at the recipe, it ends in float16, I'm so dumb lmao). It SHOULD be even better. I saw the problem after finetuning it, something was off. It's usable, it ranks the best, but it's not even in the right float... KEK

The fixed model should be here: NeverSleep/Mistral-11B-OmniMix-bf16

Don't mind this one for the moment; I still need to finetune it for RP, it's just a test.

Description

This repo contains fp16 files of Mistral-11B-OmniMix.
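
For reference, a minimal loading sketch with transformers: the repo ID is this one, the dtype matches the fp16 files here, and everything else is a generic default rather than something this card prescribes.

```python
# Minimal sketch: load the fp16 weights of this repo with transformers.
# device_map="auto" is a generic default, not a setting this card prescribes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Undi95/Mistral-11B-OmniMix"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # this repo ships fp16 files (see the note above)
    device_map="auto",
)
```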

My goal for this model was only to make it score the highest possible with merging and layer toying, proving that:

  • Benchmarks are objective
  • You should try a model yourself instead of blindly going for the highest-rated one
  • Merge/layer toying CAN be used to make better models (maybe?)

Model used

  • Open-Orca/Mistral-7B-OpenOrca
  • akjindal53244/Mistral-7B-v0.1-Open-Platypus
  • CC-v1.1-7B (used from a local copy; see the recipes below)
  • Zephyr-7B (used from a local copy; see the recipes below)

Prompt template

The best one after further testing is this one:

```
<|system|>
Below is an instruction that describes a task. Write a response that appropriately completes the request.
<|user|>
{prompt}
<|assistant|>
```
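
A tiny helper that fills the recommended template above (the helper name is just for illustration, it is not part of the model):

```python
# Fill the recommended <|system|>/<|user|>/<|assistant|> template above.
# build_prompt is a hypothetical helper name, not part of the model.
def build_prompt(user_prompt: str) -> str:
    return (
        "<|system|>\n"
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n"
        "<|user|>\n"
        f"{user_prompt}\n"
        "<|assistant|>\n"
    )

print(build_prompt("Write a haiku about merged models."))
```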

But these work too:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

Or:

```
USER: <prompt>
ASSISTANT:
```

Or use any prompting system from one of the 4 source models; it should work.

The secret sauce

Mistral-11B-OpenOrcaPlatypus:

```yaml
slices:
  - sources:
    - model: Open-Orca/Mistral-7B-OpenOrca
      layer_range: [0, 24]
  - sources:
    - model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

Mistral-11B-CC-Zephyr:

```yaml
slices:
  - sources:
    - model: "/content/drive/MyDrive/CC-v1.1-7B-bf16"
      layer_range: [0, 24]
  - sources:
    - model: "/content/drive/MyDrive/Zephyr-7B"
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

Mistral-11B-OmniMix:

```yaml
slices:
  - sources:
      - model: Mistral-11B-OpenOrcaPlatypus
        layer_range: [0, 48]
      - model: Mistral-11B-CC-Zephyr
        layer_range: [0, 48]
merge_method: slerp
base_model: Undi95/Mistral-11B-OpenOrcaPlatypus
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```

I used mergekit for all the manipulations described here.
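
For context on the t values above: in a slerp merge, t sets how far each tensor moves from the base model (t = 0) toward the other model (t = 1), and, as I understand mergekit, a two-value list like [0.75, 0.25] grades t across the layer stack. Below is a minimal sketch of the interpolation itself, simplified and not mergekit's exact implementation:

```python
# Minimal sketch of spherical linear interpolation (slerp) between two
# weight tensors, the merge_method used above. Simplified illustration
# only; mergekit's real implementation also handles per-layer t gradients.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_f, b_f = a.flatten().float(), b.flatten().float()
    a_n = a_f / (a_f.norm() + eps)
    b_n = b_f / (b_f.norm() + eps)
    # Angle between the two tensors, treated as flat vectors.
    omega = torch.acos(a_n.dot(b_n).clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        mixed = (1.0 - t) * a_f + t * b_f
    else:
        mixed = (torch.sin((1.0 - t) * omega) / so) * a_f \
              + (torch.sin(t * omega) / so) * b_f
    return mixed.reshape(a.shape).to(a.dtype)
```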

Some scoring I did myself

This run was named "Mistral-11B-TestBench11"; keep that in mind while looking through this.

hf-causal-experimental (pretrained=/content/drive/MyDrive/Mistral-11B-Test), limit: None, provide_description: False, num_fewshot: 0, batch_size: 4
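
That header comes from EleutherAI's lm-evaluation-harness. For reference, a hedged sketch of an equivalent run through the harness's older Python API (the pretrained path and settings are taken from the header above; newer harness versions renamed parts of this API):

```python
# Sketch: reproducing the run above with lm-evaluation-harness's Python
# API as it existed in older (pre-0.4) releases; newer versions changed
# the model names and some arguments.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=/content/drive/MyDrive/Mistral-11B-Test",
    tasks=["arc_challenge", "arc_easy", "hellaswag", "piqa",
           "truthfulqa_mc", "winogrande"],
    num_fewshot=0,
    batch_size=4,
)
print(results["results"])
```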

| Task          | Version | Metric   | Value  | Stderr   |
|---------------|---------|----------|--------|----------|
| arc_challenge | 0       | acc      | 0.5597 | ± 0.0145 |
|               |         | acc_norm | 0.5819 | ± 0.0144 |
| arc_easy      | 0       | acc      | 0.8308 | ± 0.0077 |
|               |         | acc_norm | 0.8215 | ± 0.0079 |
| hellaswag     | 0       | acc      | 0.6371 | ± 0.0048 |
|               |         | acc_norm | 0.8213 | ± 0.0038 |
| piqa          | 0       | acc      | 0.8134 | ± 0.0091 |
|               |         | acc_norm | 0.8275 | ± 0.0088 |
| truthfulqa_mc | 1       | mc1      | 0.3990 | ± 0.0171 |
|               |         | mc2      | 0.5685 | ± 0.0155 |
| winogrande    | 0       | acc      | 0.7474 | ± 0.0122 |

This model seems to be the best out of my 3 latest tries.

You can find all the work I have done on this Pastebin.

Others

Special thanks to Sushi, to Henky for the machine he gave me for big tasks, and to Charles Goddard for his amazing tool.

If you want to support me, you can here.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|-------|
| Avg.                | 53.01 |
| ARC (25-shot)       | 64.42 |
| HellaSwag (10-shot) | 83.93 |
| MMLU (5-shot)       | 63.82 |
| TruthfulQA (0-shot) | 56.68 |
| Winogrande (5-shot) | 77.74 |
| GSM8K (5-shot)      | 14.94 |
| DROP (3-shot)       | 9.57  |