Anthesis_7B is a merge of the following models:

- ResplendentAI/Paradigm_Shift_7B
- rmdhirr/Foxglove_7B

The Alpaca prompt format works best, but the Mistral format gives good outputs as well.
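For reference, a minimal sketch of the recommended Alpaca prompt format; the helper name and instruction text are illustrative, not part of this model card.

```python
# Build a prompt in the standard Alpaca format, which this card
# recommends for Anthesis_7B. The function name is a placeholder.
def alpaca_prompt(instruction: str, inp: str = "") -> str:
    header = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
    )
    if inp:
        return (
            header
            + f"### Instruction:\n{instruction}\n\n"
            + f"### Input:\n{inp}\n\n### Response:\n"
        )
    return header + f"### Instruction:\n{instruction}\n\n### Response:\n"

print(alpaca_prompt("Write a haiku about model merging."))
```

The generated text that follows the final `### Response:` marker is the model's answer.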
```yaml
models:
  - model: ResplendentAI/Paradigm_Shift_7B
    # no parameters necessary for base model
  - model: rmdhirr/Foxglove_7B
    parameters:
      density: 0.60
      weight: 0.40
merge_method: dare_ties
base_model: ResplendentAI/Paradigm_Shift_7B
parameters:
  int8_mask: true
dtype: bfloat16
```
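The configuration above is a mergekit recipe; a minimal sketch of reproducing the merge, assuming mergekit is installed (`pip install mergekit`). The output directory name is illustrative.

```python
# Save the DARE-TIES merge recipe from this card to disk, then run it
# with mergekit's CLI. The "./Anthesis_7B" output path is a placeholder.
MERGE_CONFIG = """\
models:
  - model: ResplendentAI/Paradigm_Shift_7B
    # no parameters necessary for base model
  - model: rmdhirr/Foxglove_7B
    parameters:
      density: 0.60
      weight: 0.40
merge_method: dare_ties
base_model: ResplendentAI/Paradigm_Shift_7B
parameters:
  int8_mask: true
dtype: bfloat16
"""

with open("config.yaml", "w") as f:
    f.write(MERGE_CONFIG)

# Then, from the shell:
#   mergekit-yaml config.yaml ./Anthesis_7B --cuda
```

With `dare_ties`, the non-base model's deltas are randomly dropped at the given `density` and rescaled before being added back at the given `weight`, so Paradigm_Shift_7B remains the dominant parent here.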
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 67.97 |
| AI2 Reasoning Challenge (25-Shot) | 69.03 |
| HellaSwag (10-Shot) | 86.20 |
| MMLU (5-Shot) | 62.06 |
| TruthfulQA (0-shot) | 68.65 |
| Winogrande (5-shot) | 78.93 |
| GSM8k (5-shot) | 42.99 |