InfinityKumon-2x7B

Another MoE merge, this time of Endevor/InfinityRP-v1-7B and grimjim/kukulemon-7B.

The reason? I like InfinityRP-v1-7B so much, and I wondered whether merging two great models into an MoE could improve it even further.

Prompt format:

Alpaca or ChatML
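For reference, both formats can be built as plain strings. This is a minimal sketch; the Alpaca preamble text and the ChatML system message are illustrative assumptions, not part of this card.

```python
# Illustrative prompt builders for the two supported formats.
# The preamble/system wording is an assumption - adjust to taste.

def alpaca_prompt(instruction: str) -> str:
    """Alpaca-style single-turn prompt."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

def chatml_prompt(system: str, user: str) -> str:
    """ChatML-style prompt with explicit role tags."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```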

Available formats: FP16 and GGUF.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 71.52 |
| AI2 Reasoning Challenge (25-shot) | 69.62 |
| HellaSwag (10-shot) | 87.09 |
| MMLU (5-shot) | 64.97 |
| TruthfulQA (0-shot) | 61.99 |
| Winogrande (5-shot) | 81.93 |
| GSM8K (5-shot) | 63.53 |
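The leaderboard average is the unweighted mean of the six benchmark scores, which can be verified directly:

```python
# Recompute the leaderboard "Avg." as the unweighted mean of the six scores.
scores = {
    "ARC (25-shot)": 69.62,
    "HellaSwag (10-shot)": 87.09,
    "MMLU (5-shot)": 64.97,
    "TruthfulQA (0-shot)": 61.99,
    "Winogrande (5-shot)": 81.93,
    "GSM8K (5-shot)": 63.53,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 71.52
```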
Model size: 12.9B params (Safetensors, F16)

Model tree for R136a1/InfinityKumon-2x7B

