FusionNet_7Bx2_MoE_v0.1

A model fine-tuned on English data using the Mixture of Experts (MoE) method. This is an improved version of FusionNet_7Bx2_MoE_14B.

Model description

FusionNet_7Bx2_MoE_v0.1 is an experiment with the MoE method, which can significantly improve the performance of the original model. This FusionNet has 12.9B parameters and has been fine-tuned. Enjoy!
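
The model can be loaded like any other causal language model on the Hub. Below is a minimal usage sketch, assuming the standard transformers text-generation API (it is not an official example from the model author); loading in BF16 matches the stored tensor type.

```python
# Minimal usage sketch (assumes the standard transformers text-generation
# API; not an official example from the model author).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TomGrc/FusionNet_7Bx2_MoE_v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the stored BF16 tensor type
    device_map="auto",           # spread layers across available devices
)

prompt = "The Mixture of Experts method improves a language model by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```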

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 76.16 |
| AI2 Reasoning Challenge (25-shot) | 74.06 |
| HellaSwag (10-shot) | 88.90 |
| MMLU (5-shot) | 65.00 |
| TruthfulQA (0-shot) | 71.20 |
| Winogrande (5-shot) | 87.53 |
| GSM8K (5-shot) | 70.28 |
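
The scores above follow the Open LLM Leaderboard setup. As a sketch of how one of them could be re-run locally, assuming EleutherAI's lm-evaluation-harness (the leaderboard's evaluation backend) and its `simple_evaluate` API; exact numbers may vary slightly across harness versions:

```python
# Sketch: re-running the 25-shot ARC evaluation with lm-evaluation-harness.
# Assumes `pip install lm-eval`; task name and few-shot count follow the
# Open LLM Leaderboard configuration reported in the table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=TomGrc/FusionNet_7Bx2_MoE_v0.1,dtype=bfloat16",
    tasks=["arc_challenge"],  # AI2 Reasoning Challenge
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```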
