# FusionNet_7Bx2_MoE_v0.1
A fine-tuned English-language model built with the MoE (Mixture of Experts) method; an improved version of FusionNet_7Bx2_MoE_14B.
## Model description
FusionNet_7Bx2_MoE_v0.1 is an experiment with the MoE method, which can significantly improve on the performance of the original model. FusionNet has 12.9B parameters, and this version is fine-tuned. Enjoy!
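As a minimal sketch of local inference, assuming the model loads through the standard `transformers` causal-LM interface (the repository ID is taken from this card; the dtype and device settings are illustrative, not prescribed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TomGrc/FusionNet_7Bx2_MoE_v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 12.9B parameters need roughly 26 GB at fp16
    device_map="auto",          # requires accelerate; spreads layers across devices
)

prompt = "The Mixture of Experts method works by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```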
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 76.16 |
| AI2 Reasoning Challenge (25-shot) | 74.06 |
| HellaSwag (10-shot)               | 88.90 |
| MMLU (5-shot)                     | 65.00 |
| TruthfulQA (0-shot)               | 71.20 |
| Winogrande (5-shot)               | 87.53 |
| GSM8k (5-shot)                    | 70.28 |
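For reference, the leaderboard Avg. is the unweighted mean of the six benchmark scores; the small sketch below reproduces it from the values in the table:

```python
# Verify that the reported Avg. is the unweighted mean of the six benchmarks.
scores = {
    "ARC (25-shot)": 74.06,
    "HellaSwag (10-shot)": 88.90,
    "MMLU (5-shot)": 65.00,
    "TruthfulQA (0-shot)": 71.20,
    "Winogrande (5-shot)": 87.53,
    "GSM8k (5-shot)": 70.28,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # 76.16, matching the leaderboard Avg.
```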
## Evaluation results (Open LLM Leaderboard)

- Normalized accuracy on the AI2 Reasoning Challenge (25-shot) test set: 74.06
- Normalized accuracy on the HellaSwag (10-shot) validation set: 88.90
- Accuracy on the MMLU (5-shot) test set: 65.00
- mc2 on the TruthfulQA (0-shot) validation set: 71.20
- Accuracy on the Winogrande (5-shot) validation set: 87.53
- Accuracy on the GSM8k (5-shot) test set: 70.28