Quants from Bartowski <3:
- GGUF: https://huggingface.co/bartowski/Hathor-L3-8B-v.02-GGUF
- exl2: https://huggingface.co/bartowski/Hathor-L3-8B-v.02-exl2
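For the GGUF quants, a minimal llama-cpp-python sketch; the quant filename pattern and sampling values below are assumptions for illustration, not the card's recommendations:

```python
# Minimal llama-cpp-python sketch for running a GGUF quant.
# Assumptions: llama-cpp-python and huggingface_hub are installed; the
# Q4_K_M filename pattern is a guess at one of bartowski's quant files.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bartowski/Hathor-L3-8B-v.02-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; pick any file in the repo
    n_ctx=8192,               # Llama 3 8B context window
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Hathor, a helpful roleplay assistant."},  # example prompt
        {"role": "user", "content": "Introduce yourself in two sentences."},
    ],
    max_tokens=128,
    temperature=0.8,          # illustrative sampling value
)
print(out["choices"][0]["message"]["content"])
```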
Notes: Hathor is trained for 3 epochs on top of Llama 3 8B Instruct, using an expanded mix of private data, synthetic Opus instructions, light/classical novel data, and roleplaying chat pairs.
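A minimal transformers usage sketch, assuming the standard Llama 3 Instruct chat template bundled with the model; the sampling values are illustrative, not the author's tuned preset:

```python
# Minimal transformers sketch; assumes a GPU with enough VRAM for bf16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nitral-AI/Hathor_Stable-v0.2-L3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Hathor, a helpful roleplay assistant."},  # example prompt
    {"role": "user", "content": "Introduce yourself in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,  # illustrative sampling values, not a recommended preset
    top_p=0.95,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```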
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 25.70 |
| IFEval (0-shot) | 71.75 |
| BBH (3-shot) | 32.83 |
| MATH Lvl 5 (4-shot) | 9.21 |
| GPQA (0-shot) | 4.92 |
| MuSR (0-shot) | 5.56 |
| MMLU-PRO (5-shot) | 29.96 |
Evaluation results (Open LLM Leaderboard):
- IFEval (0-shot), strict accuracy: 71.750
- BBH (3-shot), normalized accuracy: 32.830
- MATH Lvl 5 (4-shot), exact match: 9.210
- GPQA (0-shot), acc_norm: 4.920
- MuSR (0-shot), acc_norm: 5.560
- MMLU-PRO (5-shot, test set), accuracy: 29.960