The Bible Model
This model has been fit to the Bible (King James Version). It is very good at returning exact verses and chapters, and when the context is long enough it can recall an entire chapter.
I trained this model for creating timelines from the Bible, and to have the Bible inside the model so it can be asked questions directly. Hence I deliberately overfit the Bible to reduce hallucinations, which is why recall is so exact. This also opened up another task: recalling whole stories, articles, and books. The model is now well trained for this recall task: once a book is even semi-fit, it is fit, and can be recalled.
So now it can take book training! I also use this model in my merges to keep the books aligned.
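As a rough illustration, verse recall can be queried through the standard `transformers` generation API. This is a sketch only: the prompt wording and generation settings below are my assumptions, not a documented prompt format for this model, and running it requires downloading the full weights.

```python
def build_verse_prompt(book: str, chapter: int, verse: int) -> str:
    """Format a recall query for a specific verse (prompt style is an assumption)."""
    return f"Recite {book} {chapter}:{verse} from the King James Bible."


def recall_verse(prompt: str,
                 model_id: str = "LeroyDyer/_Spydaz_Web_AI_BIBLE_002") -> str:
    """Generate the model's recall of a verse. Requires GPU + model weights."""
    # Imports deferred so the helper above stays usable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    # Greedy decoding: for verbatim recall we want the most likely continuation.
    out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    # Strip the prompt tokens and return only the generated continuation.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:],
                      skip_special_tokens=True)
```

Usage would look like `recall_verse(build_verse_prompt("John", 3, 16))`; because the model was overfit on the KJV text, the continuation should reproduce the verse verbatim.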
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model: LeroyDyer/_Spydaz_Web_AI_ChatQA_001_UFT
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 6.86 |
| IFEval (0-Shot) | 21.95 |
| BBH (3-Shot) | 6.35 |
| MATH Lvl 5 (4-Shot) | 1.74 |
| GPQA (0-shot) | 4.59 |
| MuSR (0-shot) | 2.45 |
| MMLU-PRO (5-shot) | 4.09 |
Model tree for LeroyDyer/_Spydaz_Web_AI_BIBLE_002: no tree can be built, because the listed base model loops back to this model itself.
Evaluation results
- IFEval (0-Shot), strict accuracy: 21.95 (Open LLM Leaderboard)
- BBH (3-Shot), normalized accuracy: 6.35 (Open LLM Leaderboard)
- MATH Lvl 5 (4-Shot), exact match: 1.74 (Open LLM Leaderboard)
- GPQA (0-shot), acc_norm: 4.59 (Open LLM Leaderboard)
- MuSR (0-shot), acc_norm: 2.45 (Open LLM Leaderboard)
- MMLU-PRO (5-shot, test set), accuracy: 4.09 (Open LLM Leaderboard)