Model Card for Cerebrum-RP
This model is a derivative of Cerebrum-1.0-7b, fine-tuned on the PIPPA Alpaca dataset.
Model Description
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by: Habibullah Akbar
- Funded by: Creasoft ID
- Shared by: Habibullah Akbar
- Model type: Auto-regressive
- Language(s) (NLP): Mostly English
- License: Apache-2.0
- Finetuned from model: Cerebrum-1.0-7b
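Since the card itself only lists metadata, here is a minimal usage sketch. It assumes the model loads with the standard 🤗 transformers causal-LM classes under the repo id ChavyvAkvar/Cerebrum-RP, and that an Alpaca-style prompt (matching the PIPPA Alpaca fine-tuning data) is appropriate; the prompt text itself is a hypothetical example, not part of this card.

```python
# Minimal usage sketch (assumptions: repo id is correct and public,
# Alpaca-style prompting matches the fine-tuning format).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChavyvAkvar/Cerebrum-RP"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical Alpaca-style roleplay prompt; adjust to your use case.
prompt = (
    "### Instruction:\n"
    "Continue the roleplay as the character described below.\n\n"
    "### Input:\n"
    "Character: a seasoned starship engineer.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```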
Evaluation
Results from the Open LLM Leaderboard:
- AI2 Reasoning Challenge (25-shot, test set): 59.30 normalized accuracy
- HellaSwag (10-shot, validation set): 82.89 normalized accuracy
- MMLU (5-shot, test set): 62.50 accuracy
- TruthfulQA (0-shot, validation set): 42.97 mc2
- Winogrande (5-shot, validation set): 78.69 accuracy
- GSM8k (5-shot, test set): 38.89 accuracy
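The numbers above are as reported by the Open LLM Leaderboard. As a rough sketch of checking one of them locally, the snippet below assumes lm-evaluation-harness (lm_eval ≥ 0.4) is installed and uses its simple_evaluate entry point; the leaderboard's exact harness version and settings may differ, so local scores will not match exactly.

```python
# Rough local check of one leaderboard task with lm-evaluation-harness
# (assumes `pip install lm_eval`, version >= 0.4; leaderboard settings may differ).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ChavyvAkvar/Cerebrum-RP",
    tasks=["arc_challenge"],   # AI2 Reasoning Challenge
    num_fewshot=25,            # 25-shot, matching the leaderboard setting
    batch_size="auto",
)
print(results["results"]["arc_challenge"])
```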