Uploaded model
- Developed by: 1024m
- License: apache-2.0
- Finetuned from model: unsloth/phi-4
This Phi-4 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
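Since this repository provides a LoRA adapter rather than full model weights, it is loaded on top of the base model. The following is a minimal usage sketch with transformers and peft; the prompt, dtype/device options, and generation settings are illustrative placeholders, not settings from the author.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model and adapter repository IDs as listed on this card.
BASE_ID = "unsloth/phi-4"
ADAPTER_ID = "1024m/PHI-4-Hindi-LoRA"

# Load the base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER_ID)
model.eval()

# Example Hindi prompt ("Explain what machine learning is.") -- placeholder only.
messages = [{"role": "user", "content": "मशीन लर्निंग क्या है, समझाइए।"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids=input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If a standalone checkpoint is preferred, peft's `merge_and_unload()` can fold the adapter weights into the base model before saving.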
Evaluation results
All scores are reported by the Open LLM Leaderboard on the respective test sets:
- MMLU-Pro (5-shot): 52.39 (accuracy)
- GPQA (0-shot): 39.77 (normalized accuracy)
- MuSR (0-shot): 49.07 (normalized accuracy)
- Big Bench Hard (3-shot): 66.97 (normalized accuracy)
- MATH Hard (4-shot): 23.11 (exact match)
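For reference, results like these are typically reproduced with EleutherAI's lm-evaluation-harness. The sketch below makes assumptions not stated by the author: the `leaderboard_*` task names (Open LLM Leaderboard v2 task group) and the `peft=` model argument depend on the installed harness version and should be verified locally.

```python
# Rough reproduction sketch using lm-evaluation-harness (pip install lm-eval).
# Task names and model_args are assumptions; check them against your harness version.
# Few-shot counts are built into the leaderboard task definitions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=unsloth/phi-4,peft=1024m/PHI-4-Hindi-LoRA,dtype=bfloat16",
    tasks=[
        "leaderboard_mmlu_pro",
        "leaderboard_gpqa",
        "leaderboard_musr",
        "leaderboard_bbh",
        "leaderboard_math_hard",
    ],
    batch_size="auto",
)
print(results["results"])
```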