The example uses a small dataset of 248 question-answer pairs from S-Group. While such a small dataset will likely lead to overfitting, it effectively showcases the fine-tuning process with Unsloth. This repository is intended as a demonstration of the fine-tuning method, not as a production-ready solution. Use repetition_penalty and no_repeat_ngram_size during generation to reduce repetitive output, as in the sketch below. Instructions and the dataset are available on GitHub.
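A minimal generation sketch using the Hugging Face transformers API. The model repo id and prompt are placeholders, and the repetition_penalty and no_repeat_ngram_size values are illustrative starting points, not tuned recommendations:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlconvexai/gemma-2-2b-it-sgroup"  # placeholder; substitute the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is S-Group?"}]  # example prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    repetition_penalty=1.2,   # penalize tokens that were already generated
    no_repeat_ngram_size=3,   # forbid any 3-gram from repeating verbatim
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```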
Uploaded model
- Developed by: mlconvexai
- License: Gemma
- Finetuned from model: google/gemma-2-2b-it
This Gemma 2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
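A rough sketch of what such an Unsloth + TRL setup looks like, assuming a dataset with a preformatted "text" column; the dataset file name and all hyperparameters below are illustrative assumptions, not the exact values used to train this model:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit for QLoRA-style fine-tuning on a small GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-2-2b-it",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
)

# Hypothetical file holding the 248 Q&A pairs, rendered into a "text" field.
dataset = load_dataset("json", data_files="qa_pairs.json", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```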