🧪 Gemma-2B-DolphinR1-TestV2 (Experimental Fine-Tune) 🧪

This is an experimental fine-tune of Google's Gemma-2B using the [Dolphin-R1 dataset](https://huggingface.co/datasets/cognitivecomputations/dolphin-r1). The goal is to enhance reasoning and chain-of-thought capabilities while keeping training efficient with LoRA (r=32) and 4-bit quantization.

🚨 Disclaimer: This model is very much a work in progress and is still being tested for performance, reliability, and generalization. Expect quirks, inconsistencies, and potential overfitting in responses.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6339a8648f27255b6b51180c/gMsWZ5sRzDiftZFZ0tFxA.png)

This fine-tune was made possible thanks to @unsloth. I am still very new to fine-tuning large language models, so this is more a showcase of my learning journey than a production-ready model. Remember, it is very experimental; I do not recommend downloading or testing it.
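
For context, below is a minimal sketch of the kind of LoRA + 4-bit setup described above, using Unsloth. Only the rank (r=32) and 4-bit loading come from this card; the base checkpoint name, target modules, sequence length, and other hyperparameters are assumed placeholders, not the exact recipe used for this model.

```python
# Minimal sketch: load Gemma-2B in 4-bit and attach a LoRA adapter with r=32 via Unsloth.
# Everything except r=32 and load_in_4bit=True is an assumption, not this model's actual config.
from unsloth import FastLanguageModel

# Load a 4-bit quantized base model (checkpoint name is a placeholder).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2b",  # assumed base checkpoint
    max_seq_length=2048,            # assumed sequence length
    load_in_4bit=True,              # 4-bit quantization, as stated above
)

# Wrap the base model with LoRA adapters; only r=32 is taken from this card.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,                                     # LoRA rank from this card
    target_modules=["q_proj", "k_proj",       # assumed attention/MLP targets
                    "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=32,                            # assumed scaling factor
    lora_dropout=0.0,                         # assumed
    bias="none",                              # assumed
)

# The resulting model/tokenizer pair can then be passed to a standard
# SFT-style trainer on the Dolphin-R1 dataset.
```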