# 🧪 Gemma-2B-DolphinR1-TestV2 (Experimental Fine-Tune) 🧪
This is an experimental fine-tune of Google's Gemma-2B using the [Dolphin-R1 dataset](https://huggingface.co/datasets/cognitivecomputations/dolphin-r1). The goal is to enhance reasoning and chain-of-thought capabilities while maintaining efficiency with LoRA (r=32) and 4-bit quantization.
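To give a rough sense of why LoRA at r=32 keeps this fine-tune cheap, here is a minimal sketch of the parameter-count arithmetic. Instead of updating a full `d_out × d_in` weight matrix, LoRA trains two small matrices `B` (`d_out × r`) and `A` (`r × d_in`) whose product forms the update. The dimensions below are illustrative round numbers, not Gemma-2B's exact layer shapes:

```python
# Illustrative LoRA parameter-count comparison (assumed dimensions,
# not the model's actual layer shapes).

def full_param_count(d_out: int, d_in: int) -> int:
    # A full fine-tune updates every entry of the weight matrix.
    return d_out * d_in

def lora_param_count(d_out: int, d_in: int, r: int) -> int:
    # LoRA only trains B (d_out x r) and A (r x d_in).
    return d_out * r + r * d_in

# Hypothetical square projection, roughly Gemma-2B-scale.
d_out, d_in, r = 2048, 2048, 32

full = full_param_count(d_out, d_in)
lora = lora_param_count(d_out, d_in, r)
print(f"full: {full:,} params, LoRA r=32: {lora:,} params "
      f"({100 * lora / full:.1f}% of full)")
```

For a 2048×2048 projection this works out to 131,072 trainable adapter parameters versus 4,194,304 for a full update, i.e. about 3.1% — which, combined with the 4-bit quantized base weights, is what makes training on modest hardware feasible.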
🚨 Disclaimer: This model is very much a work in progress and is still being tested for performance, reliability, and generalization. Expect quirks, inconsistencies, and potential overfitting in responses.
This was made possible thanks to @unsloth. I am still very new to fine-tuning Large Language Models, so this is more a showcase of my learning journey than a finished model. It's very experimental, and I don't recommend downloading or testing it yet.