|
--- |
|
base_model: unsloth/gemma-2-9b-bnb-4bit |
|
language: |
|
- en |
|
license: apache-2.0 |
|
tags: |
|
- text-generation-inference |
|
- transformers |
|
- unsloth |
|
- gemma2 |
|
- trl |
|
- sft |
|
datasets: |
|
- yahma/alpaca-cleaned |
|
--- |
|
|
|
# Uploaded model |
|
|
|
- **Developed by:** NotAiLOL |
|
- **License:** apache-2.0 |
|
- **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit
|
|
|
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's [TRL](https://github.com/huggingface/trl) library.
|
|
|
# Details |
|
|
|
This model was fine-tuned from unsloth/gemma-2-9b-bnb-4bit on the [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset using the **QDoRA** method (DoRA adapters on a 4-bit quantized base model).
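
The training itself used Unsloth with TRL's SFT tooling; purely as an illustration, a QDoRA-style setup can be sketched with Hugging Face PEFT by enabling `use_dora=True` on the 4-bit base. The rank, alpha, dropout, and target modules below are assumptions, not the recorded hyperparameters of this run:

```python
# Illustrative QDoRA sketch: DoRA adapters (use_dora=True) on top of the
# 4-bit bitsandbytes base checkpoint. Rank, alpha, dropout, and target
# modules are assumptions, not the hyperparameters used for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-2-9b-bnb-4bit",  # base checkpoint is pre-quantized to 4-bit
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-9b-bnb-4bit")

peft_config = LoraConfig(
    r=16,              # assumed adapter rank
    lora_alpha=16,     # assumed scaling factor
    lora_dropout=0.0,
    use_dora=True,     # DoRA: weight-decomposed low-rank adaptation
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```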
|
|
|
This model reached a training loss of 0.8172 on the alpaca-cleaned dataset after 120 training steps.
|
|
|
This model follows the Alpaca prompt format:
|
|
|
``` |
|
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. |
|
|
|
### Instruction: |
|
{} |
|
|
|
### Input: |
|
{} |
|
|
|
### Response: |
|
{} |
|
``` |
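
For example, the template can be filled in Python before tokenization (a minimal sketch; the instruction and input strings are illustrative, and the response slot is left empty so the model completes it):

```python
# Fill the Alpaca template before tokenization. The instruction and
# input below are illustrative examples.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

prompt = alpaca_prompt.format(
    "Name three primary colors.",  # instruction
    "",                            # input (empty when no context is needed)
    "",                            # response, left empty for generation
)
```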
|
|
|
## Training |
|
|
|
This model was trained on a single NVIDIA Tesla T4 GPU.
|
|
|
- 2047.62 seconds (34.13 minutes) used for training.
|
- Peak reserved memory = 9.562 GB. |
|
- Peak reserved memory for training = 2.216 GB. |
|
- Peak reserved memory % of max memory = 64.836 %. |
|
- Peak reserved memory for training % of max memory = 15.026 %. |
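
Once uploaded, the model can be loaded for generation through Unsloth. A minimal sketch, assuming a placeholder repository id and a 2048-token context (replace `NotAiLOL/<this-repo>` with the actual id):

```python
# Minimal inference sketch with Unsloth. The repository id and
# max_seq_length are placeholders/assumptions, not recorded values.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NotAiLOL/<this-repo>",  # hypothetical placeholder id
    max_seq_length=2048,                # assumed context length
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast generation path

prompt = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\nName three primary colors.\n\n"
    "### Input:\n\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```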
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |