---
base_model: Onuii/DAMI-base-merge-0408
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Onuii
- **License:** apache-2.0
- **Finetuned from model:** Onuii/DAMI-base-merge-0408
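
A minimal loading and generation sketch with Unsloth is shown below. The repository id and prompt are placeholders, since the card does not state the final uploaded model id or the expected prompt format.

```python
from unsloth import FastLanguageModel

# Hypothetical repository id -- replace with the actual uploaded model id.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Onuii/DAMI-lora-sft-digging-0409",
    max_seq_length=8192,
    load_in_4bit=False,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast generation path

prompt = "Summarize the key points of the lecture."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```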
The LoRA SFT run was launched with the following training arguments:

```bash
--run_name "DAMI-lora-sft-digging-0409" \
--model_name "Onuii/DAMI-base-merge-0408" \
--dataset_name "Onuii/DAMI-lecture-DB-Dataset-0408-digging" \
--max_seq_length 8192 \
--dtype "bfloat16" \
--load_in_4bit False \
--r 16 \
--lora_alpha 32 \
--lora_dropout 0.1 \
--use_rslora True \
--batch_size 16 \
--gradient_accumulation_steps 2 \
--learning_rate 1e-5 \
--warmup_ratio 0.05 \
--num_train_epochs 2 \
--optimizer "adamw_8bit" \
--weight_decay 0.01 \
--lr_scheduler_type "linear" \
--output_dir "../outputs" \
--logging_dir "../logs" \
--logging_steps 1 \
--save_steps 200 \
--save_total_limit 2 \
--report_to "wandb" \
--use_gradient_checkpointing False \
--save_strategy "steps" \
--random_seed 562
```
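
The training script itself is not included in the card, so the following is a sketch of how these arguments would map onto Unsloth's `FastLanguageModel` and TRL's `SFTTrainer`. It assumes an older TRL API where `SFTTrainer` accepts `dataset_text_field` and `max_seq_length` directly, a `text` column in the dataset, Unsloth's usual LoRA target modules, and that `batch_size` refers to the per-device batch size.

```python
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model in bfloat16 without 4-bit quantization.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Onuii/DAMI-base-merge-0408",
    max_seq_length=8192,
    dtype=torch.bfloat16,
    load_in_4bit=False,
)

# Attach rank-stabilized LoRA adapters matching the listed hyperparameters.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    use_rslora=True,
    # Assumption: Unsloth's common default target modules.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing=False,
    random_state=562,
)

dataset = load_dataset("Onuii/DAMI-lecture-DB-Dataset-0408-digging", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumption: the dataset exposes a "text" column
    max_seq_length=8192,
    args=TrainingArguments(
        run_name="DAMI-lora-sft-digging-0409",
        per_device_train_batch_size=16,  # assumption: batch_size is per device
        gradient_accumulation_steps=2,
        learning_rate=1e-5,
        warmup_ratio=0.05,
        num_train_epochs=2,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        logging_steps=1,
        save_strategy="steps",
        save_steps=200,
        save_total_limit=2,
        report_to="wandb",
        output_dir="../outputs",
        logging_dir="../logs",
        bf16=True,
        seed=562,
    ),
)
trainer.train()
```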
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)