---
library_name: peft
license: gemma
base_model: google/gemma-2-9b-it
tags:
  - generated_from_trainer
model-index:
  - name: gemma-2-9b-it-009-3000
    results: []
---

# gemma-2-9b-it-009-3000

This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 0.5613

## Model description

More information needed

## Intended uses & limitations

More information needed
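
Pending fuller documentation, the following is a minimal sketch of how one might load this PEFT adapter on top of the base model. The Hub repository id `raulgdp/gemma-2-9b-it-009-3000` is an assumption inferred from the model name and uploader, and the prompt is purely illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the fine-tuned adapter.
# The adapter repo id below is an assumption, not confirmed by this card.
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "raulgdp/gemma-2-9b-it-009-3000")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")

# Illustrative prompt using Gemma's chat template.
messages = [{"role": "user", "content": "Hello, what can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```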

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):

- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
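
The card does not include the training script, but a minimal sketch of the equivalent `transformers.TrainingArguments` is shown below, assuming a standard `Trainer` setup on a single device. `output_dir` is a hypothetical placeholder, and `fp16=True` is one plausible reading of "Native AMP" (it could equally have been `bf16=True`):

```python
from transformers import TrainingArguments

# Sketch reconstructing the hyperparameters listed above; output_dir is a
# hypothetical placeholder, not taken from the card.
training_args = TrainingArguments(
    output_dir="gemma-2-9b-it-009-3000",
    learning_rate=2e-05,
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=8,   # 2 * 8 = total_train_batch_size 16
    optim="paged_adamw_8bit",        # requires bitsandbytes
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                       # "Native AMP"; bf16 is also plausible
)
```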

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4979        | 0.8658 | 100  | 1.4989          |
| 1.3025        | 1.7273 | 200  | 1.2772          |
| 1.1098        | 2.5887 | 300  | 1.1049          |
| 0.8921        | 3.4502 | 400  | 0.9508          |
| 0.7862        | 4.3117 | 500  | 0.8289          |
| 0.6309        | 5.1732 | 600  | 0.7307          |
| 0.6198        | 6.0346 | 700  | 0.6635          |
| 0.5331        | 6.9004 | 800  | 0.6203          |
| 0.5135        | 7.7619 | 900  | 0.5891          |
| 0.5122        | 8.6234 | 1000 | 0.5707          |
| 0.428         | 9.4848 | 1100 | 0.5613          |

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
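
To reproduce this environment, the versions above can be pinned, e.g. in a `requirements.txt` (the `+cu126` suffix indicates the CUDA 12.6 build of PyTorch):

```
peft==0.15.2
transformers==4.51.3
torch==2.7.0        # +cu126 build (CUDA 12.6)
datasets==3.5.1
tokenizers==0.21.1
```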