# run_gemma-2-2b_20250507_215200
This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) on an unspecified dataset. It achieves the following results on the evaluation set; a short sketch of how the per-class metrics are computed follows the list:
- Loss: 0.3085
- Accuracy: 0.9522
- Precision General: 0.9810
- Recall General: 0.9537
- F1 General: 0.9671
- Precision Memo: 0.8831
- Recall Memo: 0.9444
- F1 Memo: 0.9128
- Precision Album: 0.8333
- Recall Album: 1.0
- F1 Album: 0.9091
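The per-class numbers above are standard one-vs-rest precision, recall, and F1 over the three labels (General, Memo, Album). The snippet below is a minimal sketch of how such metrics can be computed with scikit-learn; the label names come from the results above, while the example predictions are purely illustrative and not from the actual evaluation set.

```python
# Hedged sketch: per-class precision/recall/F1 as reported above.
# y_true / y_pred are hypothetical stand-ins, not the real eval data.
from sklearn.metrics import precision_recall_fscore_support

labels = ["General", "Memo", "Album"]  # class names from the results above
y_true = [0, 0, 1, 2, 1, 0]            # hypothetical gold labels
y_pred = [0, 0, 1, 2, 0, 0]            # hypothetical model predictions

# average=None returns one (precision, recall, F1) triple per class
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=[0, 1, 2], average=None, zero_division=0
)
for name, p, r, f in zip(labels, precision, recall, f1):
    print(f"{name}: precision={p:.4f} recall={r:.4f} f1={f:.4f}")
```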
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; a hedged `TrainingArguments` sketch reproducing them follows the list:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
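For readers who want to reproduce this configuration, the list above maps onto `transformers.TrainingArguments` roughly as follows. This is a minimal sketch under stated assumptions: the output directory is a placeholder, and any argument not in the list above is an illustrative default, not taken from the original training script.

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
# Only the values shown in the list above are from the card; the rest is assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="run_gemma-2-2b",  # placeholder output path (assumption)
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=15,
)
```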
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision General | Recall General | F1 General | Precision Memo | Recall Memo | F1 Memo | Precision Album | Recall Album | F1 Album |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.3519 | 1.0 | 147 | 0.2828 | 0.9144 | 0.9481 | 0.9349 | 0.9415 | 0.8228 | 0.9028 | 0.8609 | 1.0 | 0.2 | 0.3333 |
| 0.2311 | 2.0 | 294 | 0.6490 | 0.8801 | 0.9592 | 0.8744 | 0.9148 | 0.7363 | 0.9306 | 0.8221 | 0.4 | 0.4 | 0.4 |
| 0.2362 | 3.0 | 441 | 0.2727 | 0.9349 | 0.9375 | 0.9767 | 0.9567 | 0.9254 | 0.8611 | 0.8921 | 1.0 | 0.2 | 0.3333 |
| 0.1502 | 4.0 | 588 | 0.4032 | 0.9212 | 0.9444 | 0.9488 | 0.9466 | 0.8514 | 0.875 | 0.8630 | 1.0 | 0.4 | 0.5714 |
| 0.0399 | 5.0 | 735 | 0.3743 | 0.9452 | 0.9543 | 0.9721 | 0.9631 | 0.9155 | 0.9028 | 0.9091 | 1.0 | 0.4 | 0.5714 |
| 0.0442 | 6.0 | 882 | 0.5794 | 0.9418 | 0.9420 | 0.9814 | 0.9613 | 0.9403 | 0.875 | 0.9065 | 1.0 | 0.2 | 0.3333 |
| 0.0285 | 7.0 | 1029 | 0.5223 | 0.9349 | 0.9455 | 0.9674 | 0.9563 | 0.9 | 0.875 | 0.8873 | 1.0 | 0.4 | 0.5714 |
| 0.0 | 8.0 | 1176 | 0.6654 | 0.9384 | 0.9378 | 0.9814 | 0.9591 | 0.9394 | 0.8611 | 0.8986 | 1.0 | 0.2 | 0.3333 |
| 0.0048 | 9.0 | 1323 | 0.5900 | 0.9349 | 0.9455 | 0.9674 | 0.9563 | 0.9 | 0.875 | 0.8873 | 1.0 | 0.4 | 0.5714 |
| 0.0014 | 10.0 | 1470 | 0.6443 | 0.9384 | 0.9457 | 0.9721 | 0.9587 | 0.9130 | 0.875 | 0.8936 | 1.0 | 0.4 | 0.5714 |
| 0.0 | 11.0 | 1617 | 0.6786 | 0.9384 | 0.9417 | 0.9767 | 0.9589 | 0.9265 | 0.875 | 0.9 | 1.0 | 0.2 | 0.3333 |
| 0.0 | 12.0 | 1764 | 0.6797 | 0.9384 | 0.9417 | 0.9767 | 0.9589 | 0.9265 | 0.875 | 0.9 | 1.0 | 0.2 | 0.3333 |
| 0.0 | 13.0 | 1911 | 0.6791 | 0.9384 | 0.9417 | 0.9767 | 0.9589 | 0.9265 | 0.875 | 0.9 | 1.0 | 0.2 | 0.3333 |
| 0.0 | 14.0 | 2058 | 0.6785 | 0.9384 | 0.9417 | 0.9767 | 0.9589 | 0.9265 | 0.875 | 0.9 | 1.0 | 0.2 | 0.3333 |
| 0.0 | 15.0 | 2205 | 0.6798 | 0.9384 | 0.9417 | 0.9767 | 0.9589 | 0.9265 | 0.875 | 0.9 | 1.0 | 0.2 | 0.3333 |
### Framework versions
- PEFT 0.15.0
- Transformers 4.50.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
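Since the card lists PEFT among the framework versions, the checkpoint is presumably a PEFT adapter to be loaded on top of the base model. The snippet below is a minimal loading sketch, assuming the adapter was trained for 3-way sequence classification (the General / Memo / Album classes reported above); the task head, dtype, and tokenizer settings are assumptions, and only the repository ids come from this card.

```python
# Hedged sketch: attach the PEFT adapter to the base model for inference.
# Assumes a 3-way sequence-classification head; this is not confirmed by the card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2-2b"
adapter_id = "ethansandbar/run_gemma-2-2b_20250507_215200"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=3, torch_dtype=torch.bfloat16  # dtype is an assumption
)
model = PeftModel.from_pretrained(base, adapter_id)  # load the PEFT adapter weights
model.eval()

inputs = tokenizer("Example note to classify", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class index
```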
## Model tree
ethansandbar/run_gemma-2-2b_20250507_215200 is a PEFT adapter on top of the base model google/gemma-2-2b.