# train_stsb_1752826678
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the stsb dataset. It achieves the following results on the evaluation set:
- Loss: 1.3492
- Num Input Tokens Seen: 4364240
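If the reported evaluation loss is the usual token-level cross-entropy from causal-LM fine-tuning (an assumption; the card does not state the loss function), it corresponds to a perplexity of roughly exp(1.3492) ≈ 3.85:

```python
import math

# Assuming the reported eval loss is mean cross-entropy in nats per token
# (the default for causal-LM fine-tuning), perplexity is its exponential.
eval_loss = 1.3492
perplexity = math.exp(eval_loss)
print(f"perplexity ≈ {perplexity:.2f}")  # ≈ 3.85
```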
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
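The cosine schedule with `lr_scheduler_warmup_ratio: 0.1` ramps the learning rate linearly to 5e-05 over the first 10% of steps, then decays it to zero along a cosine curve. A minimal re-implementation sketch (mirroring the behavior of Transformers' `get_cosine_schedule_with_warmup`; step counts are taken from the results table below):

```python
import math

BASE_LR = 5e-05
TOTAL_STEPS = 12940                     # 10 epochs x 1294 optimizer steps/epoch
WARMUP_STEPS = int(0.1 * TOTAL_STEPS)   # lr_scheduler_warmup_ratio: 0.1 -> 1294

def lr_at(step: int) -> float:
    """Learning rate at a given optimizer step: linear warmup, cosine decay."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / max(1, WARMUP_STEPS)
    progress = (step - WARMUP_STEPS) / max(1, TOTAL_STEPS - WARMUP_STEPS)
    return BASE_LR * 0.5 * (1.0 + math.cos(math.pi * progress))

# lr_at(0) == 0.0, lr_at(1294) == 5e-05, lr_at(12940) == 0.0
```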
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---|---|---|---|---|
| 6.6431 | 0.5 | 647 | 6.9944 | 217472 |
| 4.7505 | 1.0 | 1294 | 4.8763 | 435488 |
| 3.6527 | 1.5 | 1941 | 3.6522 | 652480 |
| 2.7364 | 2.0 | 2588 | 3.2192 | 871200 |
| 2.5943 | 2.5 | 3235 | 2.8565 | 1089120 |
| 2.2267 | 3.0 | 3882 | 2.4612 | 1307968 |
| 1.8641 | 3.5 | 4529 | 2.1106 | 1529024 |
| 1.4517 | 4.0 | 5176 | 1.8600 | 1745568 |
| 1.8002 | 4.5 | 5823 | 1.7016 | 1965984 |
| 2.0368 | 5.0 | 6470 | 1.5960 | 2182352 |
| 1.3161 | 5.5 | 7117 | 1.5229 | 2399760 |
| 1.3774 | 6.0 | 7764 | 1.4644 | 2619888 |
| 1.2608 | 6.5 | 8411 | 1.4241 | 2837808 |
| 1.0822 | 7.0 | 9058 | 1.3955 | 3057216 |
| 1.5657 | 7.5 | 9705 | 1.3755 | 3275904 |
| 1.1048 | 8.0 | 10352 | 1.3634 | 3493600 |
| 1.4693 | 8.5 | 10999 | 1.3561 | 3712320 |
| 1.3292 | 9.0 | 11646 | 1.3501 | 3928704 |
| 0.8175 | 9.5 | 12293 | 1.3492 | 4147200 |
| 1.2007 | 10.0 | 12940 | 1.3493 | 4364240 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
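Since PEFT appears in the framework list, this repository presumably holds an adapter (e.g. LoRA) on top of the base model rather than full fine-tuned weights. A minimal loading sketch, assuming `transformers` and `peft` are installed and you have access to the gated Llama 3 base weights (the function is defined but not executed here):

```python
def load_adapter(adapter_id: str = "rbelanec/train_stsb_1752826678"):
    """Attach this PEFT adapter to the Meta-Llama-3-8B-Instruct base model.

    Downloads several GB of weights and requires approval for the gated
    meta-llama repository, so imports are kept inside the function.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_id)
    return model, tokenizer
```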
## Model tree for rbelanec/train_stsb_1752826678

Base model: meta-llama/Meta-Llama-3-8B-Instruct