# sft-count_loss-t5-v1_1-base-mle0.5-ul0.5-tox1.0-e10
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 2.8154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- num_epochs: 10
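A couple of the numbers above follow from the others. As a quick sanity check (using only values reported in this card; the step-to-epoch ratio comes from the first row of the results table below), the effective batch size and the approximate training-set size can be derived like this:

```python
# Effective (total) train batch size implied by the hyperparameters above.
train_batch_size = 4
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 8, matching total_train_batch_size in the card

# The results table reports step 200 at epoch 0.2899, so one epoch is
# roughly 200 / 0.2899 ≈ 690 optimizer steps, i.e. ~5520 training examples.
steps_per_epoch = round(200 / 0.2899)
approx_train_examples = steps_per_epoch * total_train_batch_size
print(steps_per_epoch, approx_train_examples)
```

Note these are estimates inferred from the logged epoch fractions, not values stated by the training script itself.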
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.0162 | 0.2899 | 200 | 2.8666 |
| 2.4715 | 0.5797 | 400 | 2.1468 |
| 2.2726 | 0.8696 | 600 | 2.6145 |
| 2.2269 | 1.1594 | 800 | 2.5553 |
| 2.129 | 1.4493 | 1000 | 2.5554 |
| 2.1178 | 1.7391 | 1200 | 2.5770 |
| 2.0445 | 2.0290 | 1400 | 2.4566 |
| 2.0342 | 2.3188 | 1600 | 2.5303 |
| 2.0212 | 2.6087 | 1800 | 2.5145 |
| 2.022 | 2.8986 | 2000 | 2.5316 |
| 2.0218 | 3.1884 | 2200 | 2.6211 |
| 2.0083 | 3.4783 | 2400 | 2.5134 |
| 2.0286 | 3.7681 | 2600 | 2.5617 |
| 2.004 | 4.0580 | 2800 | 2.6998 |
| 2.0451 | 4.3478 | 3000 | 2.7486 |
| 2.0148 | 4.6377 | 3200 | 2.6602 |
| 2.0174 | 4.9275 | 3400 | 2.6803 |
| 2.0045 | 5.2174 | 3600 | 2.6916 |
| 2.0335 | 5.5072 | 3800 | 2.7176 |
| 2.0442 | 5.7971 | 4000 | 2.6628 |
| 2.0538 | 6.0870 | 4200 | 2.6963 |
| 2.0806 | 6.3768 | 4400 | 2.7406 |
| 2.0362 | 6.6667 | 4600 | 2.7027 |
| 2.0735 | 6.9565 | 4800 | 2.7394 |
| 2.0998 | 7.2464 | 5000 | 2.8079 |
| 2.0832 | 7.5362 | 5200 | 2.8369 |
| 2.0284 | 7.8261 | 5400 | 2.7515 |
| 2.0722 | 8.1159 | 5600 | 2.8469 |
| 2.1144 | 8.4058 | 5800 | 2.8835 |
| 2.1753 | 8.6957 | 6000 | 2.8898 |
| 2.133 | 8.9855 | 6200 | 2.8389 |
| 2.1348 | 9.2754 | 6400 | 2.8973 |
| 2.1297 | 9.5652 | 6600 | 2.8732 |
| 2.1861 | 9.8551 | 6800 | 2.8154 |
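Since the validation loss is a token-level cross-entropy, it maps directly to perplexity via `exp(loss)`. A small check of the two headline numbers from the table, assuming the loss is mean negative log-likelihood in nats:

```python
import math

# Final validation loss (epoch ~9.86, step 6800) reported in the card.
final_eval_loss = 2.8154
# Lowest validation loss in the table (epoch ~0.58, step 400).
best_eval_loss = 2.1468

# Perplexity = exp(cross-entropy loss).
print(round(math.exp(final_eval_loss), 1))  # ~16.7
print(round(math.exp(best_eval_loss), 1))   # ~8.6
```

Note that the best validation loss occurs early (step 400); the later rows show it drifting back up, which is worth keeping in mind when choosing a checkpoint.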
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
Model tree for TarhanE/sft-count_loss-t5-v1_1-base-mle0.5-ul0.5-tox1.0-e10:
- Base model: google/t5-v1_1-base