# velectra-base_v2
This model is a fine-tuned version of [FPTAI/velectra-base-discriminator-cased](https://huggingface.co/FPTAI/velectra-base-discriminator-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5503
- Accuracy: 0.9242
- Precision Macro: 0.8370
- Recall Macro: 0.7946
- F1 Macro: 0.8125
- F1 Weighted: 0.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
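The task is not documented, but the metric set (accuracy plus macro and weighted F1) suggests a single-label text classification head. Below is a minimal inference sketch, assuming the checkpoint is published on the Hub as `aiface/velectra-base_v2` and loads as a sequence-classification model; the label names depend on the (undocumented) training data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "aiface/velectra-base_v2"  # assumed Hub id for this checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Example input; the base model is FPT AI's Vietnamese ELECTRA discriminator,
# so Vietnamese text is the expected input.
text = "Ví dụ câu cần phân loại."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred_id, pred_id))
```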
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
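For reference, these settings map onto `transformers.TrainingArguments` roughly as follows. This is a sketch, not the original training script: `output_dir` and the evaluation/logging cadence (epoch-level, inferred from the per-epoch results table below) are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="velectra-base_v2",   # hypothetical output path
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=2,   # 64 x 2 = 128 total train batch size
    num_train_epochs=20,
    lr_scheduler_type="linear",
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                       # "Native AMP" mixed precision
    eval_strategy="epoch",           # assumed; matches per-epoch eval rows
    logging_strategy="epoch",
)
```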
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | F1 Weighted |
|---|---|---|---|---|---|---|---|---|
| 0.5452 | 1.0 | 90 | 0.2734 | 0.9071 | 0.8647 | 0.6926 | 0.7190 | 0.8965 |
| 0.2546 | 2.0 | 180 | 0.2530 | 0.9198 | 0.8318 | 0.7882 | 0.8059 | 0.9176 |
| 0.1788 | 3.0 | 270 | 0.2528 | 0.9223 | 0.8241 | 0.7732 | 0.7929 | 0.9193 |
| 0.1323 | 4.0 | 360 | 0.2605 | 0.9261 | 0.8473 | 0.8000 | 0.8197 | 0.9241 |
| 0.0901 | 5.0 | 450 | 0.2840 | 0.9305 | 0.8839 | 0.7986 | 0.8303 | 0.9276 |
| 0.0682 | 6.0 | 540 | 0.3434 | 0.9210 | 0.8458 | 0.8007 | 0.8197 | 0.9192 |
| 0.0482 | 7.0 | 630 | 0.3689 | 0.9191 | 0.7970 | 0.8197 | 0.8073 | 0.9206 |
| 0.0443 | 8.0 | 720 | 0.3906 | 0.9223 | 0.8315 | 0.7728 | 0.7952 | 0.9191 |
| 0.0275 | 9.0 | 810 | 0.4178 | 0.9210 | 0.8717 | 0.7504 | 0.7861 | 0.9155 |
| 0.028 | 10.0 | 900 | 0.4642 | 0.9103 | 0.7837 | 0.7837 | 0.7835 | 0.9103 |
| 0.02 | 11.0 | 990 | 0.4823 | 0.9179 | 0.8459 | 0.7694 | 0.7971 | 0.9143 |
| 0.0122 | 12.0 | 1080 | 0.5070 | 0.9179 | 0.8594 | 0.7853 | 0.8136 | 0.9151 |
| 0.0098 | 13.0 | 1170 | 0.5093 | 0.9248 | 0.8387 | 0.7911 | 0.8106 | 0.9225 |
| 0.0108 | 14.0 | 1260 | 0.5309 | 0.9248 | 0.8678 | 0.7783 | 0.8098 | 0.9212 |
| 0.0101 | 15.0 | 1350 | 0.5214 | 0.9261 | 0.8623 | 0.7669 | 0.7986 | 0.9216 |
| 0.0076 | 16.0 | 1440 | 0.5352 | 0.9242 | 0.8653 | 0.7737 | 0.8054 | 0.9203 |
| 0.0042 | 17.0 | 1530 | 0.5533 | 0.9198 | 0.8163 | 0.7870 | 0.8000 | 0.9181 |
| 0.0058 | 18.0 | 1620 | 0.5503 | 0.9255 | 0.8574 | 0.7871 | 0.8138 | 0.9225 |
| 0.0034 | 19.0 | 1710 | 0.5590 | 0.9248 | 0.8349 | 0.8035 | 0.8173 | 0.9233 |
| 0.0029 | 20.0 | 1800 | 0.5503 | 0.9242 | 0.8370 | 0.7946 | 0.8125 | 0.9222 |
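The metric columns in the table can be reproduced with a `compute_metrics` callback along these lines (a sketch assuming single-label classification and scikit-learn; the exact function used during training is not documented):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is (logits, labels) as passed by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    prec, rec, f1_macro, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    _, _, f1_weighted, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision_macro": prec,
        "recall_macro": rec,
        "f1_macro": f1_macro,
        "f1_weighted": f1_weighted,
    }
```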
### Framework versions
- Transformers 4.55.0
- PyTorch 2.7.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4