# mdeberta-v3-base-subjectivity-sentiment-italian
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) for Italian subjectivity/sentiment classification (the fine-tuning dataset is not specified in this card). It achieves the following results on the evaluation set:
- Loss: 0.6602
- Macro F1: 0.7437
- Macro P: 0.7322
- Macro R: 0.7690
- Subj F1: 0.6437
- Subj P: 0.5696
- Subj R: 0.7401
- Accuracy: 0.7826
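
A minimal sketch of loading the fine-tuned checkpoint for inference with the 🤗 Transformers pipeline. The repository id and the returned label names are assumptions, since the card does not specify them; check the published checkpoint and its `id2label` mapping before relying on them.

```python
# Hedged inference sketch; the model id below is assumed, not confirmed by this card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mdeberta-v3-base-subjectivity-sentiment-italian",  # hypothetical repo id
)

print(classifier("Questo film è un capolavoro assoluto."))
# Expected output shape: [{"label": "...", "score": ...}]
```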
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `Trainer` configuration follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
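
The hyperparameters above map onto the 🤗 Transformers `Trainer` API roughly as follows. This is a hedged sketch, not the original training script: the base checkpoint (`microsoft/mdeberta-v3-base`) and the binary label count are inferred from the model name, and dataset loading is omitted because the card does not specify the data.

```python
# Hedged training-configuration sketch; base model and num_labels are assumptions.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "microsoft/mdeberta-v3-base"  # inferred from the fine-tuned model's name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

training_args = TrainingArguments(
    output_dir="mdeberta-v3-base-subjectivity-sentiment-italian",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=6,
)

# Dataset loading/preprocessing is omitted here because the card does not specify it:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=..., eval_dataset=...)
# trainer.train()
```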
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Macro P | Macro R | Subj F1 | Subj P | Subj R | Accuracy |
|---------------|-------|------|-----------------|----------|---------|---------|---------|--------|--------|----------|
| No log        | 1.0   | 101  | 0.6392          | 0.7244   | 0.7284  | 0.7208  | 0.5913  | 0.6071 | 0.5763 | 0.7886   |
| No log        | 2.0   | 202  | 0.5375          | 0.6731   | 0.7018  | 0.7579  | 0.6064  | 0.4548 | 0.9096 | 0.6867   |
| No log        | 3.0   | 303  | 0.5731          | 0.7453   | 0.7373  | 0.7563  | 0.6349  | 0.5970 | 0.6780 | 0.7931   |
| No log        | 4.0   | 404  | 0.5788          | 0.7522   | 0.7405  | 0.7752  | 0.6534  | 0.5848 | 0.7401 | 0.7916   |
| 0.4395        | 5.0   | 505  | 0.6922          | 0.7491   | 0.7400  | 0.7628  | 0.6423  | 0.5971 | 0.6949 | 0.7946   |
| 0.4395        | 6.0   | 606  | 0.6602          | 0.7437   | 0.7322  | 0.7690  | 0.6437  | 0.5696 | 0.7401 | 0.7826   |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0