# mdeberta-v3-base-subjectivity-sentiment-multilingual

This model is a fine-tuned version of mDeBERTa-v3-base (per the model name; the training dataset is not specified). It achieves the following results on the evaluation set:
- Loss: 0.7762
- Macro F1: 0.7580
- Macro P: 0.7558
- Macro R: 0.7614
- Subj F1: 0.7100
- Subj P: 0.6878
- Subj R: 0.7336
- Accuracy: 0.7676
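The Macro scores above average the per-class precision, recall, and F1 with equal weight, while the Subj scores measure only the subjective class. A minimal sketch of how these quantities relate, using invented labels and predictions purely for illustration:

```python
def prf(labels, preds, positive):
    """Precision, recall, and F1 for one class treated as positive."""
    tp = sum(1 for y, p in zip(labels, preds) if y == positive and p == positive)
    fp = sum(1 for y, p in zip(labels, preds) if y != positive and p == positive)
    fn = sum(1 for y, p in zip(labels, preds) if y == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy data (not from the model's evaluation set)
labels = ["SUBJ", "OBJ", "OBJ", "SUBJ", "OBJ", "SUBJ"]
preds  = ["SUBJ", "OBJ", "SUBJ", "SUBJ", "OBJ", "OBJ"]

per_class = {c: prf(labels, preds, c) for c in ("SUBJ", "OBJ")}

# Macro metrics: unweighted mean over classes
macro_p  = sum(p for p, _, _ in per_class.values()) / len(per_class)
macro_r  = sum(r for _, r, _ in per_class.values()) / len(per_class)
macro_f1 = sum(f for _, _, f in per_class.values()) / len(per_class)

# Subj metrics: scores of the subjective class alone
subj_p, subj_r, subj_f1 = per_class["SUBJ"]
```

In practice a library such as scikit-learn computes the same quantities; the sketch only makes the averaging explicit.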
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 6
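With a linear scheduler, the learning rate decays from 1e-05 toward 0 over the full run, which the results table below puts at 2412 steps (402 steps/epoch × 6 epochs). A minimal sketch, assuming no warmup since none is listed:

```python
LEARNING_RATE = 1e-05
TOTAL_STEPS = 2412  # 402 steps/epoch * 6 epochs, per the training results

def linear_lr(step, base_lr=LEARNING_RATE, total_steps=TOTAL_STEPS):
    """Linearly decay the learning rate from base_lr to 0 over total_steps."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

lr_start = linear_lr(0)        # full learning rate at step 0
lr_mid = linear_lr(1206)       # half the base rate at the midpoint
lr_end = linear_lr(2412)       # decayed to 0 at the final step
```

In Transformers this schedule is what `lr_scheduler_type: linear` selects; the function above only illustrates the shape of the decay.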
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Macro P | Macro R | Subj F1 | Subj P | Subj R | Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 402 | 0.5154 | 0.6964 | 0.7341 | 0.7337 | 0.6969 | 0.5685 | 0.9001 | 0.6964 |
| 0.6027 | 2.0 | 804 | 0.5061 | 0.7264 | 0.7402 | 0.7508 | 0.7086 | 0.6055 | 0.8539 | 0.7276 |
| 0.4707 | 3.0 | 1206 | 0.6328 | 0.7387 | 0.7389 | 0.7511 | 0.7036 | 0.6373 | 0.7852 | 0.7434 |
| 0.3996 | 4.0 | 1608 | 0.7000 | 0.7519 | 0.7556 | 0.7492 | 0.6903 | 0.7128 | 0.6692 | 0.7672 |
| 0.3579 | 5.0 | 2010 | 0.7443 | 0.7476 | 0.7485 | 0.7614 | 0.7154 | 0.6440 | 0.8045 | 0.7518 |
| 0.3579 | 6.0 | 2412 | 0.7762 | 0.7580 | 0.7558 | 0.7614 | 0.7100 | 0.6878 | 0.7336 | 0.7676 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0