---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased-finetuned-sst-2-english
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: sucidal-text-classification-distillbert
    results: []
---

# sucidal-text-classification-distillbert

This model is a fine-tuned version of [distilbert/distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset. It achieves the following results on the evaluation set (matching the step-5000 checkpoint in the results table below):

- Loss: 0.4590
- Accuracy: 0.8198
- F1 Score: 0.8198
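The checkpoint loads like any Hub text-classification model. A minimal inference sketch; the repo id is an assumption inferred from the model name in this card's metadata, not stated explicitly:

```python
from transformers import pipeline

# Assumed Hub repo id (owner/name); adjust to the actual path if it differs.
classifier = pipeline(
    "text-classification",
    model="pradanaadn/sucidal-text-classification-distillbert",
)

# Returns a list of {'label': ..., 'score': ...} dicts, one per input.
print(classifier("I don't see a way forward anymore."))
```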

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mapped onto a `TrainingArguments` sketch after the list):

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
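A sketch of how these settings map onto `transformers.TrainingArguments`; the actual training script is not included in this card, and the evaluation cadence is inferred from the 500-step intervals in the results table:

```python
from transformers import TrainingArguments

# The optimizer line above (Adam, betas=(0.9, 0.999), epsilon=1e-08)
# matches the Trainer's default AdamW settings, so it needs no explicit
# flag here.
training_args = TrainingArguments(
    output_dir="sucidal-text-classification-distillbert",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    eval_strategy="steps",   # assumed; the table logs metrics every 500 steps
    eval_steps=500,
    logging_steps=500,
)
```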

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|
| 0.3602        | 0.1885 | 500  | 0.7867          | 0.7670   | 0.7670   |
| 0.6285        | 0.3769 | 1000 | 0.5574          | 0.7795   | 0.7795   |
| 0.5624        | 0.5654 | 1500 | 0.5011          | 0.7988   | 0.7988   |
| 0.5413        | 0.7539 | 2000 | 0.4968          | 0.8017   | 0.8017   |
| 0.5084        | 0.9423 | 2500 | 0.4712          | 0.8085   | 0.8085   |
| 0.4253        | 1.1308 | 3000 | 0.4938          | 0.8053   | 0.8053   |
| 0.3915        | 1.3193 | 3500 | 0.4781          | 0.8136   | 0.8136   |
| 0.3739        | 1.5077 | 4000 | 0.5195          | 0.8043   | 0.8043   |
| 0.3638        | 1.6962 | 4500 | 0.4790          | 0.8201   | 0.8201   |
| 0.3667        | 1.8847 | 5000 | 0.4590          | 0.8198   | 0.8198   |
| 0.3182        | 2.0731 | 5500 | 0.5129          | 0.8218   | 0.8218   |
| 0.2325        | 2.2616 | 6000 | 0.5279          | 0.8198   | 0.8198   |
| 0.2318        | 2.4501 | 6500 | 0.5368          | 0.8197   | 0.8197   |
| 0.2219        | 2.6385 | 7000 | 0.5606          | 0.8221   | 0.8221   |
| 0.2261        | 2.8270 | 7500 | 0.5406          | 0.8229   | 0.8229   |
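Accuracy and F1 Score are identical at every checkpoint, which is what micro-averaged F1 yields on single-label classification (it reduces to accuracy). A hypothetical `compute_metrics` consistent with that; the card omits the actual evaluation code, so the averaging mode is an assumption:

```python
import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes in.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(
            predictions=predictions, references=labels
        )["accuracy"],
        # "micro" is an assumption; it makes F1 equal accuracy, which
        # matches the identical columns in the table above.
        "f1": f1_metric.compute(
            predictions=predictions, references=labels, average="micro"
        )["f1"],
    }
```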

### Framework versions

- Transformers 4.41.2
- PyTorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1