---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Misinformation-Covid-LowLearningRatebert-base-multilingual-cased
  results: []
---

# Misinformation-Covid-LowLearningRatebert-base-multilingual-cased

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:

- Loss: 0.5774
- F1: 0.0488
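For inference, a minimal sketch using the `transformers` pipeline API. The repo id below is an assumption pieced together from the model name and is not stated on this card; the label names and their mapping to classes are likewise undocumented:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Assumed repo id; the card does not state where the model is hosted.
model_id = "Ghunghru/Misinformation-Covid-LowLearningRatebert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Garlic water cures COVID-19."))
# e.g. [{'label': 'LABEL_0', 'score': 0.63}] -- label semantics are not documented here
```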

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 2e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
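As a sketch only, the list above maps onto `TrainingArguments` roughly as follows. The Adam betas and epsilon shown are the library defaults, `evaluation_strategy="epoch"` is inferred from the per-epoch validation log in the results table, and the dataset and `Trainer` wiring are not documented on this card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Misinformation-Covid-LowLearningRatebert-base-multilingual-cased",
    learning_rate=2e-07,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,            # HF default
    adam_beta2=0.999,          # HF default
    adam_epsilon=1e-08,        # HF default
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # inferred from the per-epoch eval log
)
```

Note that 2e-07 is roughly two orders of magnitude below the 2e-05 commonly used for BERT fine-tuning, which is consistent with the very slow loss decay in the results table below.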

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6829 | 1.0 | 189 | 0.6704 | 0.1463 |
| 0.673 | 2.0 | 378 | 0.6340 | 0.0784 |
| 0.6543 | 3.0 | 567 | 0.6453 | 0.0 |
| 0.6519 | 4.0 | 756 | 0.6439 | 0.0 |
| 0.6598 | 5.0 | 945 | 0.6427 | 0.0 |
| 0.65 | 6.0 | 1134 | 0.6416 | 0.0 |
| 0.673 | 7.0 | 1323 | 0.6415 | 0.0 |
| 0.6573 | 8.0 | 1512 | 0.6411 | 0.0 |
| 0.6641 | 9.0 | 1701 | 0.6404 | 0.0 |
| 0.667 | 10.0 | 1890 | 0.6398 | 0.0 |
| 0.6646 | 11.0 | 2079 | 0.6387 | 0.0 |
| 0.6552 | 12.0 | 2268 | 0.6377 | 0.0 |
| 0.6617 | 13.0 | 2457 | 0.6368 | 0.0 |
| 0.649 | 14.0 | 2646 | 0.6352 | 0.0 |
| 0.663 | 15.0 | 2835 | 0.6338 | 0.0 |
| 0.6506 | 16.0 | 3024 | 0.6322 | 0.0 |
| 0.6627 | 17.0 | 3213 | 0.6306 | 0.0 |
| 0.6492 | 18.0 | 3402 | 0.6288 | 0.0 |
| 0.6457 | 19.0 | 3591 | 0.6262 | 0.0 |
| 0.6448 | 20.0 | 3780 | 0.6238 | 0.0 |
| 0.6431 | 21.0 | 3969 | 0.6211 | 0.0 |
| 0.6412 | 22.0 | 4158 | 0.6189 | 0.0 |
| 0.6333 | 23.0 | 4347 | 0.6151 | 0.0 |
| 0.6435 | 24.0 | 4536 | 0.6121 | 0.0 |
| 0.6325 | 25.0 | 4725 | 0.6092 | 0.0 |
| 0.6271 | 26.0 | 4914 | 0.6047 | 0.0 |
| 0.6234 | 27.0 | 5103 | 0.6018 | 0.0 |
| 0.6185 | 28.0 | 5292 | 0.5993 | 0.0 |
| 0.6274 | 29.0 | 5481 | 0.5964 | 0.0 |
| 0.6129 | 30.0 | 5670 | 0.5942 | 0.0 |
| 0.6204 | 31.0 | 5859 | 0.5921 | 0.0 |
| 0.6044 | 32.0 | 6048 | 0.5913 | 0.0 |
| 0.6103 | 33.0 | 6237 | 0.5891 | 0.0 |
| 0.6005 | 34.0 | 6426 | 0.5868 | 0.0 |
| 0.6058 | 35.0 | 6615 | 0.5865 | 0.0 |
| 0.6179 | 36.0 | 6804 | 0.5846 | 0.0 |
| 0.6077 | 37.0 | 6993 | 0.5835 | 0.0 |
| 0.5964 | 38.0 | 7182 | 0.5832 | 0.0 |
| 0.6106 | 39.0 | 7371 | 0.5813 | 0.0 |
| 0.5865 | 40.0 | 7560 | 0.5816 | 0.0 |
| 0.6142 | 41.0 | 7749 | 0.5795 | 0.0 |
| 0.5903 | 42.0 | 7938 | 0.5790 | 0.0 |
| 0.5926 | 43.0 | 8127 | 0.5790 | 0.0 |
| 0.6077 | 44.0 | 8316 | 0.5786 | 0.0 |
| 0.6025 | 45.0 | 8505 | 0.5780 | 0.0 |
| 0.604 | 46.0 | 8694 | 0.5771 | 0.0488 |
| 0.5875 | 47.0 | 8883 | 0.5774 | 0.0488 |
| 0.5797 | 48.0 | 9072 | 0.5775 | 0.0488 |
| 0.6054 | 49.0 | 9261 | 0.5774 | 0.0488 |
| 0.5974 | 50.0 | 9450 | 0.5774 | 0.0488 |
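The F1 score sits at 0.0 for most of training and only reaches 0.0488 in the final epochs, which suggests the classifier mostly predicted a single class at this very low learning rate. The card does not include the training script; a plausible `compute_metrics` sketch for the reported `f1` metric, assuming binary classification and the `evaluate` library:

```python
import numpy as np
import evaluate

# Hypothetical reconstruction: the card only states that `f1` was tracked,
# not how the metric was configured.
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # class with the highest logit
    return f1_metric.compute(predictions=predictions, references=labels)
```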

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3
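To check a local environment against the versions listed above, a small sketch (it assumes the standard import names for these packages):

```python
# Compare installed library versions with the ones reported on this card.
expected = {
    "transformers": "4.32.1",
    "torch": "2.1.2",
    "datasets": "2.12.0",
    "tokenizers": "0.13.3",
}

for module_name, wanted in expected.items():
    installed = __import__(module_name).__version__
    status = "OK" if installed == wanted else f"mismatch (have {installed})"
    print(f"{module_name} {wanted}: {status}")
```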