---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_1x_deit_tiny_adamax_001_fold1
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8580968280467446
---

# smids_1x_deit_tiny_adamax_001_fold1

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set (an inference sketch follows the list below):

- Loss: 1.1082
- Accuracy: 0.8581
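
A minimal sketch for loading the checkpoint with the `transformers` image-classification pipeline. The repo id `hkivancoral/smids_1x_deit_tiny_adamax_001_fold1` and the example image path are assumptions, not confirmed by this card; substitute your own paths as needed.

```python
from transformers import pipeline
from PIL import Image

# Repo id assumed from the model name above; point this at a local
# output directory instead if the checkpoint is not on the Hub.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_1x_deit_tiny_adamax_001_fold1",
)

image = Image.open("example.png")  # hypothetical input image
predictions = classifier(image)    # list of {"label": ..., "score": ...} dicts
print(predictions)
```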

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows this list):

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
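
For reference, a sketch of how the listed settings map onto `TrainingArguments`, assuming the standard `Trainer` API (consistent with the `generated_from_trainer` tag). The output directory and the per-epoch evaluation strategy are assumptions inferred from the model name and the per-epoch results table below; the original training script is not included in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_1x_deit_tiny_adamax_001_fold1",  # assumed output dir
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    # betas=(0.9, 0.999) and epsilon=1e-08 listed above are the Trainer
    # defaults, so they are not set explicitly here.
    evaluation_strategy="epoch",  # metrics are reported once per epoch
)
```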

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7889        | 1.0   | 76   | 0.6069          | 0.7679   |
| 0.5102        | 2.0   | 152  | 0.5610          | 0.7997   |
| 0.4217        | 3.0   | 228  | 0.8352          | 0.6995   |
| 0.4159        | 4.0   | 304  | 0.5497          | 0.7813   |
| 0.3589        | 5.0   | 380  | 0.5194          | 0.8114   |
| 0.276         | 6.0   | 456  | 0.5883          | 0.8097   |
| 0.2664        | 7.0   | 532  | 0.6189          | 0.7846   |
| 0.1835        | 8.0   | 608  | 0.6054          | 0.8280   |
| 0.2342        | 9.0   | 684  | 0.5718          | 0.7997   |
| 0.1616        | 10.0  | 760  | 0.4962          | 0.8347   |
| 0.215         | 11.0  | 836  | 0.5625          | 0.8397   |
| 0.0927        | 12.0  | 912  | 0.7589          | 0.8114   |
| 0.1215        | 13.0  | 988  | 0.6624          | 0.8280   |
| 0.1378        | 14.0  | 1064 | 0.6885          | 0.8364   |
| 0.0728        | 15.0  | 1140 | 0.7416          | 0.8297   |
| 0.0869        | 16.0  | 1216 | 0.6972          | 0.8514   |
| 0.036         | 17.0  | 1292 | 0.9407          | 0.8214   |
| 0.0943        | 18.0  | 1368 | 0.8304          | 0.8347   |
| 0.0241        | 19.0  | 1444 | 0.8350          | 0.8347   |
| 0.0865        | 20.0  | 1520 | 0.8686          | 0.8214   |
| 0.0747        | 21.0  | 1596 | 0.9140          | 0.8230   |
| 0.0326        | 22.0  | 1672 | 0.9287          | 0.8464   |
| 0.0264        | 23.0  | 1748 | 1.1143          | 0.8531   |
| 0.0071        | 24.0  | 1824 | 0.9573          | 0.8464   |
| 0.0572        | 25.0  | 1900 | 0.9057          | 0.8347   |
| 0.0013        | 26.0  | 1976 | 0.9807          | 0.8548   |
| 0.0325        | 27.0  | 2052 | 1.0110          | 0.8447   |
| 0.0062        | 28.0  | 2128 | 1.0638          | 0.8247   |
| 0.0158        | 29.0  | 2204 | 0.9283          | 0.8464   |
| 0.018         | 30.0  | 2280 | 0.7858          | 0.8564   |
| 0.0102        | 31.0  | 2356 | 0.9337          | 0.8614   |
| 0.0111        | 32.0  | 2432 | 0.9445          | 0.8581   |
| 0.0002        | 33.0  | 2508 | 0.9913          | 0.8514   |
| 0.0001        | 34.0  | 2584 | 1.0058          | 0.8564   |
| 0.011         | 35.0  | 2660 | 1.0207          | 0.8581   |
| 0.004         | 36.0  | 2736 | 1.0277          | 0.8564   |
| 0.0081        | 37.0  | 2812 | 1.0540          | 0.8548   |
| 0.0           | 38.0  | 2888 | 1.0713          | 0.8497   |
| 0.0038        | 39.0  | 2964 | 1.0596          | 0.8548   |
| 0.0           | 40.0  | 3040 | 1.0612          | 0.8581   |
| 0.0037        | 41.0  | 3116 | 1.0825          | 0.8564   |
| 0.0           | 42.0  | 3192 | 1.0750          | 0.8614   |
| 0.0051        | 43.0  | 3268 | 1.0851          | 0.8581   |
| 0.0           | 44.0  | 3344 | 1.0866          | 0.8564   |
| 0.0           | 45.0  | 3420 | 1.0922          | 0.8614   |
| 0.0           | 46.0  | 3496 | 1.0974          | 0.8598   |
| 0.0           | 47.0  | 3572 | 1.1025          | 0.8598   |
| 0.0028        | 48.0  | 3648 | 1.1080          | 0.8598   |
| 0.0           | 49.0  | 3724 | 1.1080          | 0.8581   |
| 0.0           | 50.0  | 3800 | 1.1082          | 0.8581   |

### Framework versions

- Transformers 4.35.2
- PyTorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0