swin-tiny-patch4-window7-224-finetuned-icpr

This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on an unknown dataset. It achieves the following results on the evaluation set (a short usage sketch follows the results):

  • Loss: 0.0479
  • Accuracy: 0.9895
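
As a quick way to try the checkpoint, it can be loaded for image classification via the transformers pipeline. This is a minimal sketch, not part of the original card: the repo id is the one this card is published under, and "example.jpg" is a placeholder path.

```python
# Minimal sketch: load this checkpoint for image classification.
# "example.jpg" is a placeholder; any RGB image path, URL, or PIL.Image works.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="chiragtubakad/swin-tiny-patch4-window7-224-finetuned-icpr",
)
print(classifier("example.jpg"))  # top label/score pairs
```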

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100

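For reproduction, the list above maps onto transformers.TrainingArguments roughly as sketched below. The output_dir, evaluation/save strategies, and best-checkpoint flags are assumptions not stated on this card; the Adam betas and epsilon listed above are the library defaults.

```python
# A sketch of the hyperparameters above expressed as transformers.TrainingArguments
# (Transformers 4.35.x API). output_dir and the eval/save/best-model settings are
# assumptions; everything else is taken directly from the list above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-icpr",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,   # 16 * 4 = total train batch size of 64
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the library default optimizer
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
    evaluation_strategy="epoch",      # assumption: the table below logs once per epoch
    save_strategy="epoch",            # assumption: must match eval strategy for best-model loading
    load_best_model_at_end=True,      # assumption: headline metrics match the epoch ~39 row
    metric_for_best_model="accuracy", # assumption
)
```
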
Training results

Training Loss Epoch Step Validation Loss Accuracy
1.1149 0.99 65 0.9590 0.7629
0.2653 2.0 131 0.1648 0.9532
0.1984 2.99 196 0.0894 0.9713
0.1719 4.0 262 0.0863 0.9685
0.1537 4.99 327 0.0810 0.9761
0.1162 6.0 393 0.0785 0.9771
0.1063 6.99 458 0.0835 0.9723
0.1392 8.0 524 0.0674 0.9761
0.1286 8.99 589 0.0788 0.9761
0.1294 10.0 655 0.0658 0.9790
0.0843 10.99 720 0.0735 0.9732
0.074 12.0 786 0.0636 0.9761
0.0734 12.99 851 0.1043 0.9751
0.0774 14.0 917 0.0898 0.9723
0.068 14.99 982 0.0719 0.9809
0.0821 16.0 1048 0.0956 0.9742
0.0576 16.99 1113 0.0725 0.9713
0.0652 18.0 1179 0.0957 0.9751
0.0712 18.99 1244 0.0809 0.9790
0.075 20.0 1310 0.1283 0.9675
0.0988 20.99 1375 0.0966 0.9742
0.0538 22.0 1441 0.1125 0.9761
0.0578 22.99 1506 0.0648 0.9828
0.0675 24.0 1572 0.0992 0.9799
0.0611 24.99 1637 0.0682 0.9818
0.0434 26.0 1703 0.0719 0.9809
0.0339 26.99 1768 0.0930 0.9780
0.0346 28.0 1834 0.0903 0.9799
0.0806 28.99 1899 0.0903 0.9799
0.0518 30.0 1965 0.0982 0.9790
0.0407 30.99 2030 0.0702 0.9828
0.0528 32.0 2096 0.0897 0.9761
0.0774 32.99 2161 0.0626 0.9818
0.053 34.0 2227 0.0576 0.9837
0.0512 34.99 2292 0.0707 0.9847
0.0388 36.0 2358 0.1040 0.9790
0.06 36.99 2423 0.0840 0.9799
0.0477 38.0 2489 0.0659 0.9857
0.0482 38.99 2554 0.0479 0.9895
0.0292 40.0 2620 0.0699 0.9818
0.0386 40.99 2685 0.1030 0.9837
0.0441 42.0 2751 0.0801 0.9818
0.0269 42.99 2816 0.1037 0.9809
0.0385 44.0 2882 0.0870 0.9799
0.0502 44.99 2947 0.1367 0.9771
0.0389 46.0 3013 0.1093 0.9771
0.0209 46.99 3078 0.0954 0.9837
0.0327 48.0 3144 0.0886 0.9857
0.0269 48.99 3209 0.0767 0.9828
0.0461 50.0 3275 0.0661 0.9857
0.0226 50.99 3340 0.0769 0.9818
0.0304 52.0 3406 0.0841 0.9828
0.0326 52.99 3471 0.1002 0.9828
0.0593 54.0 3537 0.0634 0.9847
0.0489 54.99 3602 0.0702 0.9837
0.0495 56.0 3668 0.1060 0.9809
0.0457 56.99 3733 0.0715 0.9866
0.0487 58.0 3799 0.0906 0.9818
0.0416 58.99 3864 0.0973 0.9790
0.0358 60.0 3930 0.0887 0.9857
0.0503 60.99 3995 0.0959 0.9809
0.0555 62.0 4061 0.1057 0.9780
0.0288 62.99 4126 0.0971 0.9799
0.0514 64.0 4192 0.0754 0.9847
0.0602 64.99 4257 0.0789 0.9837
0.0209 66.0 4323 0.1005 0.9837
0.0366 66.99 4388 0.1070 0.9818
0.031 68.0 4454 0.1018 0.9818
0.043 68.99 4519 0.1020 0.9828
0.0262 70.0 4585 0.0896 0.9837
0.0299 70.99 4650 0.0913 0.9837
0.0211 72.0 4716 0.0957 0.9857
0.0351 72.99 4781 0.1180 0.9818
0.0498 74.0 4847 0.1056 0.9828
0.0174 74.99 4912 0.1032 0.9809
0.0368 76.0 4978 0.1071 0.9790
0.0367 76.99 5043 0.0987 0.9828
0.027 78.0 5109 0.1037 0.9818
0.0225 78.99 5174 0.1129 0.9809
0.0241 80.0 5240 0.1202 0.9828
0.026 80.99 5305 0.1219 0.9790
0.0223 82.0 5371 0.1194 0.9799
0.0454 82.99 5436 0.1148 0.9790
0.019 84.0 5502 0.1168 0.9818
0.0269 84.99 5567 0.1246 0.9799
0.0403 86.0 5633 0.1301 0.9790
0.0294 86.99 5698 0.1204 0.9799
0.0501 88.0 5764 0.1168 0.9790
0.0361 88.99 5829 0.1143 0.9818
0.0278 90.0 5895 0.1029 0.9799
0.0267 90.99 5960 0.0991 0.9818
0.0308 92.0 6026 0.1028 0.9828
0.0246 92.99 6091 0.1031 0.9809
0.0283 94.0 6157 0.1035 0.9818
0.0278 94.99 6222 0.0999 0.9818
0.0221 96.0 6288 0.1007 0.9809
0.0197 96.99 6353 0.0989 0.9818
0.0435 98.0 6419 0.0986 0.9818
0.0266 98.99 6484 0.0987 0.9818
0.0334 99.24 6500 0.0987 0.9818
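
Note that the headline Loss (0.0479) and Accuracy (0.9895) at the top of this card match the epoch 38.99 row (step 2554) rather than the final epoch, suggesting the reported evaluation used the best validation checkpoint; by the last logged step (6500), validation loss had drifted up to 0.0987 with accuracy 0.9818.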

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.7
  • Tokenizers 0.15.0

Safetensors

  • Model size: 27.6M params
  • Tensor types: I64, F32

Model tree for chiragtubakad/swin-tiny-patch4-window7-224-finetuned-icpr

  • Fine-tuned from microsoft/swin-tiny-patch4-window7-224