---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion_classification_adjusted
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8875
---

# emotion_classification_adjusted

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8104
- Accuracy: 0.8875

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 60
- label_smoothing_factor: 0.1
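The exact training script is not included in this card, but as a rough sketch these settings map onto a `TrainingArguments` configuration along the following lines. The `output_dir` and the per-epoch evaluation strategy are assumptions (the latter inferred from the per-epoch validation results below), not recorded values:

```python
from transformers import TrainingArguments

# Sketch only: reconstructs the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="emotion_classification_adjusted",  # assumed, not stated in this card
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=60,
    label_smoothing_factor=0.1,
    eval_strategy="epoch",  # assumed from the per-epoch results table below
)
```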
### Training results

| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 2.0787 | 1.0 | 20 | 0.1625 | 2.0753 |
| 2.073 | 2.0 | 40 | 0.1187 | 2.0737 |
| 2.0599 | 3.0 | 60 | 0.1938 | 2.0585 |
| 2.0363 | 4.0 | 80 | 0.1938 | 2.0368 |
| 2.0051 | 5.0 | 100 | 0.2625 | 1.9921 |
| 1.9348 | 6.0 | 120 | 0.3375 | 1.9185 |
| 1.8466 | 7.0 | 140 | 0.375 | 1.8056 |
| 1.755 | 8.0 | 160 | 0.4313 | 1.7292 |
| 1.676 | 9.0 | 180 | 0.45 | 1.6674 |
| 1.6244 | 10.0 | 200 | 0.475 | 1.6237 |
| 1.5661 | 11.0 | 220 | 0.5062 | 1.5973 |
| 1.5252 | 12.0 | 240 | 0.5 | 1.5262 |
| 1.4729 | 13.0 | 260 | 0.55 | 1.5050 |
| 1.4203 | 14.0 | 280 | 0.55 | 1.4784 |
| 1.364 | 15.0 | 300 | 0.525 | 1.5131 |
| 1.3262 | 16.0 | 320 | 0.5125 | 1.4776 |
| 1.3102 | 17.0 | 340 | 0.5563 | 1.4200 |
| 1.2595 | 18.0 | 360 | 0.5563 | 1.4329 |
| 1.2188 | 19.0 | 380 | 0.5375 | 1.4213 |
| 1.1991 | 20.0 | 400 | 0.525 | 1.4077 |
| 1.1526 | 21.0 | 420 | 0.6062 | 1.3625 |
| 1.1225 | 22.0 | 440 | 0.5437 | 1.3745 |
| 1.1283 | 23.0 | 460 | 0.5375 | 1.3677 |
| 1.0856 | 24.0 | 480 | 0.5625 | 1.3283 |
| 1.0559 | 25.0 | 500 | 0.5687 | 1.3440 |
| 1.0102 | 26.0 | 520 | 0.5437 | 1.3357 |
| 0.9915 | 27.0 | 540 | 0.5813 | 1.3377 |
| 0.9807 | 28.0 | 560 | 0.55 | 1.3824 |
| 0.9382 | 29.0 | 580 | 0.4938 | 1.4468 |
| 0.9857 | 30.0 | 600 | 0.8125 | 0.9923 |
| 0.9956 | 31.0 | 620 | 0.7625 | 1.0361 |
| 0.9875 | 32.0 | 640 | 0.775 | 1.0310 |
| 0.9582 | 33.0 | 660 | 0.7625 | 1.0572 |
| 0.9649 | 34.0 | 680 | 0.8063 | 0.9725 |
| 0.9099 | 35.0 | 700 | 0.7562 | 1.0355 |
| 0.9339 | 36.0 | 720 | 0.7937 | 1.0129 |
| 0.9045 | 37.0 | 740 | 0.7562 | 1.0315 |
| 0.8903 | 38.0 | 760 | 0.8187 | 0.9923 |
| 0.8799 | 39.0 | 780 | 0.7625 | 1.0386 |
| 0.8664 | 40.0 | 800 | 0.7438 | 1.0626 |
| 0.8351 | 41.0 | 820 | 0.7688 | 0.9885 |
| 0.8514 | 42.0 | 840 | 0.7875 | 0.9975 |
| 0.857 | 43.0 | 860 | 0.75 | 1.0169 |
| 0.8331 | 44.0 | 880 | 0.7937 | 0.9763 |
| 0.8093 | 45.0 | 900 | 0.7937 | 0.9645 |
| 0.8303 | 46.0 | 920 | 0.8 | 0.9880 |
| 0.8077 | 47.0 | 940 | 0.8063 | 1.0094 |
| 0.8082 | 48.0 | 960 | 0.7937 | 0.9757 |
| 0.8088 | 49.0 | 980 | 0.7438 | 1.0451 |
| 0.7985 | 50.0 | 1000 | 0.7875 | 0.9850 |
| 0.8013 | 51.0 | 1020 | 0.7688 | 1.0362 |
| 0.7882 | 52.0 | 1040 | 0.775 | 1.0007 |
| 0.8051 | 53.0 | 1060 | 0.7438 | 1.0314 |
| 0.812 | 54.0 | 1080 | 0.8 | 0.9782 |
| 0.7895 | 55.0 | 1100 | 0.725 | 1.0396 |
| 0.8012 | 56.0 | 1120 | 0.7688 | 0.9894 |
| 0.7973 | 57.0 | 1140 | 0.7875 | 0.9981 |
| 0.7946 | 58.0 | 1160 | 0.8063 | 0.9754 |
| 0.8437 | 59.0 | 1180 | 0.85 | 0.8544 |
| 0.8489 | 60.0 | 1200 | 0.7991 | 0.9062 |

### Framework versions

- Transformers 4.48.3
- Pytorch 2.5.1
- Datasets 3.2.0
- Tokenizers 0.21.0
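Since the usage sections above are still placeholders, here is a minimal inference sketch using the `image-classification` pipeline. The repo id is a placeholder (this card does not state where the checkpoint is hosted), and the emotion label set is not documented here:

```python
from transformers import pipeline

# Sketch only: "your-username/emotion_classification_adjusted" is a
# placeholder; substitute the actual Hub repo id or a local path.
classifier = pipeline(
    "image-classification",
    model="your-username/emotion_classification_adjusted",
)

# Accepts a local path, URL, or PIL.Image; the labels come from the
# fine-tuned checkpoint's config.
predictions = classifier("path/to/face.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```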