# vit-Facial-Expression-Recognition
This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.5713
- Accuracy: 0.8308
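
The checkpoint can be loaded with the standard `transformers` image-classification pipeline. A minimal inference sketch, assuming the model has been pushed to the Hub (the repo id below is a placeholder, not the confirmed id of this checkpoint):

```python
from PIL import Image
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub id of this checkpoint.
classifier = pipeline(
    "image-classification",
    model="<username>/vit-Facial-Expression-Recognition",
)

image = Image.open("face.jpg")  # any RGB face image or crop
for pred in classifier(image):
    # Each prediction carries an expression label and a confidence score.
    print(f"{pred['label']}: {pred['score']:.3f}")
```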
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
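
For reference, a hedged sketch of how the hyperparameters above map onto `transformers.TrainingArguments`; dataset, model, and metric setup are omitted, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Sketch only: reproduces the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="vit-Facial-Expression-Recognition",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=8,   # 32 * 8 = 256 effective train batch size
    num_train_epochs=20,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```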
### Training results
| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.3894        | 1.2198  | 100  | 0.3854          | 0.8667   |
| 0.3556        | 2.4397  | 200  | 0.3952          | 0.8628   |
| 0.3545        | 3.6595  | 300  | 0.4057          | 0.8604   |
| 0.3243        | 4.8794  | 400  | 0.4035          | 0.8565   |
| 0.3718        | 6.0977  | 500  | 0.4029          | 0.8586   |
| 0.342         | 7.3176  | 600  | 0.4279          | 0.8493   |
| 0.3021        | 8.5374  | 700  | 0.4375          | 0.8449   |
| 0.2795        | 9.7573  | 800  | 0.4542          | 0.8468   |
| 0.2642        | 10.9771 | 900  | 0.4717          | 0.8434   |
| 0.2223        | 12.1954 | 1000 | 0.4991          | 0.8367   |
| 0.1875        | 13.4153 | 1100 | 0.5185          | 0.8346   |
| 0.1481        | 14.6351 | 1200 | 0.5357          | 0.8351   |
| 0.1291        | 15.8550 | 1300 | 0.5409          | 0.8350   |
| 0.1021        | 17.0733 | 1400 | 0.5684          | 0.8292   |
| 0.0851        | 18.2931 | 1500 | 0.5752          | 0.8296   |
| 0.076         | 19.5130 | 1600 | 0.5712          | 0.8308   |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.2.0
- Tokenizers 0.21.0