---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: attraction-classifier
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7955974842767296
---
# attraction-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.4691
- Accuracy: 0.7956
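
The snippet below is a minimal inference sketch using the `transformers` image-classification pipeline. The repo id `your-username/attraction-classifier` and the image path are placeholders, not the actual published location of this checkpoint.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub location of this checkpoint,
# or a local directory containing the fine-tuned weights.
classifier = pipeline("image-classification", model="your-username/attraction-classifier")

# Returns a list of {"label": ..., "score": ...} dicts, highest score first.
predictions = classifier("path/to/image.jpg")
print(predictions)
```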
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 69
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 15
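
For reference, the hyperparameters above map onto `TrainingArguments` roughly as follows. This is a sketch rather than the exact training script; the output directory (and any evaluation/save settings) are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="attraction-classifier",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,       # effective train batch size: 16 * 4 = 64
    num_train_epochs=15,
    lr_scheduler_type="cosine",
    warmup_ratio=0.15,
    seed=69,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```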
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6703        | 0.34  | 15   | 0.6354          | 0.7327   |
| 0.5449        | 0.67  | 30   | 0.5836          | 0.7421   |
| 0.5407        | 1.01  | 45   | 0.5594          | 0.7421   |
| 0.5255        | 1.34  | 60   | 0.5294          | 0.7547   |
| 0.5586        | 1.68  | 75   | 0.5171          | 0.7642   |
| 0.5438        | 2.01  | 90   | 0.5212          | 0.7704   |
| 0.4807        | 2.35  | 105  | 0.5181          | 0.7390   |
| 0.6202        | 2.68  | 120  | 0.4972          | 0.7704   |
| 0.5021        | 3.02  | 135  | 0.4566          | 0.7987   |
| 0.4313        | 3.35  | 150  | 0.4852          | 0.7925   |
| 0.3532        | 3.69  | 165  | 0.4378          | 0.8113   |
| 0.3577        | 4.02  | 180  | 0.4515          | 0.8019   |
| 0.4736        | 4.36  | 195  | 0.4498          | 0.7893   |
| 0.3516        | 4.69  | 210  | 0.4408          | 0.8239   |
| 0.4437        | 5.03  | 225  | 0.4611          | 0.7799   |
| 0.3543        | 5.36  | 240  | 0.4294          | 0.8208   |
| 0.4029        | 5.7   | 255  | 0.4155          | 0.8428   |
| 0.3808        | 6.03  | 270  | 0.4116          | 0.8302   |
| 0.3211        | 6.37  | 285  | 0.4009          | 0.8302   |
| 0.2949        | 6.7   | 300  | 0.4321          | 0.8176   |
| 0.2663        | 7.04  | 315  | 0.4229          | 0.8396   |
| 0.3049        | 7.37  | 330  | 0.4110          | 0.8365   |
| 0.1303        | 7.71  | 345  | 0.4288          | 0.8333   |
| 0.2079        | 8.04  | 360  | 0.4218          | 0.8208   |
| 0.208         | 8.38  | 375  | 0.3908          | 0.8365   |
| 0.2067        | 8.72  | 390  | 0.5191          | 0.7862   |
| 0.1635        | 9.05  | 405  | 0.4691          | 0.7956   |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
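
A comparable environment can be recreated with the pinned installs below; the CUDA 11.7 wheel index for PyTorch is an assumption about how that particular build was installed.

```bash
pip install transformers==4.35.2 datasets==2.15.0 tokenizers==0.15.0
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu117
```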