license: apache-2.0
base_model: microsoft/swinv2-base-patch4-window12-192-22k
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
  - f1
  - precision
  - recall
model-index:
  - name: swinv2-base-patch4-window12-192-22k-ConcreteClassifier-PVT
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6160830090791181
          - name: F1
            type: f1
            value: 0.6252886431691335
          - name: Precision
            type: precision
            value: 0.6429213069076691
          - name: Recall
            type: recall
            value: 0.6390705914040895

swinv2-base-patch4-window12-192-22k-ConcreteClassifier-PVT

This model is a fine-tuned version of microsoft/swinv2-base-patch4-window12-192-22k on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9423
  • Accuracy: 0.6161
  • F1: 0.6253
  • Precision: 0.6429
  • Recall: 0.6391
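For reference, the sketch below shows how multi-class accuracy and macro-averaged F1 are typically computed from per-class counts. The averaging mode behind this card's numbers is not stated, and the class names used here are hypothetical, so this is purely illustrative.

```python
# Illustrative sketch: per-class precision/recall/F1 and macro averaging.
# Class names ("crack", "spall", "plain") are assumptions, not this model's labels.
y_true = ["crack", "crack", "spall", "plain", "plain", "spall"]
y_pred = ["crack", "spall", "spall", "plain", "crack", "spall"]

labels = sorted(set(y_true))
per_class_f1 = []
for c in labels:
    tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
    fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
    fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    per_class_f1.append(f1)

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
macro_f1 = sum(per_class_f1) / len(labels)
print(round(accuracy, 4), round(macro_f1, 4))  # -> 0.6667 0.6556
```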

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 30
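The step counts implied by these settings can be cross-checked with a little arithmetic (illustrative only, not the actual training script; the steps-per-epoch figure is taken from the results table):

```python
# Arithmetic implied by the hyperparameters above.
train_batch_size = 2
gradient_accumulation_steps = 4
num_epochs = 30
warmup_ratio = 0.1
steps_per_epoch = 1927  # optimizer steps per epoch, from the results table

total_train_batch_size = train_batch_size * gradient_accumulation_steps
total_steps = steps_per_epoch * num_epochs
warmup_steps = round(warmup_ratio * total_steps)
print(total_train_batch_size, total_steps, warmup_steps)  # -> 8 57810 5781
```

The total of 57810 optimizer steps matches the final row of the results table below.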

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.2473        | 1.0   | 1927  | 0.9809          | 0.5909   | 0.5910 | 0.6662    | 0.6016 |
| 1.7959        | 2.0   | 3854  | 1.8580          | 0.2685   | 0.1764 | 0.1560    | 0.2748 |
| 1.8437        | 3.0   | 5781  | 1.9243          | 0.1743   | 0.1111 | 0.1517    | 0.1888 |
| 1.803         | 4.0   | 7708  | 1.9003          | 0.1847   | 0.1316 | 0.1461    | 0.1929 |
| 1.8121        | 5.0   | 9635  | 1.7386          | 0.3323   | 0.2413 | 0.2477    | 0.3378 |
| 1.7984        | 6.0   | 11562 | 1.7322          | 0.3014   | 0.2074 | 0.1733    | 0.3123 |
| 1.7385        | 7.0   | 13489 | 1.6964          | 0.3121   | 0.2220 | 0.2482    | 0.3240 |
| 1.578         | 8.0   | 15416 | 1.6265          | 0.2827   | 0.2047 | 0.2361    | 0.2983 |
| 1.7397        | 9.0   | 17343 | 1.4545          | 0.4169   | 0.3801 | 0.3861    | 0.4369 |
| 1.9688        | 10.0  | 19270 | 1.5583          | 0.3761   | 0.3427 | 0.3766    | 0.3992 |
| 1.5298        | 11.0  | 21197 | 1.3465          | 0.4672   | 0.4189 | 0.4143    | 0.4949 |
| 1.7964        | 12.0  | 23124 | 2.1154          | 0.2060   | 0.1505 | 0.2461    | 0.1965 |
| 1.6734        | 13.0  | 25051 | 1.8294          | 0.2672   | 0.2376 | 0.2619    | 0.2809 |
| 1.4376        | 14.0  | 26978 | 1.3830          | 0.4254   | 0.4172 | 0.4554    | 0.4503 |
| 1.3403        | 15.0  | 28905 | 1.2193          | 0.4926   | 0.4428 | 0.5138    | 0.5163 |
| 1.4806        | 16.0  | 30832 | 1.2119          | 0.5061   | 0.4996 | 0.4999    | 0.5224 |
| 1.2526        | 17.0  | 32759 | 1.1265          | 0.5323   | 0.5155 | 0.5564    | 0.5477 |
| 1.3673        | 18.0  | 34686 | 1.0845          | 0.5554   | 0.5594 | 0.5621    | 0.5721 |
| 1.241         | 19.0  | 36613 | 1.1499          | 0.5318   | 0.5281 | 0.5775    | 0.5464 |
| 1.2881        | 20.0  | 38540 | 1.0368          | 0.5733   | 0.5823 | 0.5902    | 0.5939 |
| 1.1817        | 21.0  | 40467 | 1.0927          | 0.5629   | 0.5595 | 0.6241    | 0.5901 |
| 1.0632        | 22.0  | 42394 | 0.9835          | 0.5917   | 0.5978 | 0.6133    | 0.6181 |
| 1.3852        | 23.0  | 44321 | 0.9644          | 0.6109   | 0.6181 | 0.6280    | 0.6298 |
| 0.9998        | 24.0  | 46248 | 1.0270          | 0.5730   | 0.5858 | 0.6038    | 0.5927 |
| 1.2504        | 25.0  | 48175 | 0.9579          | 0.6042   | 0.6089 | 0.6356    | 0.6280 |
| 0.9085        | 26.0  | 50102 | 0.9158          | 0.6257   | 0.6345 | 0.6389    | 0.6479 |
| 1.0098        | 27.0  | 52029 | 0.9567          | 0.5984   | 0.6064 | 0.6229    | 0.6214 |
| 1.0409        | 28.0  | 53956 | 0.9464          | 0.6114   | 0.6212 | 0.6387    | 0.6342 |
| 1.3289        | 29.0  | 55883 | 0.9405          | 0.6176   | 0.6272 | 0.6448    | 0.6401 |
| 0.8946        | 30.0  | 57810 | 0.9423          | 0.6161   | 0.6253 | 0.6429    | 0.6391 |

Framework versions

  • Transformers 4.37.2
  • PyTorch 2.1.0
  • Datasets 2.17.1
  • Tokenizers 0.15.2
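To reproduce this environment, the listed versions can be pinned with pip (assuming the standard PyPI package names; PyTorch is published as `torch`):

```shell
pip install "transformers==4.37.2" "torch==2.1.0" "datasets==2.17.1" "tokenizers==0.15.2"
```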