videomae-base-finetuned-kinetics-0410_final_5sec_org_ab7_val_inside_train_retry

This model is a fine-tuned version of MCG-NJU/videomae-base-finetuned-kinetics on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4180
  • Accuracy: 0.9060
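
For quick verification, the snippet below shows one way to run inference with this checkpoint using the transformers VideoMAE classes. It is a minimal sketch, not part of the original training or evaluation code: the 16 dummy frames stand in for a real sampled 5-second clip.

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

ckpt = "d2o2ji/videomae-base-finetuned-kinetics-0410_final_5sec_org_ab7_val_inside_train_retry"
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# 16 RGB frames of 224x224 pixels; dummy data for illustration only.
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```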

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.05
  • training_steps: 67100
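
For illustration, the hyperparameters above map onto transformers TrainingArguments roughly as sketched below. This is an assumption about how the run was configured, not the original training script; the output_dir value is a placeholder.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="videomae-finetune-output",  # placeholder output directory
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    max_steps=67100,
)
```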

Training results

| Training Loss | Epoch   | Step  | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 0.2778        | 0.0100  | 672   | 0.3076          | 0.8849   |
| 0.2358        | 1.0100  | 1344  | 0.3640          | 0.8912   |
| 0.0279        | 2.0100  | 2016  | 0.6215          | 0.8622   |
| 0.0838        | 3.0100  | 2688  | 0.5896          | 0.8692   |
| 0.0011        | 4.0100  | 3360  | 0.2472          | 0.9366   |
| 0.0007        | 5.0100  | 4032  | 0.7264          | 0.8316   |
| 0.0021        | 6.0100  | 4704  | 0.4523          | 0.8778   |
| 0.469         | 7.0100  | 5376  | 0.2878          | 0.9045   |
| 0.0097        | 8.0100  | 6048  | 0.4613          | 0.8919   |
| 0.001         | 9.0100  | 6720  | 0.3168          | 0.8990   |
| 0.0027        | 10.0100 | 7392  | 0.7703          | 0.8504   |
| 0.005         | 11.0100 | 8064  | 0.6012          | 0.8434   |
| 0.4087        | 12.0100 | 8736  | 0.4830          | 0.8880   |
| 0.0003        | 13.0100 | 9408  | 0.5820          | 0.8739   |
| 0.0114        | 14.0100 | 10080 | 1.1650          | 0.7972   |
| 0.0001        | 15.0100 | 10752 | 0.4314          | 0.9052   |
| 0.0001        | 16.0100 | 11424 | 0.4977          | 0.8927   |
| 0.0001        | 17.0100 | 12096 | 0.4809          | 0.8810   |
| 0.0003        | 18.0100 | 12768 | 0.4098          | 0.9201   |
| 0.0085        | 19.0100 | 13440 | 0.3973          | 0.9217   |

Framework versions

  • Transformers 4.48.0
  • Pytorch 2.5.1+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.0