RALL_NoCrop_Aug16F-8B16F-GACWDlr

This model is a fine-tuned version of MCG-NJU/videomae-base-finetuned-kinetics on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8643
  • Accuracy: 0.8032

Model description

More information needed

Intended uses & limitations

More information needed
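
While detailed usage notes are not yet provided, the snippet below is a minimal inference sketch. It assumes this checkpoint is a standard 16-frame VideoMAE video-classification model loadable through the Transformers auto classes; the dummy 16-frame clip and the 224x224 frame size are illustrative assumptions rather than documented requirements.

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, VideoMAEForVideoClassification

ckpt = "TanAlexanderlz/RALL_NoCrop_Aug16F-8B16F-GACWDlr"

processor = AutoImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)
model.eval()

# Dummy clip: 16 RGB frames of 224x224 (replace with real decoded video frames).
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(model.config.id2label.get(pred, pred))
```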

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a minimal TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 3462
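
The following is a minimal sketch mapping the values above onto transformers.TrainingArguments. The output_dir and the per-epoch evaluation cadence are illustrative assumptions, not values stated in this card.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="RALL_NoCrop_Aug16F-8B16F-GACWDlr",  # assumption: output directory name
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # gives a total train batch size of 16
    seed=42,
    optim="adamw_torch",             # AdamW with default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=3462,
    eval_strategy="epoch",           # assumption: evaluation cadence is not stated in the card
)
```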

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.6327        | 0.0416  | 144  | 0.6327          | 0.6299   |
| 0.4228        | 1.0416  | 288  | 0.5300          | 0.7464   |
| 0.2855        | 2.0416  | 432  | 0.5658          | 0.7648   |
| 0.2789        | 3.0416  | 576  | 0.5733          | 0.7587   |
| 0.237         | 4.0416  | 720  | 0.7180          | 0.7628   |
| 0.1125        | 5.0416  | 864  | 0.7992          | 0.7710   |
| 0.0921        | 6.0416  | 1008 | 0.8145          | 0.7669   |
| 0.1423        | 7.0416  | 1152 | 0.9354          | 0.7648   |
| 0.1307        | 8.0416  | 1296 | 0.9036          | 0.7648   |
| 0.0479        | 9.0416  | 1440 | 1.1271          | 0.7730   |
| 0.0724        | 10.0416 | 1584 | 1.0805          | 0.7669   |
| 0.1424        | 11.0416 | 1728 | 1.0949          | 0.7669   |
| 0.0577        | 12.0416 | 1872 | 1.1183          | 0.7730   |
| 0.1258        | 13.0416 | 2016 | 1.0614          | 0.7914   |
| 0.0271        | 14.0416 | 2160 | 1.1381          | 0.7771   |
| 0.0557        | 15.0416 | 2304 | 1.2154          | 0.7587   |
| 0.054         | 16.0416 | 2448 | 1.1568          | 0.7710   |
| 0.1001        | 17.0416 | 2592 | 1.1639          | 0.7853   |
| 0.0401        | 18.0416 | 2736 | 1.1892          | 0.7812   |

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1