5c_2

This model is a fine-tuned version of MCG-NJU/videomae-large-finetuned-kinetics on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.8264
  • Accuracy: 0.44
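A minimal usage sketch for loading this checkpoint with the Transformers VideoMAE classes is shown below. The random frames are a stand-in for a real clip, and the clip shape (16 RGB frames at 224×224) follows the VideoMAE defaults rather than anything documented in this card; since the training dataset is unknown, the predicted label names are simply whatever id2label mapping was saved with the checkpoint.

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

# Load the processor and the fine-tuned classifier from the Hub.
processor = VideoMAEImageProcessor.from_pretrained("beingbatman/5c_2")
model = VideoMAEForVideoClassification.from_pretrained("beingbatman/5c_2")

# Placeholder clip: 16 RGB frames of 224x224 (replace with frames sampled from a real video).
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```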

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 5
  • eval_batch_size: 5
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 4600
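As a rough reproduction guide, the list above maps onto a TrainingArguments configuration along the following lines; the output directory and the per-epoch evaluation/logging strategies are assumptions inferred from the results table below, not values recorded in this card.

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments setup matching the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="5c_2",                # assumption: not recorded in this card
    learning_rate=1e-5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    seed=42,
    optim="adamw_torch",              # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=4600,
    eval_strategy="epoch",            # assumption: the results table logs one eval per epoch
    logging_strategy="epoch",
)
```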

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3192 | 0.0102 | 47 | 1.3455 | 0.4 |
| 0.8974 | 1.0102 | 94 | 1.4305 | 0.4 |
| 0.9209 | 2.0102 | 141 | 1.5682 | 0.4 |
| 0.7915 | 3.0102 | 188 | 1.6475 | 0.4 |
| 0.9571 | 4.0102 | 235 | 1.4177 | 0.4 |
| 0.9085 | 5.0102 | 282 | 1.4976 | 0.4 |
| 0.8815 | 6.0102 | 329 | 1.6249 | 0.4 |
| 0.9245 | 7.0102 | 376 | 1.7363 | 0.4 |
| 0.9179 | 8.0102 | 423 | 1.4075 | 0.4 |
| 1.0738 | 9.0102 | 470 | 1.9536 | 0.4 |
| 0.5865 | 10.0102 | 517 | 1.6996 | 0.4 |
| 0.4752 | 11.0102 | 564 | 2.1711 | 0.4 |
| 0.7409 | 12.0102 | 611 | 1.7252 | 0.36 |
| 0.7534 | 13.0102 | 658 | 1.9988 | 0.4 |
| 0.6109 | 14.0102 | 705 | 1.7449 | 0.36 |
| 0.4217 | 15.0102 | 752 | 2.9984 | 0.4 |
| 0.8409 | 16.0102 | 799 | 1.6709 | 0.36 |
| 0.6114 | 17.0102 | 846 | 1.9014 | 0.36 |
| 0.6806 | 18.0102 | 893 | 1.8763 | 0.36 |
| 0.5359 | 19.0102 | 940 | 2.1816 | 0.36 |
| 0.5989 | 20.0102 | 987 | 1.7239 | 0.32 |
| 0.4145 | 21.0102 | 1034 | 2.2460 | 0.32 |
| 0.3841 | 22.0102 | 1081 | 2.4509 | 0.36 |
| 0.4356 | 23.0102 | 1128 | 2.1125 | 0.36 |
| 0.2509 | 24.0102 | 1175 | 2.6513 | 0.32 |
| 0.4963 | 25.0102 | 1222 | 2.8019 | 0.4 |
| 0.1915 | 26.0102 | 1269 | 2.4637 | 0.32 |
| 0.1269 | 27.0102 | 1316 | 2.8957 | 0.36 |
| 0.3599 | 28.0102 | 1363 | 2.5853 | 0.36 |
| 0.399 | 29.0102 | 1410 | 3.3633 | 0.4 |
| 0.205 | 30.0102 | 1457 | 3.0276 | 0.32 |
| 0.0945 | 31.0102 | 1504 | 3.3960 | 0.4 |
| 0.3376 | 32.0102 | 1551 | 3.0445 | 0.32 |
| 0.2407 | 33.0102 | 1598 | 2.8461 | 0.32 |
| 0.1653 | 34.0102 | 1645 | 3.1737 | 0.36 |
| 0.187 | 35.0102 | 1692 | 3.5642 | 0.32 |
| 0.2339 | 36.0102 | 1739 | 3.6020 | 0.4 |
| 0.1097 | 37.0102 | 1786 | 3.5631 | 0.4 |
| 0.2859 | 38.0102 | 1833 | 3.6048 | 0.36 |
| 0.0123 | 39.0102 | 1880 | 4.2022 | 0.4 |
| 0.0062 | 40.0102 | 1927 | 4.2564 | 0.36 |
| 0.031 | 41.0102 | 1974 | 4.0465 | 0.4 |
| 0.1045 | 42.0102 | 2021 | 3.5379 | 0.36 |
| 0.0025 | 43.0102 | 2068 | 4.1880 | 0.4 |
| 0.2103 | 44.0102 | 2115 | 4.4486 | 0.32 |
| 0.0035 | 45.0102 | 2162 | 3.7883 | 0.32 |
| 0.0117 | 46.0102 | 2209 | 3.8264 | 0.44 |
| 0.0027 | 47.0102 | 2256 | 4.2371 | 0.32 |
| 0.0174 | 48.0102 | 2303 | 4.0451 | 0.4 |
| 0.0199 | 49.0102 | 2350 | 4.0996 | 0.4 |
| 0.0082 | 50.0102 | 2397 | 4.5682 | 0.36 |
| 0.0186 | 51.0102 | 2444 | 4.0036 | 0.36 |
| 0.1483 | 52.0102 | 2491 | 3.8019 | 0.36 |
| 0.1276 | 53.0102 | 2538 | 3.9253 | 0.4 |
| 0.0601 | 54.0102 | 2585 | 4.5047 | 0.4 |
| 0.0027 | 55.0102 | 2632 | 4.5747 | 0.36 |
| 0.0055 | 56.0102 | 2679 | 4.2363 | 0.32 |
| 0.0338 | 57.0102 | 2726 | 4.3328 | 0.36 |
| 0.0005 | 58.0102 | 2773 | 4.5897 | 0.36 |
| 0.0489 | 59.0102 | 2820 | 4.7412 | 0.32 |
| 0.11 | 60.0102 | 2867 | 4.7991 | 0.36 |
| 0.0006 | 61.0102 | 2914 | 4.8250 | 0.32 |
| 0.0008 | 62.0102 | 2961 | 4.7567 | 0.32 |
| 0.0004 | 63.0102 | 3008 | 4.4867 | 0.36 |
| 0.0877 | 64.0102 | 3055 | 4.8180 | 0.36 |
| 0.0009 | 65.0102 | 3102 | 4.3209 | 0.4 |
| 0.0004 | 66.0102 | 3149 | 4.3730 | 0.36 |
| 0.0005 | 67.0102 | 3196 | 4.0573 | 0.44 |
| 0.0288 | 68.0102 | 3243 | 3.7278 | 0.44 |
| 0.0014 | 69.0102 | 3290 | 4.9681 | 0.36 |
| 0.0002 | 70.0102 | 3337 | 4.8522 | 0.4 |
| 0.0009 | 71.0102 | 3384 | 4.9470 | 0.4 |
| 0.0004 | 72.0102 | 3431 | 4.8706 | 0.36 |
| 0.0016 | 73.0102 | 3478 | 4.8785 | 0.32 |
| 0.0003 | 74.0102 | 3525 | 4.9980 | 0.36 |
| 0.0003 | 75.0102 | 3572 | 4.7280 | 0.36 |
| 0.0003 | 76.0102 | 3619 | 5.0809 | 0.36 |
| 0.0005 | 77.0102 | 3666 | 4.8118 | 0.4 |
| 0.0003 | 78.0102 | 3713 | 4.7439 | 0.4 |
| 0.0003 | 79.0102 | 3760 | 4.9703 | 0.4 |
| 0.0004 | 80.0102 | 3807 | 4.5657 | 0.4 |
| 0.0004 | 81.0102 | 3854 | 4.5084 | 0.44 |
| 0.1261 | 82.0102 | 3901 | 4.8884 | 0.44 |
| 0.0002 | 83.0102 | 3948 | 4.8646 | 0.4 |
| 0.0002 | 84.0102 | 3995 | 4.8225 | 0.4 |
| 0.0003 | 85.0102 | 4042 | 4.7205 | 0.4 |
| 0.0008 | 86.0102 | 4089 | 4.7888 | 0.44 |
| 0.0004 | 87.0102 | 4136 | 4.8506 | 0.44 |
| 0.0004 | 88.0102 | 4183 | 4.8165 | 0.44 |
| 0.0006 | 89.0102 | 4230 | 4.6865 | 0.44 |
| 0.0002 | 90.0102 | 4277 | 4.6192 | 0.4 |
| 0.0002 | 91.0102 | 4324 | 4.6489 | 0.4 |
| 0.0005 | 92.0102 | 4371 | 4.7074 | 0.4 |
| 0.0002 | 93.0102 | 4418 | 4.6926 | 0.4 |
| 0.0012 | 94.0102 | 4465 | 4.7289 | 0.36 |
| 0.0002 | 95.0102 | 4512 | 4.7505 | 0.36 |
| 0.0002 | 96.0102 | 4559 | 4.7498 | 0.36 |
| 0.0002 | 97.0089 | 4600 | 4.7523 | 0.36 |
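The accuracy column above is the kind of metric typically supplied to the Trainer through a compute_metrics callback; a minimal sketch using the evaluate library is shown below as an assumption, since the card does not describe the actual evaluation code.

```python
import numpy as np
import evaluate

# Assumed accuracy metric, wired into Trainer(compute_metrics=...).
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair provided by the Trainer at evaluation time.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```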

Framework versions

  • Transformers 4.46.2
  • Pytorch 2.0.1+cu117
  • Datasets 3.0.1
  • Tokenizers 0.20.0