---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-kinetics-allkisa-crop-background-0228-clip_duration
  results: []
---

# videomae-base-finetuned-kinetics-allkisa-crop-background-0228-clip_duration

This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3168
- Accuracy: 0.9393

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 26400

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.0014        | 0.0100  | 265  | 0.5277          | 0.7871   |
| 0.1062        | 1.0100  | 530  | 0.4787          | 0.8080   |
| 0.0267        | 2.0100  | 795  | 0.5039          | 0.8429   |
| 0.0023        | 3.0100  | 1060 | 1.0419          | 0.7033   |
| 0.0003        | 4.0100  | 1325 | 0.5443          | 0.8115   |
| 0.0002        | 5.0100  | 1590 | 1.1036          | 0.7644   |
| 0.0006        | 6.0100  | 1855 | 0.4425          | 0.8551   |
| 3.5553        | 7.0100  | 2120 | 0.4898          | 0.8586   |
| 0.0002        | 8.0100  | 2385 | 0.5075          | 0.8482   |
| 0.0           | 9.0100  | 2650 | 0.5837          | 0.8674   |
| 0.0001        | 10.0100 | 2915 | 0.9025          | 0.8255   |
| 0.0001        | 11.0100 | 3180 | 0.6071          | 0.8691   |
| 0.0           | 12.0100 | 3445 | 0.6740          | 0.8639   |
| 0.0           | 13.0100 | 3710 | 0.9123          | 0.8272   |
| 0.0           | 14.0100 | 3975 | 0.7934          | 0.8517   |
| 0.0           | 15.0100 | 4240 | 0.6899          | 0.8674   |
| 0.0           | 16.0100 | 4505 | 0.7532          | 0.8709   |
| 0.0           | 17.0100 | 4770 | 0.7553          | 0.8691   |
| 0.0002        | 18.0100 | 5035 | 0.7433          | 0.8691   |
| 0.0           | 19.0100 | 5300 | 0.6715          | 0.8813   |
| 0.0           | 20.0100 | 5565 | 0.7857          | 0.8656   |
| 0.0           | 21.0100 | 5830 | 0.6759          | 0.8918   |
| 0.0           | 22.0100 | 6095 | 0.6435          | 0.8901   |
| 0.0           | 23.0100 | 6360 | 0.6853          | 0.8866   |
| 0.0           | 24.0100 | 6625 | 0.6530          | 0.8831   |
| 0.0038        | 25.0100 | 6890 | 0.7277          | 0.8743   |
| 0.0001        | 26.0100 | 7155 | 0.7137          | 0.8778   |
| 0.0           | 27.0100 | 7420 | 0.7081          | 0.8796   |
| 0.0           | 28.0100 | 7685 | 0.6977          | 0.8778   |
| 0.0           | 29.0100 | 7950 | 0.7008          | 0.8831   |
| 0.0004        | 30.0100 | 8215 | 0.7023          | 0.8813   |
| 0.0           | 31.0100 | 8480 | 0.7248          | 0.8796   |

### Framework versions

- Transformers 4.48.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
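
### Inference sketch

The card does not document an inference recipe. As a hedged sketch: VideoMAE checkpoints consume a fixed-length window of frames (16 for the `MCG-NJU/videomae-base-finetuned-kinetics` base model), so a clip must first be reduced to that many frame indices before preprocessing. The helper below (its name and the uniform-sampling strategy are illustrative assumptions, not taken from this card) shows one common way to pick those indices:

```python
import numpy as np

def sample_frame_indices(num_frames: int, total_frames: int) -> np.ndarray:
    """Pick `num_frames` frame indices spread uniformly over a clip of
    `total_frames` frames. Uniform sampling is an assumption here; the
    card does not state how clips were sampled during training."""
    if total_frames < num_frames:
        # Clip is too short: keep every frame and pad by repeating the last index.
        idx = np.arange(total_frames)
        pad = np.full(num_frames - total_frames, total_frames - 1)
        return np.concatenate([idx, pad])
    # Evenly spaced indices from the first frame to the last.
    return np.linspace(0, total_frames - 1, num=num_frames).astype(np.int64)

# e.g. 16 frames from a 120-frame clip
indices = sample_frame_indices(16, 120)
```

The frames at those indices would then typically be run through `VideoMAEImageProcessor` and `VideoMAEForVideoClassification` from `transformers` (loading this repository's checkpoint) to obtain per-class logits.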