---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ALL_RGBCROP_ori16F-8B16F-GACWD5lrDO
  results: []
---

# ALL_RGBCROP_ori16F-8B16F-GACWD5lrDO

This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4092
- Accuracy: 0.8144

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` reconstruction appears after the framework versions below):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1920

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.694         | 0.025  | 48   | 0.7127          | 0.4939   |
| 0.6395        | 1.025  | 96   | 0.6772          | 0.5976   |
| 0.4871        | 2.025  | 144  | 0.6142          | 0.6707   |
| 0.4162        | 3.025  | 192  | 0.5418          | 0.7256   |
| 0.2336        | 4.025  | 240  | 0.5030          | 0.7622   |
| 0.159         | 5.025  | 288  | 0.5045          | 0.7927   |
| 0.1486        | 6.025  | 336  | 0.5186          | 0.7805   |
| 0.0997        | 7.025  | 384  | 0.5649          | 0.7866   |
| 0.07          | 8.025  | 432  | 0.6180          | 0.7805   |
| 0.0377        | 9.025  | 480  | 0.6364          | 0.7927   |
| 0.0103        | 10.025 | 528  | 0.7102          | 0.7866   |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
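
As referenced in the hyperparameters list above, the sketch below shows one plausible way those settings map onto `TrainingArguments`. It is a hedged reconstruction, not the author's script: the number of labels, datasets, collator, and the evaluation/save strategies are assumptions that are not documented in this card.

```python
# Hedged reconstruction of the training setup from the hyperparameters listed above.
# Everything marked "assumption" is a placeholder, not the author's documented choice.
from transformers import Trainer, TrainingArguments, VideoMAEForVideoClassification

model = VideoMAEForVideoClassification.from_pretrained(
    "MCG-NJU/videomae-base-finetuned-kinetics",
    num_labels=2,                  # assumption: two classes (initial loss ~0.69, accuracy ~0.49)
    ignore_mismatched_sizes=True,  # replace the 400-way Kinetics classification head
)

args = TrainingArguments(
    output_dir="ALL_RGBCROP_ori16F-8B16F-GACWD5lrDO",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size 16
    seed=42,
    optim="adamw_torch",             # AdamW defaults: betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=1920,
    eval_strategy="epoch",           # assumption: matches the ~48-step eval interval in the table
    save_strategy="epoch",           # assumption
    load_best_model_at_end=True,     # assumption
    metric_for_best_model="accuracy",
    remove_unused_columns=False,     # typical for video datasets with custom collators
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=val_ds,  # datasets not documented here
#                   compute_metrics=compute_metrics)
# trainer.train()
```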
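
The card does not document a usage recipe. Below is a minimal, assumed inference sketch based on the standard VideoMAE API in Transformers; the repo id is a placeholder, and the 16-frame clip length and 224x224 resolution are assumptions (the name `ori16F` suggests 16-frame inputs, matching the base model).

```python
# Minimal inference sketch. The repo id is a placeholder (prepend the actual
# namespace), and the random frames stand in for a real decoded video clip.
import numpy as np
import torch
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor

repo_id = "ALL_RGBCROP_ori16F-8B16F-GACWD5lrDO"  # placeholder repo id
# If the checkpoint does not ship a preprocessor config, the base model's can be used:
processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
model = VideoMAEForVideoClassification.from_pretrained(repo_id)

# A clip is a list of 16 RGB frames (H x W x 3, uint8); replace with real frames.
clip = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]

inputs = processor(clip, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```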