---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-yt_short_classification
  results: []
---

# videomae-base-finetuned-yt_short_classification

This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4704
- Accuracy: 0.7815
- 0 Precision: 0.7484
- 0 Recall: 0.8149
- 0 F1-score: 0.7803
- 0 Support: 6322.0
- 1 Precision: 0.8170
- 1 Recall: 0.7510
- 1 F1-score: 0.7827
- 1 Support: 6957.0
- Accuracy F1-score: 0.7815
- Macro avg Precision: 0.7827
- Macro avg Recall: 0.7830
- Macro avg F1-score: 0.7815
- Macro avg Support: 13279.0
- Weighted avg Precision: 0.7844
- Weighted avg Recall: 0.7815
- Weighted avg F1-score: 0.7815
- Weighted avg Support: 13279.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2060

A `TrainingArguments` sketch mirroring these values is given at the end of this card.

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | Accuracy F1-score | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.6282 | 0.2005 | 413 | 0.6101 | 0.6848 | 0.7561 | 0.4991 | 0.6012 | 6322.0 | 0.6522 | 0.8537 | 0.7395 | 6957.0 | 0.6848 | 0.7041 | 0.6764 | 0.6704 | 13279.0 | 0.7016 | 0.6848 | 0.6737 | 13279.0 |
| 0.6569 | 1.2005 | 826 | 0.5357 | 0.7290 | 0.7392 | 0.6655 | 0.7004 | 6322.0 | 0.7213 | 0.7867 | 0.7526 | 6957.0 | 0.7290 | 0.7303 | 0.7261 | 0.7265 | 13279.0 | 0.7298 | 0.7290 | 0.7277 | 13279.0 |
| 0.5064 | 2.2005 | 1239 | 0.4839 | 0.7687 | 0.7517 | 0.7680 | 0.7597 | 6322.0 | 0.7849 | 0.7694 | 0.7771 | 6957.0 | 0.7687 | 0.7683 | 0.7687 | 0.7684 | 13279.0 | 0.7691 | 0.7687 | 0.7688 | 13279.0 |
| 0.4293 | 3.2005 | 1652 | 0.5120 | 0.7518 | 0.6850 | 0.8861 | 0.7727 | 6322.0 | 0.8589 | 0.6297 | 0.7267 | 6957.0 | 0.7518 | 0.7719 | 0.7579 | 0.7497 | 13279.0 | 0.7761 | 0.7518 | 0.7486 | 13279.0 |
| 0.421 | 4.1981 | 2060 | 0.4704 | 0.7815 | 0.7484 | 0.8149 | 0.7803 | 6322.0 | 0.8170 | 0.7510 | 0.7827 | 6957.0 | 0.7815 | 0.7827 | 0.7830 | 0.7815 | 13279.0 | 0.7844 | 0.7815 | 0.7815 | 13279.0 |

### Framework versions

- Transformers 4.46.3
- Pytorch 2.0.0+cu117
- Datasets 3.1.0
- Tokenizers 0.20.3
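
### Training configuration sketch

The hyperparameters listed above map onto `transformers.TrainingArguments` roughly as shown below. This is a minimal, illustrative sketch: the output directory and evaluation strategy are assumptions, and the dataset, image processor, model, and `compute_metrics` function are not documented in this card and are therefore omitted.

```python
from transformers import TrainingArguments

# Illustrative only: output_dir and eval_strategy are assumptions;
# the dataset, model, and metric functions are not part of this card.
training_args = TrainingArguments(
    output_dir="videomae-base-finetuned-yt_short_classification",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",      # AdamW with betas=(0.9, 0.999) and epsilon=1e-08
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=2060,
    eval_strategy="epoch",    # assumption: the results table shows one evaluation per epoch
)
```

These arguments would then be passed to a `Trainer` together with the model, the training and evaluation datasets, and a `compute_metrics` function producing the accuracy and per-class precision/recall/F1 values reported in the results table.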