---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-yt_short_classification
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# videomae-base-finetuned-yt_short_classification

This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4704
- Accuracy: 0.7815
- Class 0 Precision: 0.7484
- Class 0 Recall: 0.8149
- Class 0 F1-score: 0.7803
- Class 0 Support: 6322.0
- Class 1 Precision: 0.8170
- Class 1 Recall: 0.7510
- Class 1 F1-score: 0.7827
- Class 1 Support: 6957.0
- Accuracy F1-score: 0.7815
- Macro avg Precision: 0.7827
- Macro avg Recall: 0.7830
- Macro avg F1-score: 0.7815
- Macro avg Support: 13279.0
- Weighted avg Precision: 0.7844
- Weighted avg Recall: 0.7815
- Weighted avg F1-score: 0.7815
- Weighted avg Support: 13279.0

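As a quick illustration of how a checkpoint like this can be loaded for inference, the sketch below uses the stock `transformers` video-classification classes. It is only a sketch: the repository id is assumed from the model name above, and the random frames stand in for a real decoded clip.

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, VideoMAEForVideoClassification

# Assumed repository id; replace with the actual Hub path of this checkpoint.
repo_id = "videomae-base-finetuned-yt_short_classification"

processor = AutoImageProcessor.from_pretrained(repo_id)
model = VideoMAEForVideoClassification.from_pretrained(repo_id)
model.eval()

# VideoMAE classifies a fixed-length clip of sampled frames (16 by default).
# Random uint8 frames are used here purely as stand-ins for decoded video.
num_frames = model.config.num_frames
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(num_frames)]

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(predicted_class, model.config.id2label.get(predicted_class, str(predicted_class)))
```
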
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough `TrainingArguments` reconstruction follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2060

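The list above maps roughly onto the `TrainingArguments` configuration sketched below. This is an assumption inferred from the card, not the actual training script; the dataset objects are placeholders, since the training data is not documented here.

```python
from transformers import Trainer, TrainingArguments, VideoMAEForVideoClassification

# Base checkpoint named in this card, adapted to a two-class head
# (labels 0 and 1, matching the per-class metrics reported above).
model = VideoMAEForVideoClassification.from_pretrained(
    "MCG-NJU/videomae-base", num_labels=2
)

# Assumed reconstruction of the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="videomae-base-finetuned-yt_short_classification",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",          # AdamW with betas=(0.9, 0.999) and eps=1e-8
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=2060,
)

# train_dataset and eval_dataset stand in for the undocumented video dataset:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```
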
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Class 0 Precision | Class 0 Recall | Class 0 F1-score | Class 0 Support | Class 1 Precision | Class 1 Recall | Class 1 F1-score | Class 1 Support | Accuracy F1-score | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.6282 | 0.2005 | 413 | 0.6101 | 0.6848 | 0.7561 | 0.4991 | 0.6012 | 6322.0 | 0.6522 | 0.8537 | 0.7395 | 6957.0 | 0.6848 | 0.7041 | 0.6764 | 0.6704 | 13279.0 | 0.7016 | 0.6848 | 0.6737 | 13279.0 |
| 0.6569 | 1.2005 | 826 | 0.5357 | 0.7290 | 0.7392 | 0.6655 | 0.7004 | 6322.0 | 0.7213 | 0.7867 | 0.7526 | 6957.0 | 0.7290 | 0.7303 | 0.7261 | 0.7265 | 13279.0 | 0.7298 | 0.7290 | 0.7277 | 13279.0 |
| 0.5064 | 2.2005 | 1239 | 0.4839 | 0.7687 | 0.7517 | 0.7680 | 0.7597 | 6322.0 | 0.7849 | 0.7694 | 0.7771 | 6957.0 | 0.7687 | 0.7683 | 0.7687 | 0.7684 | 13279.0 | 0.7691 | 0.7687 | 0.7688 | 13279.0 |
| 0.4293 | 3.2005 | 1652 | 0.5120 | 0.7518 | 0.6850 | 0.8861 | 0.7727 | 6322.0 | 0.8589 | 0.6297 | 0.7267 | 6957.0 | 0.7518 | 0.7719 | 0.7579 | 0.7497 | 13279.0 | 0.7761 | 0.7518 | 0.7486 | 13279.0 |
| 0.421 | 4.1981 | 2060 | 0.4704 | 0.7815 | 0.7484 | 0.8149 | 0.7803 | 6322.0 | 0.8170 | 0.7510 | 0.7827 | 6957.0 | 0.7815 | 0.7827 | 0.7830 | 0.7815 | 13279.0 | 0.7844 | 0.7815 | 0.7815 | 13279.0 |

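The per-class, macro-average, and weighted-average columns follow the layout of scikit-learn's `classification_report`, flattened into the Trainer's metric dict. The helper actually used for this run is not documented, so the `compute_metrics` sketch below is only an assumed illustration of how fields of this shape can be produced.

```python
import numpy as np
from sklearn.metrics import classification_report

def compute_metrics(eval_pred):
    """Flatten a classification report into per-class / macro / weighted fields."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    report = classification_report(labels, preds, output_dict=True, zero_division=0)

    metrics = {"accuracy": report["accuracy"]}
    for key, values in report.items():
        if isinstance(values, dict):  # class labels, "macro avg", "weighted avg"
            for name, value in values.items():
                metrics[f"{key} {name}"] = value
    return metrics
```

Passed to the `Trainer` via `compute_metrics=compute_metrics`, a helper like this yields keys such as `0 precision` and `macro avg f1-score`, which correspond to the class-wise and averaged columns above.
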

### Framework versions

- Transformers 4.46.3
- Pytorch 2.0.0+cu117
- Datasets 3.1.0
- Tokenizers 0.20.3