Model save

- README.md +88 -0
- model.safetensors +1 -1

README.md
ADDED
@@ -0,0 +1,88 @@
---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-yt_short_classification-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# videomae-base-finetuned-yt_short_classification-2

This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4733
- Accuracy: 0.7818
- 0 Precision: 0.7333
- 0 Recall: 0.8515
- 0 F1-score: 0.7880
- 0 Support: 6322.0
- 1 Precision: 0.8419
- 1 Recall: 0.7186
- 1 F1-score: 0.7753
- 1 Support: 6957.0
- Accuracy F1-score: 0.7818
- Macro avg Precision: 0.7876
- Macro avg Recall: 0.7850
- Macro avg F1-score: 0.7817
- Macro avg Support: 13279.0
- Weighted avg Precision: 0.7902
- Weighted avg Recall: 0.7818
- Weighted avg F1-score: 0.7814
- Weighted avg Support: 13279.0

## Model description

More information needed
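
Since the card does not yet include usage instructions, here is a minimal inference sketch with the Transformers library. The hub model id below is an assumption (substitute the actual namespace this checkpoint is published under), and the dummy 16-frame clip merely stands in for real decoded video frames:

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

# Hypothetical hub path -- replace with the real "namespace/repo" of this checkpoint.
model_id = "videomae-base-finetuned-yt_short_classification-2"

processor = VideoMAEImageProcessor.from_pretrained(model_id)
model = VideoMAEForVideoClassification.from_pretrained(model_id)

# Dummy clip: 16 RGB frames at 224x224 (VideoMAE's default input length and resolution).
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```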
## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4120
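
The hyperparameters above map onto a standard Transformers `Trainer` setup. A sketch of the corresponding `TrainingArguments`, assuming `train_batch_size` refers to the per-device batch size (the `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="videomae-base-finetuned-yt_short_classification-2",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",          # betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=4120,
)
```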
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | Accuracy F1-score | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|
| 0.6122 | 0.1002 | 413 | 0.7143 | 0.6551 | 0.5925 | 0.8828 | 0.7091 | 6322.0 | 0.8080 | 0.4482 | 0.5766 | 6957.0 | 0.6551 | 0.7002 | 0.6655 | 0.6428 | 13279.0 | 0.7054 | 0.6551 | 0.6396 | 13279.0 |
| 0.6904 | 1.1002 | 826 | 0.5800 | 0.6909 | 0.8170 | 0.4519 | 0.5819 | 6322.0 | 0.6458 | 0.9080 | 0.7548 | 6957.0 | 0.6909 | 0.7314 | 0.6800 | 0.6683 | 13279.0 | 0.7273 | 0.6909 | 0.6725 | 13279.0 |
| 0.5489 | 2.1002 | 1239 | 0.5122 | 0.7555 | 0.7450 | 0.7395 | 0.7422 | 6322.0 | 0.7648 | 0.7700 | 0.7674 | 6957.0 | 0.7555 | 0.7549 | 0.7547 | 0.7548 | 13279.0 | 0.7554 | 0.7555 | 0.7554 | 13279.0 |
| 0.4979 | 3.1002 | 1652 | 0.5434 | 0.7443 | 0.6785 | 0.8798 | 0.7662 | 6322.0 | 0.8505 | 0.6212 | 0.7180 | 6957.0 | 0.7443 | 0.7645 | 0.7505 | 0.7421 | 13279.0 | 0.7686 | 0.7443 | 0.7409 | 13279.0 |
| 0.5141 | 4.1002 | 2065 | 0.4793 | 0.7723 | 0.7482 | 0.7866 | 0.7669 | 6322.0 | 0.7966 | 0.7594 | 0.7775 | 6957.0 | 0.7723 | 0.7724 | 0.7730 | 0.7722 | 13279.0 | 0.7735 | 0.7723 | 0.7725 | 13279.0 |
| 0.4472 | 5.1002 | 2478 | 0.4673 | 0.7798 | 0.7398 | 0.8290 | 0.7819 | 6322.0 | 0.8255 | 0.7351 | 0.7777 | 6957.0 | 0.7798 | 0.7827 | 0.7820 | 0.7798 | 13279.0 | 0.7847 | 0.7798 | 0.7797 | 13279.0 |
| 0.4108 | 6.1002 | 2891 | 0.4491 | 0.7952 | 0.7715 | 0.8096 | 0.7901 | 6322.0 | 0.8188 | 0.7821 | 0.8000 | 6957.0 | 0.7952 | 0.7951 | 0.7958 | 0.7950 | 13279.0 | 0.7963 | 0.7952 | 0.7953 | 13279.0 |
| 0.3756 | 7.1002 | 3304 | 0.4955 | 0.7773 | 0.7472 | 0.8045 | 0.7748 | 6322.0 | 0.8090 | 0.7526 | 0.7798 | 6957.0 | 0.7773 | 0.7781 | 0.7786 | 0.7773 | 13279.0 | 0.7796 | 0.7773 | 0.7774 | 13279.0 |
| 0.3147 | 8.1002 | 3717 | 0.5889 | 0.7318 | 0.6523 | 0.9348 | 0.7684 | 6322.0 | 0.9023 | 0.5472 | 0.6813 | 6957.0 | 0.7318 | 0.7773 | 0.7410 | 0.7249 | 13279.0 | 0.7833 | 0.7318 | 0.7228 | 13279.0 |
| 0.4019 | 9.0978 | 4120 | 0.4733 | 0.7818 | 0.7333 | 0.8515 | 0.7880 | 6322.0 | 0.8419 | 0.7186 | 0.7753 | 6957.0 | 0.7818 | 0.7876 | 0.7850 | 0.7817 | 13279.0 | 0.7902 | 0.7818 | 0.7814 | 13279.0 |

### Framework versions

- Transformers 4.46.3
- Pytorch 2.0.0+cu117
- Datasets 3.1.0
- Tokenizers 0.20.3
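
To check that a local environment matches the versions above before loading the checkpoint, a small sketch (package names as they appear on PyPI; the CUDA build tag is ignored in the comparison):

```python
from importlib.metadata import version

# Versions this card reports.
expected = {
    "transformers": "4.46.3",
    "torch": "2.0.0+cu117",
    "datasets": "3.1.0",
    "tokenizers": "0.20.3",
}
for pkg, want in expected.items():
    have = version(pkg)
    ok = have.startswith(want.split("+")[0])
    status = "OK" if ok else "expected " + want
    print(f"{pkg}: {have} ({status})")
```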

model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:a1d14fd489c32d7bbc4dda237f570d8652c86d682ff30a9c08d1e4aab5f60762
 size 344937368
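
The updated LFS pointer records the new weights' SHA-256 and byte size. A sketch for verifying a locally downloaded copy against it (the local file path is an assumption):

```python
import hashlib
import os

# Values taken from the Git LFS pointer in this commit.
EXPECTED_SHA256 = "a1d14fd489c32d7bbc4dda237f570d8652c86d682ff30a9c08d1e4aab5f60762"
EXPECTED_SIZE = 344937368

path = "model.safetensors"  # assumed local download location

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

assert sha.hexdigest() == EXPECTED_SHA256, "checksum mismatch"
print("model.safetensors matches the LFS pointer")
```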