# safety-utcustom-train-SF30-RGB-b5
This model is a fine-tuned version of nvidia/mit-b5 on the sam1120/safety-utcustom-TRAIN-30 dataset. It achieves the following results on the evaluation set:
- Accuracy Safe: 0.8299
- Accuracy Unlabeled: nan
- Accuracy Unsafe: 0.9036
- Iou Safe: 0.3480
- Iou Unlabeled: 0.0
- Iou Unsafe: 0.8996
- Loss: 0.5783
- Mean Accuracy: 0.8668
- Mean Iou: 0.4158
- Overall Accuracy: 0.9013
## Model description
More information needed
## Intended uses & limitations
More information needed
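Pending a fuller description, here is a minimal inference sketch using the standard `transformers` SegFormer classes. The repository id below is an assumption inferred from the model name and the dataset owner (`sam1120`); adjust it to the actual hub id, and replace `frame.png` with your own image.

```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

# Assumed hub repo id -- verify against the actual model page.
MODEL_ID = "sam1120/safety-utcustom-train-SF30-RGB-b5"

def predict_mask(image: Image.Image, model_id: str = MODEL_ID) -> torch.Tensor:
    """Return an (H, W) tensor of per-pixel class indices (safe/unsafe/unlabeled)."""
    processor = SegformerImageProcessor.from_pretrained(model_id)
    model = SegformerForSemanticSegmentation.from_pretrained(model_id).eval()
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)
    # SegFormer predicts at 1/4 resolution; upsample before taking the argmax.
    upsampled = torch.nn.functional.interpolate(
        logits, size=image.size[::-1], mode="bilinear", align_corners=False
    )
    return upsampled.argmax(dim=1)[0]

# Usage (requires network access to download the checkpoint):
# mask = predict_mask(Image.open("frame.png").convert("RGB"))
```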
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
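The results table below implies 2 optimizer steps per epoch (step 10 at epoch 5), i.e. 240 steps over 120 epochs. The hyperparameters above can be reproduced in plain PyTorch roughly as follows; `build_schedule` is a hypothetical helper that approximates the linear-warmup/linear-decay schedule `transformers` applies with `lr_scheduler_type: linear` and a 0.05 warmup ratio.

```python
import torch

def build_schedule(params, total_steps=240, warmup_ratio=0.05, lr=2e-6):
    # Adam with the betas/epsilon listed above.
    optimizer = torch.optim.Adam(params, lr=lr, betas=(0.9, 0.999), eps=1e-8)
    warmup_steps = int(total_steps * warmup_ratio)  # 12 steps here

    def lr_lambda(step):
        # Linear warmup to the peak lr, then linear decay to zero.
        if step < warmup_steps:
            return step / max(1, warmup_steps)
        return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```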
### Training results

| Training Loss | Epoch | Step | Accuracy Safe | Accuracy Unlabeled | Accuracy Unsafe | Iou Safe | Iou Unlabeled | Iou Unsafe | Validation Loss | Mean Accuracy | Mean Iou | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1.0614 | 5.0 | 10 | 0.1904 | nan | 0.5439 | 0.0682 | 0.0 | 0.5350 | 1.0385 | 0.3672 | 0.2011 | 0.5327 |
1.0269 | 10.0 | 20 | 0.4801 | nan | 0.5773 | 0.1795 | 0.0 | 0.5719 | 0.9975 | 0.5287 | 0.2505 | 0.5742 |
1.0005 | 15.0 | 30 | 0.6270 | nan | 0.6316 | 0.2261 | 0.0 | 0.6269 | 0.9428 | 0.6293 | 0.2843 | 0.6315 |
0.9716 | 20.0 | 40 | 0.6870 | nan | 0.6802 | 0.2529 | 0.0 | 0.6756 | 0.8918 | 0.6836 | 0.3095 | 0.6804 |
0.9255 | 25.0 | 50 | 0.7339 | nan | 0.7081 | 0.2805 | 0.0 | 0.7037 | 0.8542 | 0.7210 | 0.3281 | 0.7089 |
0.9256 | 30.0 | 60 | 0.7705 | nan | 0.7229 | 0.2781 | 0.0 | 0.7189 | 0.8330 | 0.7467 | 0.3324 | 0.7244 |
0.8167 | 35.0 | 70 | 0.7622 | nan | 0.7349 | 0.3004 | 0.0 | 0.7311 | 0.8114 | 0.7485 | 0.3438 | 0.7358 |
0.7927 | 40.0 | 80 | 0.7776 | nan | 0.7594 | 0.3154 | 0.0 | 0.7559 | 0.7793 | 0.7685 | 0.3571 | 0.7600 |
0.8227 | 45.0 | 90 | 0.8020 | nan | 0.7821 | 0.3152 | 0.0 | 0.7789 | 0.7574 | 0.7920 | 0.3647 | 0.7827 |
0.81 | 50.0 | 100 | 0.8114 | nan | 0.7983 | 0.3140 | 0.0 | 0.7955 | 0.7370 | 0.8049 | 0.3698 | 0.7987 |
0.7198 | 55.0 | 110 | 0.8002 | nan | 0.8194 | 0.3303 | 0.0 | 0.8162 | 0.7118 | 0.8098 | 0.3822 | 0.8188 |
0.7523 | 60.0 | 120 | 0.7877 | nan | 0.8482 | 0.3457 | 0.0 | 0.8443 | 0.6832 | 0.8179 | 0.3967 | 0.8462 |
0.7239 | 65.0 | 130 | 0.8112 | nan | 0.8485 | 0.3197 | 0.0 | 0.8453 | 0.6745 | 0.8298 | 0.3883 | 0.8473 |
0.6235 | 70.0 | 140 | 0.7906 | nan | 0.8686 | 0.3507 | 0.0 | 0.8649 | 0.6419 | 0.8296 | 0.4052 | 0.8662 |
0.6887 | 75.0 | 150 | 0.7951 | nan | 0.8758 | 0.3568 | 0.0 | 0.8720 | 0.6302 | 0.8354 | 0.4096 | 0.8732 |
0.6079 | 80.0 | 160 | 0.8069 | nan | 0.8879 | 0.3561 | 0.0 | 0.8841 | 0.6120 | 0.8474 | 0.4134 | 0.8853 |
0.6022 | 85.0 | 170 | 0.8126 | nan | 0.9062 | 0.3699 | 0.0 | 0.9020 | 0.5849 | 0.8594 | 0.4240 | 0.9032 |
0.5748 | 90.0 | 180 | 0.8053 | nan | 0.9047 | 0.3793 | 0.0 | 0.9005 | 0.5802 | 0.8550 | 0.4266 | 0.9016 |
0.6228 | 95.0 | 190 | 0.8164 | nan | 0.9050 | 0.3624 | 0.0 | 0.9007 | 0.5793 | 0.8607 | 0.4210 | 0.9022 |
0.5332 | 100.0 | 200 | 0.8214 | nan | 0.9134 | 0.3623 | 0.0 | 0.9091 | 0.5616 | 0.8674 | 0.4238 | 0.9105 |
0.6655 | 105.0 | 210 | 0.8262 | nan | 0.9072 | 0.3572 | 0.0 | 0.9031 | 0.5688 | 0.8667 | 0.4201 | 0.9046 |
0.5835 | 110.0 | 220 | 0.8233 | nan | 0.9092 | 0.3599 | 0.0 | 0.9050 | 0.5653 | 0.8662 | 0.4216 | 0.9064 |
0.5764 | 115.0 | 230 | 0.8099 | nan | 0.9165 | 0.3783 | 0.0 | 0.9120 | 0.5460 | 0.8632 | 0.4301 | 0.9131 |
0.5621 | 120.0 | 240 | 0.8299 | nan | 0.9036 | 0.3480 | 0.0 | 0.8996 | 0.5783 | 0.8668 | 0.4158 | 0.9013 |
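Note that the reported Mean Iou is the unweighted mean over all three classes, including the "unlabeled" class whose IoU is always 0.0; this pulls the summary figure well below the safe/unsafe IoUs. The final row can be checked directly (the tiny discrepancy vs. the reported 0.4158 is rounding):

```python
def mean_iou(per_class_ious):
    """Unweighted mean of per-class IoU values, as reported in the table."""
    return sum(per_class_ious) / len(per_class_ious)

# Final epoch: Iou Safe = 0.3480, Iou Unlabeled = 0.0, Iou Unsafe = 0.8996
final = mean_iou([0.3480, 0.0, 0.8996])  # ~0.4159, reported as 0.4158
```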
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3