BTX24 committed
Commit 42b120e · verified · 1 parent: 68b8061

Model save

Files changed (2):
  1. README.md +83 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,83 @@
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/hiera-base-224-in1k-hf
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: hiera-finetuned-stroke-binary-ultrasound
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hiera-finetuned-stroke-binary-ultrasound

This model is a fine-tuned version of [facebook/hiera-base-224-in1k-hf](https://huggingface.co/facebook/hiera-base-224-in1k-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0104
- Accuracy: 0.9951
- F1: 0.9951
- Precision: 0.9951
- Recall: 0.9951
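
As a quick sanity check, the checkpoint can be loaded through the standard `transformers` image-classification API. The sketch below is untested; the repo id `BTX24/hiera-finetuned-stroke-binary-ultrasound` is assumed from the author and model name on this page, and the input path is a placeholder.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "BTX24/hiera-finetuned-stroke-binary-ultrasound"  # assumed repo id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

# Placeholder input: any RGB ultrasound frame resized by the processor to 224x224
image = Image.open("ultrasound_scan.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```
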
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
- mixed_precision_training: Native AMP
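
A minimal, untested sketch of how the listed values map onto `transformers.TrainingArguments`; `output_dir` is a placeholder (not from the card), and `fp16=True` is my reading of the "Native AMP" line:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hiera-finetuned-stroke-binary-ultrasound",  # placeholder, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 16 * 4 = total train batch size of 64 (single device assumed)
    optim="adamw_torch",            # AdamW with default betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    num_train_epochs=12,
    fp16=True,                      # "Native AMP" mixed precision
)
```
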
### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.038         | 0.8753  | 100  | 0.0184          | 0.9938   | 0.9938 | 0.9939    | 0.9938 |
| 0.0468        | 1.7440  | 200  | 0.0206          | 0.9926   | 0.9926 | 0.9927    | 0.9926 |
| 0.0445        | 2.6127  | 300  | 0.0225          | 0.9901   | 0.9901 | 0.9902    | 0.9901 |
| 0.0415        | 3.4814  | 400  | 0.0187          | 0.9889   | 0.9889 | 0.9889    | 0.9889 |
| 0.0465        | 4.3501  | 500  | 0.0098          | 0.9951   | 0.9951 | 0.9951    | 0.9951 |
| 0.0397        | 5.2188  | 600  | 0.0286          | 0.9901   | 0.9901 | 0.9903    | 0.9901 |
| 0.0257        | 6.0875  | 700  | 0.0188          | 0.9926   | 0.9926 | 0.9927    | 0.9926 |
| 0.0434        | 6.9628  | 800  | 0.0209          | 0.9938   | 0.9938 | 0.9939    | 0.9938 |
| 0.0261        | 7.8315  | 900  | 0.0154          | 0.9926   | 0.9926 | 0.9926    | 0.9926 |
| 0.0198        | 8.7002  | 1000 | 0.0094          | 0.9951   | 0.9951 | 0.9951    | 0.9951 |
| 0.0207        | 9.5689  | 1100 | 0.0122          | 0.9938   | 0.9938 | 0.9939    | 0.9938 |
| 0.0157        | 10.4376 | 1200 | 0.0101          | 0.9951   | 0.9951 | 0.9951    | 0.9951 |
| 0.0188        | 11.3063 | 1300 | 0.0104          | 0.9951   | 0.9951 | 0.9951    | 0.9951 |
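
The Recall column equals the Accuracy column at every step, which is what weighted averaging produces on this kind of table (weighted recall reduces to accuracy). A hedged, untested sketch of a `compute_metrics` function that could yield these columns; the card does not state the averaging mode, so `"weighted"` is an assumption:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is a (predictions, labels) pair as passed by transformers.Trainer
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0  # averaging mode assumed
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```
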
### Framework versions

- Transformers 4.53.1
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.2
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b352a9538fa0d255774682c2a5e603d068b3c1ba3fda58a4c709b3ed9a65f54f
+oid sha256:996510a5e2597a45d9c2533162dd01efa391e0fcd0df3bc9430ac323caf7e874
 size 203065440