---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: windowz_020625
  results: []
---


# windowz_020625

This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9891
- F1: 0.9871
- IoU: 0.9786
- Contour Dice: 0.9923
- Loss: 0.0301

Per-class metrics:

| Class | F1      | IoU     | Accuracy | Contour Dice |
|:-----:|:-------:|:-------:|:--------:|:------------:|
| 0     | 0.99741 | 0.99483 | 0.99611  | 0.99741      |
| 1     | 0.97774 | 0.95645 | 0.98909  | 0.97774      |
| 2     | 0.41176 | 0.25926 | 0.99298  | 0.41176      |
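
For reference, the summary and per-class numbers are consistent with the standard pixelwise definitions of F1 (Dice), IoU, and binary accuracy. The sketch below is a minimal NumPy reimplementation under that assumption; the actual evaluation code is not recorded in this card, and the contour-Dice variant is not reconstructed here (in the reported numbers it coincides with plain F1 for every class).

```python
import numpy as np

def per_class_metrics(preds: np.ndarray, labels: np.ndarray, num_classes: int = 3) -> dict:
    """Pixelwise per-class F1 (Dice), IoU, and binary accuracy.

    `preds` and `labels` are integer class-index masks of identical shape.
    This is an assumed reimplementation, not the original metric code;
    contour Dice is not reproduced here.
    """
    out = {}
    for c in range(num_classes):
        p = preds == c          # predicted mask for class c
        t = labels == c         # reference mask for class c
        tp = np.logical_and(p, t).sum()
        fp = np.logical_and(p, ~t).sum()
        fn = np.logical_and(~p, t).sum()
        denom = 2 * tp + fp + fn
        union = tp + fp + fn
        out[c] = {
            "f1": 2 * tp / denom if denom else 0.0,
            "iou": tp / union if union else 0.0,
            "accuracy": float((p == t).mean()),
        }
    return out
```

The per-class pattern is worth noting: class 2 combines near-perfect accuracy (0.99298) with a low F1 (0.41176), which is the signature of a small minority class, since accuracy is dominated by the many correctly rejected background pixels.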

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
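
Assuming training used the Hugging Face `Trainer` (the card lists Transformers 4.45.0), these settings map onto `TrainingArguments` roughly as sketched below. The output directory is a placeholder and the batch sizes are interpreted as per-device values; neither detail is recorded in the card.

```python
from transformers import TrainingArguments

# Placeholder output_dir; the actual model, dataset, and paths are not
# recorded in this card.
training_args = TrainingArguments(
    output_dir="windowz_020625",
    learning_rate=5e-5,
    per_device_train_batch_size=16,   # assumed per-device
    per_device_eval_batch_size=16,    # assumed per-device
    seed=42,
    adam_beta1=0.9,                   # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=100,
)
```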

### Training results

| Training Loss | Epoch | Step  | IoU    | Contour Dice | Per-Class Metrics | Validation Loss |
|:-------------:|:-----:|:-----:|:------:|:------------:|:-----------------:|:---------------:|
| 0.4629        | 5.0   | 12815 | 0.9623 | 0.9725 | {0: {'f1': 0.99106, 'iou': 0.98228, 'accuracy': 0.98651, 'contour_dice': 0.99106}, 1: {'f1': 0.96086, 'iou': 0.92467, 'accuracy': 0.9813, 'contour_dice': 0.96086}, 2: {'f1': 0.51615, 'iou': 0.34784, 'accuracy': 0.99358, 'contour_dice': 0.51615}}  | 0.0652          |
| 0.4439        | 10.0  | 25630 | 0.9786 | 0.9923 | {0: {'f1': 0.99741, 'iou': 0.99483, 'accuracy': 0.99611, 'contour_dice': 0.99741}, 1: {'f1': 0.97774, 'iou': 0.95645, 'accuracy': 0.98909, 'contour_dice': 0.97774}, 2: {'f1': 0.41176, 'iou': 0.25926, 'accuracy': 0.99298, 'contour_dice': 0.41176}} | 0.0301          |
| 0.4077        | 15.0  | 38445 | 0.2129 | 0.4508 | {0: {'f1': 0.31665, 'iou': 0.18811, 'accuracy': 0.39105, 'contour_dice': 0.31665}, 1: {'f1': 0.43873, 'iou': 0.28101, 'accuracy': 0.38547, 'contour_dice': 0.43873}, 2: {'f1': 0.60057, 'iou': 0.42915, 'accuracy': 0.99441, 'contour_dice': 0.60057}} | 0.8030          |
| 0.4099        | 20.0  | 51260 | 0.9796 | 0.9892 | {0: {'f1': 0.9964, 'iou': 0.99283, 'accuracy': 0.9946, 'contour_dice': 0.9964}, 1: {'f1': 0.97792, 'iou': 0.9568, 'accuracy': 0.98934, 'contour_dice': 0.97792}, 2: {'f1': 0.67584, 'iou': 0.51039, 'accuracy': 0.99419, 'contour_dice': 0.67584}}     | 0.0441          |
| 0.3872        | 25.0  | 64075 | 0.9799 | 0.9882 | {0: {'f1': 0.99605, 'iou': 0.99212, 'accuracy': 0.99407, 'contour_dice': 0.99605}, 1: {'f1': 0.9783, 'iou': 0.95752, 'accuracy': 0.98953, 'contour_dice': 0.9783}, 2: {'f1': 0.73739, 'iou': 0.58402, 'accuracy': 0.99521, 'contour_dice': 0.73739}}   | 0.0314          |


### Framework versions

- Transformers 4.45.0
- PyTorch 2.5.1+cu124
- Datasets 2.21.0
- Tokenizers 0.20.3