hebashakeel committed
Commit 86a552e · verified · 1 Parent(s): 6d69c03

End of training

README.md CHANGED
@@ -18,20 +18,20 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.1107
- - Accuracy: 0.624
- - Auc: 0.861
- - Precision Class 0: 0.455
- - Precision Class 1: 0.75
- - Precision Class 2: 0.44
- - Precision Class 3: 0.725
- - Precision Class 4: 0.675
- - Precision Class 5: 0.5
- - Recall Class 0: 0.526
- - Recall Class 1: 0.522
- - Recall Class 2: 0.407
- - Recall Class 3: 0.787
- - Recall Class 4: 0.812
+ - Loss: 1.0333
+ - Accuracy: 0.648
+ - Auc: 0.878
+ - Precision Class 0: 0.409
+ - Precision Class 1: 0.769
+ - Precision Class 2: 0.382
+ - Precision Class 3: 0.729
+ - Precision Class 4: 0.833
+ - Precision Class 5: 0.478
+ - Recall Class 0: 0.474
+ - Recall Class 1: 0.87
+ - Recall Class 2: 0.481
+ - Recall Class 3: 0.745
+ - Recall Class 4: 0.781
  - Recall Class 5: 0.333
 
  ## Model description
@@ -51,9 +51,9 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 0.0001
- - train_batch_size: 8
- - eval_batch_size: 8
+ - learning_rate: 0.001
+ - train_batch_size: 16
+ - eval_batch_size: 16
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
@@ -63,16 +63,16 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc | Precision Class 0 | Precision Class 1 | Precision Class 2 | Precision Class 3 | Precision Class 4 | Precision Class 5 | Recall Class 0 | Recall Class 1 | Recall Class 2 | Recall Class 3 | Recall Class 4 | Recall Class 5 |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|
- | 1.6524 | 1.0 | 124 | 1.5268 | 0.434 | 0.755 | 0.406 | 0.0 | 0.0 | 0.697 | 0.384 | 0.0 | 0.52 | 0.0 | 0.0 | 0.548 | 0.836 | 0.0 |
- | 1.4758 | 2.0 | 248 | 1.4056 | 0.472 | 0.79 | 0.382 | 1.0 | 0.0 | 0.646 | 0.442 | 0.222 | 0.52 | 0.05 | 0.0 | 0.738 | 0.791 | 0.056 |
- | 1.3752 | 3.0 | 372 | 1.3204 | 0.533 | 0.818 | 0.448 | 1.0 | 0.556 | 0.744 | 0.466 | 0.333 | 0.52 | 0.25 | 0.455 | 0.69 | 0.821 | 0.028 |
- | 1.2936 | 4.0 | 496 | 1.2519 | 0.552 | 0.837 | 0.444 | 1.0 | 0.467 | 0.597 | 0.556 | 0.25 | 0.48 | 0.25 | 0.318 | 0.881 | 0.821 | 0.028 |
- | 1.2306 | 5.0 | 620 | 1.2009 | 0.547 | 0.848 | 0.464 | 0.75 | 0.429 | 0.773 | 0.505 | 0.286 | 0.52 | 0.3 | 0.136 | 0.81 | 0.836 | 0.111 |
- | 1.1925 | 6.0 | 744 | 1.1624 | 0.59 | 0.858 | 0.444 | 0.833 | 0.462 | 0.81 | 0.593 | 0.424 | 0.48 | 0.25 | 0.273 | 0.81 | 0.806 | 0.389 |
- | 1.1481 | 7.0 | 868 | 1.1378 | 0.58 | 0.862 | 0.462 | 0.857 | 0.438 | 0.791 | 0.579 | 0.36 | 0.48 | 0.3 | 0.318 | 0.81 | 0.821 | 0.25 |
- | 1.1254 | 8.0 | 992 | 1.1256 | 0.585 | 0.865 | 0.48 | 0.875 | 0.381 | 0.795 | 0.582 | 0.391 | 0.48 | 0.35 | 0.364 | 0.833 | 0.791 | 0.25 |
- | 1.102 | 9.0 | 1116 | 1.1147 | 0.585 | 0.867 | 0.48 | 0.875 | 0.381 | 0.795 | 0.582 | 0.391 | 0.48 | 0.35 | 0.364 | 0.833 | 0.791 | 0.25 |
- | 1.1002 | 10.0 | 1240 | 1.1122 | 0.594 | 0.868 | 0.48 | 0.846 | 0.368 | 0.745 | 0.612 | 0.391 | 0.48 | 0.55 | 0.318 | 0.833 | 0.776 | 0.25 |
+ | 1.5028 | 1.0 | 62 | 1.1703 | 0.528 | 0.852 | 0.5 | 0.889 | 0.0 | 0.769 | 0.595 | 0.282 | 0.28 | 0.4 | 0.0 | 0.714 | 0.701 | 0.556 |
+ | 1.1661 | 2.0 | 124 | 1.0814 | 0.575 | 0.868 | 0.6 | 0.515 | 0.375 | 0.935 | 0.712 | 0.333 | 0.36 | 0.85 | 0.136 | 0.69 | 0.627 | 0.611 |
+ | 1.0576 | 3.0 | 186 | 1.0438 | 0.585 | 0.876 | 0.394 | 0.737 | 0.308 | 0.755 | 0.719 | 0.467 | 0.52 | 0.7 | 0.545 | 0.881 | 0.612 | 0.194 |
+ | 0.9603 | 4.0 | 248 | 1.0368 | 0.637 | 0.877 | 0.688 | 0.846 | 0.44 | 0.868 | 0.6 | 0.4 | 0.44 | 0.55 | 0.5 | 0.786 | 0.94 | 0.167 |
+ | 0.8873 | 5.0 | 310 | 1.0208 | 0.571 | 0.877 | 0.667 | 0.75 | 0.333 | 0.886 | 0.651 | 0.311 | 0.48 | 0.6 | 0.091 | 0.738 | 0.612 | 0.639 |
+ | 0.866 | 6.0 | 372 | 0.9809 | 0.604 | 0.877 | 0.484 | 0.684 | 0.312 | 0.892 | 0.671 | 0.259 | 0.6 | 0.65 | 0.227 | 0.786 | 0.821 | 0.194 |
+ | 0.8203 | 7.0 | 434 | 0.9894 | 0.637 | 0.882 | 0.519 | 0.75 | 0.4 | 0.8 | 0.696 | 0.4 | 0.56 | 0.6 | 0.455 | 0.857 | 0.821 | 0.222 |
+ | 0.8024 | 8.0 | 496 | 0.9797 | 0.632 | 0.882 | 0.484 | 0.682 | 0.45 | 0.889 | 0.693 | 0.393 | 0.6 | 0.75 | 0.409 | 0.762 | 0.776 | 0.306 |
+ | 0.7558 | 9.0 | 558 | 0.9738 | 0.594 | 0.883 | 0.6 | 0.765 | 0.375 | 0.766 | 0.694 | 0.32 | 0.48 | 0.65 | 0.273 | 0.857 | 0.642 | 0.444 |
+ | 0.7319 | 10.0 | 620 | 0.9632 | 0.632 | 0.884 | 0.519 | 0.722 | 0.36 | 0.8 | 0.708 | 0.44 | 0.56 | 0.65 | 0.409 | 0.857 | 0.761 | 0.306 |
 
 
  ### Framework versions
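The per-class precision and recall figures in the updated card point to a 6-way sequence-classification head. The evaluation code itself is not part of this commit; the snippet below is only a minimal sketch of a `compute_metrics` function that would produce metrics of this shape (accuracy, one-vs-rest AUC, per-class precision/recall), assuming six labels and the usual `transformers.Trainer` `(logits, labels)` eval tuple. All names here are illustrative, not taken from the repository.

```python
import numpy as np
from scipy.special import softmax
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

NUM_LABELS = 6  # assumption: classes 0-5, inferred from the per-class metrics in the card


def compute_metrics(eval_pred):
    """Sketch of a Trainer-compatible metrics function; not taken from this repo."""
    logits, labels = eval_pred
    probs = softmax(logits, axis=-1)      # AUC needs probabilities, not raw logits
    preds = np.argmax(logits, axis=-1)

    metrics = {
        "accuracy": accuracy_score(labels, preds),
        # one-vs-rest multiclass AUC over the full probability matrix
        "auc": roc_auc_score(labels, probs, multi_class="ovr"),
    }
    # per-class precision/recall, matching the "Precision Class k" / "Recall Class k" rows
    precisions = precision_score(labels, preds, average=None,
                                 labels=list(range(NUM_LABELS)), zero_division=0)
    recalls = recall_score(labels, preds, average=None,
                           labels=list(range(NUM_LABELS)), zero_division=0)
    for k in range(NUM_LABELS):
        metrics[f"precision_class_{k}"] = precisions[k]
        metrics[f"recall_class_{k}"] = recalls[k]
    return metrics
```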
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:26236cd7b26b5b113eb2318a067fa18f26ff43c04e22ab1f010d17a604546b01
+ oid sha256:e2496c18e769658eed43a2a49f8a79e547a723305e5722db6ef743267aa1337d
  size 437970952
runs/Feb19_08-08-30_8e59040fe121/events.out.tfevents.1739952512.8e59040fe121.30.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:984435e76e797999ec3eccfbeab6dff46543c5daea073a7e0f700949d7a9e433
+ size 18609
runs/Feb19_08-08-30_8e59040fe121/events.out.tfevents.1739952605.8e59040fe121.30.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b63f09c5959d59f0eace5e03e8c78d3c09ad88de3aed4fbf329cf98d13ae99b0
+ size 1172
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bd9c323451537e45144d6ab61af5919901c311b288a1a83717471ac3798098d2
+ oid sha256:ecd622812a5272206544919535b0084a20180235c837901e492a64deb534c756
  size 5240
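`training_args.bin` is the serialized `TrainingArguments` object that `transformers.Trainer` stores alongside a run; its new hash here reflects the hyperparameter change in the README. The file itself is binary, so the sketch below only shows roughly how the updated hyperparameters might be expressed as `TrainingArguments`; `output_dir` and the epoch count (read off the 10-epoch results table) are assumptions, not values deserialized from the file.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed in the updated README,
# not a deserialization of training_args.bin.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned",  # hypothetical name
    learning_rate=1e-3,                # 0.001 after this commit (was 1e-4)
    per_device_train_batch_size=16,    # was 8
    per_device_eval_batch_size=16,     # was 8
    num_train_epochs=10,               # assumption, inferred from the 10-epoch results table
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```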