fedeortegariba committed on
Commit
ad308b9
·
verified ·
1 Parent(s): e4fb917

End of training

Files changed (2)
  1. README.md +96 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,96 @@
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
model-index:
- name: distilbert-base-uncased-finetuned-text_cl
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-text_cl

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2450
- Accuracy: 0.8641
- Recall: 0.9482
- Precision: 0.8402
- F1: 0.8910

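As a quick sanity check, the reported F1 is consistent with the reported precision and recall, since F1 is their harmonic mean. A minimal sketch (the inputs are the rounded metrics above, so the result differs from the reported 0.8910 only in the last digit):

```python
# F1 is the harmonic mean of precision and recall.
# Values are the rounded evaluation metrics from this model card.
precision = 0.8402
recall = 0.9482

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # ~0.8909, vs. the reported 0.8910 (inputs are rounded)
```

The small gap comes from the card rounding precision and recall to four digits before this recomputation; the Trainer computes F1 from the raw counts.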
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30

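With 231 optimizer steps per epoch (see the results table below) and 30 epochs, the linear scheduler decays the learning rate from 2e-05 to 0 over 6,930 steps. A minimal sketch of that schedule, assuming the Trainer's default of zero warmup steps (no warmup is listed above):

```python
# Linear learning-rate decay from the peak rate to 0, as produced by the
# `linear` scheduler with no warmup steps (assumed default).
PEAK_LR = 2e-05
TOTAL_STEPS = 231 * 30  # 231 steps per epoch x 30 epochs = 6930

def lr_at(step: int) -> float:
    """Learning rate after `step` optimizer updates."""
    return PEAK_LR * max(0.0, 1.0 - step / TOTAL_STEPS)

print(lr_at(0))     # peak rate (2e-05) at the start of training
print(lr_at(3465))  # half the peak rate at the midpoint (end of epoch 15)
print(lr_at(6930))  # 0.0 at the final step
```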
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log        | 1.0   | 231  | 0.4105          | 0.7765   | 0.9343 | 0.7474    | 0.8305 |
| No log        | 2.0   | 462  | 0.5147          | 0.7946   | 0.9741 | 0.75      | 0.8475 |
| 0.2425        | 3.0   | 693  | 0.3620          | 0.8407   | 0.9373 | 0.8175    | 0.8733 |
| 0.2425        | 4.0   | 924  | 0.5248          | 0.8646   | 0.9402 | 0.8459    | 0.8906 |
| 0.0988        | 5.0   | 1155 | 0.7085          | 0.8553   | 0.9492 | 0.8287    | 0.8849 |
| 0.0988        | 6.0   | 1386 | 0.7420          | 0.8652   | 0.9084 | 0.8677    | 0.8876 |
| 0.0284        | 7.0   | 1617 | 0.7172          | 0.8705   | 0.9442 | 0.8510    | 0.8952 |
| 0.0284        | 8.0   | 1848 | 0.8150          | 0.8681   | 0.9442 | 0.8479    | 0.8935 |
| 0.0084        | 9.0   | 2079 | 0.9139          | 0.8629   | 0.9373 | 0.8455    | 0.8890 |
| 0.0084        | 10.0  | 2310 | 0.9463          | 0.8571   | 0.9392 | 0.8367    | 0.8850 |
| 0.009         | 11.0  | 2541 | 0.9524          | 0.8658   | 0.9124 | 0.8658    | 0.8885 |
| 0.009         | 12.0  | 2772 | 1.1889          | 0.8483   | 0.9512 | 0.8190    | 0.8802 |
| 0.0036        | 13.0  | 3003 | 1.1781          | 0.8524   | 0.9592 | 0.8196    | 0.8839 |
| 0.0036        | 14.0  | 3234 | 1.1979          | 0.8489   | 0.9572 | 0.8165    | 0.8812 |
| 0.0036        | 15.0  | 3465 | 1.0633          | 0.8635   | 0.9373 | 0.8462    | 0.8894 |
| 0.0064        | 16.0  | 3696 | 1.0781          | 0.8629   | 0.9333 | 0.8480    | 0.8886 |
| 0.0064        | 17.0  | 3927 | 1.1728          | 0.8477   | 0.9452 | 0.8216    | 0.8791 |
| 0.0033        | 18.0  | 4158 | 1.1487          | 0.8536   | 0.9432 | 0.8300    | 0.8830 |
| 0.0033        | 19.0  | 4389 | 1.1052          | 0.8600   | 0.9482 | 0.8351    | 0.8881 |
| 0.0031        | 20.0  | 4620 | 1.1933          | 0.8629   | 0.9612 | 0.8312    | 0.8915 |
| 0.0031        | 21.0  | 4851 | 1.3387          | 0.8454   | 0.9671 | 0.8071    | 0.8799 |
| 0.0029        | 22.0  | 5082 | 1.1393          | 0.8635   | 0.9482 | 0.8395    | 0.8906 |
| 0.0029        | 23.0  | 5313 | 1.2048          | 0.8617   | 0.9522 | 0.8349    | 0.8897 |
| 0.0003        | 24.0  | 5544 | 1.1798          | 0.8652   | 0.9522 | 0.8393    | 0.8922 |
| 0.0003        | 25.0  | 5775 | 1.1385          | 0.8705   | 0.9412 | 0.8529    | 0.8949 |
| 0.0014        | 26.0  | 6006 | 1.2903          | 0.8565   | 0.9532 | 0.8279    | 0.8861 |
| 0.0014        | 27.0  | 6237 | 1.1961          | 0.8635   | 0.9412 | 0.8438    | 0.8898 |
| 0.0014        | 28.0  | 6468 | 1.2178          | 0.8635   | 0.9442 | 0.8419    | 0.8901 |
| 0.0013        | 29.0  | 6699 | 1.2409          | 0.8635   | 0.9472 | 0.8401    | 0.8904 |
| 0.0013        | 30.0  | 6930 | 1.2450          | 0.8641   | 0.9482 | 0.8402    | 0.8910 |

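Note that the final checkpoint (epoch 30) is not the strongest row in the table: validation loss bottoms out at epoch 3 and F1 peaks at epoch 7, after which validation loss climbs steadily while training loss approaches zero, which suggests the later epochs mostly overfit. A small sketch that picks those rows out (only a few representative rows are copied here):

```python
# (epoch, validation_loss, f1) triples copied from the results table above
# (a representative subset, including the extreme rows).
results = [
    (1, 0.4105, 0.8305), (3, 0.3620, 0.8733), (7, 0.7172, 0.8952),
    (15, 1.0633, 0.8894), (25, 1.1385, 0.8949), (30, 1.2450, 0.8910),
]

best_by_loss = min(results, key=lambda r: r[1])
best_by_f1 = max(results, key=lambda r: r[2])

print(best_by_loss[0])  # epoch 3 has the lowest validation loss (0.3620)
print(best_by_f1[0])    # epoch 7 has the highest F1 (0.8952)
```

Selecting by a validation metric (e.g. `load_best_model_at_end` with `metric_for_best_model="f1"` in the Trainer) would have kept the epoch-7 checkpoint instead of the final one.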
### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bf6925cbc8f276e4e9a84947b676e4c06cff7458167cb6a9a156dedc7d365c0f
+oid sha256:6b472b329f0e3217739623386f4fcd1a89df13d1e035b9129c61adffb6ddb612
 size 267832560