arubenruben committed
Commit d400682
1 Parent(s): 087add8

End of training

Files changed (1)
1. README.md +13 -19
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: mit
-base_model: PORTULAN/albertina-100m-portuguese-ptpt-encoder
+base_model: neuralmind/bert-large-portuguese-cased
 tags:
 - generated_from_trainer
 metrics:
@@ -9,22 +9,22 @@ metrics:
 - precision
 - recall
 model-index:
-- name: LVI_albertina-100m-portuguese-ptpt-encoder
+- name: LVI_bert-large-portuguese-cased
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# LVI_albertina-100m-portuguese-ptpt-encoder
+# LVI_bert-large-portuguese-cased
 
-This model is a fine-tuned version of [PORTULAN/albertina-100m-portuguese-ptpt-encoder](https://huggingface.co/PORTULAN/albertina-100m-portuguese-ptpt-encoder) on the None dataset.
+This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6932
-- Accuracy: 0.5
-- F1: 0.0
-- Precision: 0.0
-- Recall: 0.0
+- Loss: 0.0755
+- Accuracy: 0.9775
+- F1: 0.9775
+- Precision: 0.9758
+- Recall: 0.9793
 
 ## Model description
 
@@ -43,7 +43,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
+- learning_rate: 5e-06
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
@@ -53,15 +53,9 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Precision | Recall |
-|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
-| 0.5678        | 1.0   | 3217  | 0.6316          | 0.6653   | 0.5619 | 0.8128    | 0.4294 |
-| 0.6042        | 2.0   | 6434  | 0.6911          | 0.5      | 0.0    | 0.0       | 0.0    |
-| 0.6946        | 3.0   | 9651  | 0.6932          | 0.5      | 0.0    | 0.0       | 0.0    |
-| 0.694         | 4.0   | 12868 | 0.6932          | 0.5      | 0.6667 | 0.5       | 1.0    |
-| 0.6942        | 5.0   | 16085 | 0.6933          | 0.5      | 0.6667 | 0.5       | 1.0    |
-| 0.6936        | 6.0   | 19302 | 0.6937          | 0.5      | 0.6667 | 0.5       | 1.0    |
-| 0.6937        | 7.0   | 22519 | 0.6932          | 0.5      | 0.0    | 0.0       | 0.0    |
+| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
+| 0.1071        | 1.0   | 3217 | 0.0755          | 0.9775   | 0.9775 | 0.9758    | 0.9793 |
 
 
 ### Framework versions
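As a sanity check on the metric values in this diff, note that F1 is the harmonic mean of precision and recall, so the reported numbers can be verified independently. The sketch below (not part of the commit; the `f1` helper is defined here for illustration) confirms the new run's metrics are self-consistent, and shows why the old run's repeated `0.5 / 0.6667` rows are the signature of a collapsed binary classifier that predicts a single class on a balanced evaluation set (its loss of 0.6932 ≈ ln 2 points the same way).

```python
# F1 as the harmonic mean of precision and recall (binary case).
def f1(precision: float, recall: float) -> float:
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# New run (bert-large-portuguese-cased fine-tune), values from the table:
print(round(f1(0.9758, 0.9793), 4))  # 0.9775 — matches the reported F1

# Old run's degenerate epochs: always predicting the positive class on a
# balanced set yields precision 0.5 and recall 1.0:
print(round(f1(0.5, 1.0), 4))  # 0.6667 — matches the repeated rows
```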