xjlulu committed
Commit 20057da
1 Parent(s): 2dd56ec

End of training

Files changed (1):
  README.md (+13 -11)
README.md CHANGED
@@ -11,11 +11,12 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# ntu_adl_span_selection_roberta_macbert
+# ntu_adl_span_selection_macbert
 
 This model is a fine-tuned version of [hfl/chinese-macbert-base](https://huggingface.co/hfl/chinese-macbert-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6356
+- Loss: 1.1049
+- Em Accuracy: 0.7846
 
 ## Model description
 
@@ -35,26 +36,27 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 3e-05
-- train_batch_size: 4
+- train_batch_size: 1
 - eval_batch_size: 1
 - seed: 42
-- gradient_accumulation_steps: 32
-- total_train_batch_size: 128
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 4
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 2
+- num_epochs: 3
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| No log        | 1.0   | 169  | 0.6805          |
-| No log        | 1.99  | 338  | 0.6356          |
+| Training Loss | Epoch | Step  | Validation Loss | Em Accuracy |
+|:-------------:|:-----:|:-----:|:---------------:|:-----------:|
+| 0.7063        | 1.0   | 5428  | 0.6971          | 0.7627      |
+| 0.4457        | 2.0   | 10857 | 0.8407          | 0.7840      |
+| 0.2263        | 3.0   | 16284 | 1.1049          | 0.7846      |
 
 
 ### Framework versions
 
 - Transformers 4.34.1
 - Pytorch 2.1.0+cu118
-- Datasets 2.14.5
+- Datasets 2.14.6
 - Tokenizers 0.14.1
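
For context, the updated hyperparameters map onto a `transformers` `TrainingArguments` configuration roughly as follows. This is a minimal sketch assuming the standard `Trainer` setup the card mentions; the `output_dir` value is hypothetical, and the actual training script is not part of this commit:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, TrainingArguments

# Values taken from the updated model card; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="ntu_adl_span_selection_macbert",
    learning_rate=3e-5,
    per_device_train_batch_size=1,  # train_batch_size: 1
    per_device_eval_batch_size=1,   # eval_batch_size: 1
    gradient_accumulation_steps=4,  # total_train_batch_size = 1 * 4 = 4
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)

# Span selection here is extractive QA, so the QA head variant is assumed.
tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-macbert-base")
model = AutoModelForQuestionAnswering.from_pretrained("hfl/chinese-macbert-base")
```

Note that the effective batch size of 4 comes from accumulating gradients over 4 steps of batch size 1, which trades throughput for a much smaller memory footprint than the previous 4 × 32 = 128 configuration.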
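The new `Em Accuracy` column reports exact match: the fraction of evaluation examples whose predicted answer span equals the reference answer exactly. A minimal sketch of that computation, assuming simple whitespace normalization (the card does not specify the post-processing):

```python
def exact_match(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions that match their reference string exactly."""
    pairs = list(zip(predictions, references))
    hits = sum(p.strip() == r.strip() for p, r in pairs)  # normalization is an assumption
    return hits / len(pairs)

# An EM accuracy of 0.7846 means roughly 78.5% of answers matched exactly.
print(exact_match(["台北", "新竹"], ["台北", "台中"]))  # 0.5
```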
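Since the card describes a span-selection (extractive QA) fine-tune, the trained checkpoint can presumably be used through the question-answering pipeline. A sketch, where the `xjlulu/ntu_adl_span_selection_macbert` repo id is inferred from the commit author and the new model name rather than stated in the diff:

```python
from transformers import pipeline

# Repo id inferred from the commit author and model name; adjust if it differs.
qa = pipeline("question-answering", model="xjlulu/ntu_adl_span_selection_macbert")

result = qa(
    question="模型是基於哪個預訓練模型？",  # "Which pretrained model is this based on?"
    context="此模型是 hfl/chinese-macbert-base 在未知資料集上的微調版本。",
)
print(result["answer"], round(result["score"], 3))
```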