Shakhovak committed on
Commit 7049a80 · verified · 1 Parent(s): 4561f5a

End of training
Files changed (3):
  1. README.md +35 -25
  2. adapter_model.bin +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -3,6 +3,8 @@ license: other
 base_model: baffo32/decapoda-research-llama-7B-hf
 tags:
 - generated_from_trainer
+datasets:
+- sem_eval2014_task4
 model-index:
 - name: llama-7b-absa-MT-laptops
   results: []
@@ -13,9 +15,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # llama-7b-absa-MT-laptops
 
-This model is a fine-tuned version of [baffo32/decapoda-research-llama-7B-hf](https://huggingface.co/baffo32/decapoda-research-llama-7B-hf) on an unknown dataset.
+This model is a fine-tuned version of [baffo32/decapoda-research-llama-7B-hf](https://huggingface.co/baffo32/decapoda-research-llama-7B-hf) on the sem_eval2014_task4 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0026
+- Loss: 0.0007
 
 ## Model description
 
@@ -43,35 +45,43 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 2
-- training_steps: 900
+- training_steps: 1200
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.0877        | 0.13  | 40   | 0.0277          |
-| 0.0225        | 0.25  | 80   | 0.0218          |
-| 0.0179        | 0.38  | 120  | 0.0170          |
-| 0.0165        | 0.51  | 160  | 0.0140          |
-| 0.0126        | 0.63  | 200  | 0.0121          |
-| 0.0295        | 0.76  | 240  | 0.0132          |
-| 0.0122        | 0.89  | 280  | 0.0107          |
-| 0.0096        | 1.01  | 320  | 0.0094          |
-| 0.0063        | 1.14  | 360  | 0.0087          |
-| 0.0055        | 1.26  | 400  | 0.0081          |
-| 0.0051        | 1.39  | 440  | 0.0073          |
-| 0.0045        | 1.52  | 480  | 0.0071          |
-| 0.0035        | 1.64  | 520  | 0.0060          |
-| 0.0034        | 1.77  | 560  | 0.0055          |
-| 0.0041        | 1.9   | 600  | 0.0041          |
-| 0.0028        | 2.02  | 640  | 0.0038          |
-| 0.0015        | 2.15  | 680  | 0.0033          |
-| 0.0014        | 2.28  | 720  | 0.0037          |
-| 0.0008        | 2.4   | 760  | 0.0038          |
-| 0.0014        | 2.53  | 800  | 0.0031          |
-| 0.0005        | 2.66  | 840  | 0.0027          |
-| 0.0008        | 2.78  | 880  | 0.0026          |
+| 0.0877        | 0.13  | 40   | 0.0245          |
+| 0.0223        | 0.25  | 80   | 0.0205          |
+| 0.0202        | 0.38  | 120  | 0.0159          |
+| 0.0585        | 0.51  | 160  | 0.0139          |
+| 0.014         | 0.63  | 200  | 0.0116          |
+| 0.0112        | 0.76  | 240  | 0.0106          |
+| 0.0113        | 0.89  | 280  | 0.0086          |
+| 0.0094        | 1.01  | 320  | 0.0086          |
+| 0.0065        | 1.14  | 360  | 0.0088          |
+| 0.0057        | 1.26  | 400  | 0.0061          |
+| 0.005         | 1.39  | 440  | 0.0060          |
+| 0.0059        | 1.52  | 480  | 0.0051          |
+| 0.0047        | 1.64  | 520  | 0.0065          |
+| 0.0046        | 1.77  | 560  | 0.0041          |
+| 0.0035        | 1.9   | 600  | 0.0039          |
+| 0.0032        | 2.02  | 640  | 0.0033          |
+| 0.0015        | 2.15  | 680  | 0.0038          |
+| 0.002         | 2.28  | 720  | 0.0027          |
+| 0.0016        | 2.4   | 760  | 0.0023          |
+| 0.0014        | 2.53  | 800  | 0.0020          |
+| 0.0011        | 2.66  | 840  | 0.0019          |
+| 0.001         | 2.78  | 880  | 0.0018          |
+| 0.001         | 2.91  | 920  | 0.0015          |
+| 0.0007        | 3.04  | 960  | 0.0012          |
+| 0.0005        | 3.16  | 1000 | 0.0010          |
+| 0.0003        | 3.29  | 1040 | 0.0009          |
+| 0.0003        | 3.42  | 1080 | 0.0007          |
+| 0.0003        | 3.54  | 1120 | 0.0007          |
+| 0.0002        | 3.67  | 1160 | 0.0007          |
+| 0.0002        | 3.79  | 1200 | 0.0007          |
 
 
 ### Framework versions
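For readers reconstructing this run: a minimal sketch of how the hyperparameters in the README hunk above could be expressed with transformers' `TrainingArguments`. Only the values visible in the diff (optimizer betas/epsilon, linear scheduler, 2 warmup steps, 1200 training steps, native AMP, and the 40-step eval cadence implied by the results table) come from the source; `output_dir` and anything else is a hypothetical placeholder.

```python
from transformers import TrainingArguments

# Sketch of the configuration visible in the diff above; values not shown
# there (output_dir, batch sizes, learning rate) are placeholders.
training_args = TrainingArguments(
    output_dir="llama-7b-absa-MT-laptops",  # placeholder
    adam_beta1=0.9,               # "Adam with betas=(0.9,0.999)"
    adam_beta2=0.999,
    adam_epsilon=1e-08,           # "epsilon=1e-08"
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=1200,               # training_steps: 900 -> 1200 in this commit
    fp16=True,                    # mixed_precision_training: Native AMP
    evaluation_strategy="steps",
    eval_steps=40,                # matches the 40-step cadence in the table
    logging_steps=40,
)
```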
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cfd04b63a96651a736d4605449b427cf49c4a675b5b8de2a6d2af411bd08bb3c
+oid sha256:b52a659dd565a166ac60c88a7ecd3887dac2b0c6c43756016d36b687f3677450
 size 268528394
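The ~268 MB `adapter_model.bin` (same size, new oid) is presumably a PEFT adapter rather than full LLaMA-7B weights. A minimal loading sketch, assuming the repo id `Shakhovak/llama-7b-absa-MT-laptops` and that a peft-compatible `adapter_config.json` sits alongside the `.bin`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "baffo32/decapoda-research-llama-7B-hf"
adapter_id = "Shakhovak/llama-7b-absa-MT-laptops"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Attach the fine-tuned adapter weights (adapter_model.bin) on top of the base.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```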
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e6109ca14d2480797b4e04b72c744c00dcf4636e01e4a1a8d2be6e4ce6a1e80f
+oid sha256:c53e9219f7e885b728dfc74d40a7b7bf464d27519dc63db1b220c1b0ad29ebf0
 size 4984
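Both binaries are stored as Git LFS pointers, so each `oid` above is the SHA-256 of the real blob and can be checked after download. `training_args.bin` itself is the `TrainingArguments` object that `Trainer` serializes with `torch.save`; a sketch of both checks (`weights_only=False` because the file is a pickle, and unpickling assumes a compatible transformers install):

```python
import hashlib
import torch

# Verify the downloaded blob against the LFS pointer's oid.
expected = "c53e9219f7e885b728dfc74d40a7b7bf464d27519dc63db1b220c1b0ad29ebf0"
with open("training_args.bin", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
assert digest == expected, "blob does not match the LFS pointer oid"

# Trainer writes its TrainingArguments with torch.save; load to inspect them.
args = torch.load("training_args.bin", weights_only=False)
print(args.max_steps, args.lr_scheduler_type)
```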