Pamzyy committed
Commit a69908f · verified · 1 Parent(s): 75c3251

Model save

Files changed (1): README.md (+1, -13)
README.md CHANGED
@@ -15,8 +15,6 @@ should probably proofread and complete it, then remove this comment. -->
  # sinhala_gpt2
 
  This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.6139
 
  ## Model description
 
@@ -44,20 +42,10 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_steps: 500
- - num_epochs: 60
+ - num_epochs: 5
 
  ### Training results
 
- | Training Loss | Epoch   | Step | Validation Loss |
- |:-------------:|:-------:|:----:|:---------------:|
- | 3.3922        | 7.3529  | 500  | 0.9294          |
- | 0.862         | 14.7059 | 1000 | 0.7480          |
- | 0.7413        | 22.0588 | 1500 | 0.6813          |
- | 0.6795        | 29.4118 | 2000 | 0.6468          |
- | 0.642         | 36.7647 | 2500 | 0.6274          |
- | 0.6187        | 44.1176 | 3000 | 0.6192          |
- | 0.607         | 51.4706 | 3500 | 0.6149          |
- | 0.602         | 58.8235 | 4000 | 0.6139          |
 
 
  ### Framework versions
 
15
  # sinhala_gpt2
16
 
17
  This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
 
 
18
 
19
  ## Model description
20
 
 
42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
43
  - lr_scheduler_type: cosine
44
  - lr_scheduler_warmup_steps: 500
45
+ - num_epochs: 5
46
 
47
  ### Training results
48
 
 
 
 
 
 
 
 
 
 
 
49
 
50
 
51
  ### Framework versions
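For reference, the hyperparameters listed in the card map roughly onto a `transformers.TrainingArguments` configuration like the sketch below. This is an illustrative assumption rather than the author's actual training setup: `output_dir` is a placeholder, and any hyperparameters not shown in this diff (learning rate, batch size, etc.) are left at library defaults.

```python
# Illustrative sketch only: maps the hyperparameters listed in the card onto
# a Hugging Face TrainingArguments object. Not the author's script;
# output_dir is a placeholder and unlisted values use library defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sinhala_gpt2",   # placeholder path, not confirmed by the card
    num_train_epochs=5,          # num_epochs: 5 (changed from 60 in this commit)
    warmup_steps=500,            # lr_scheduler_warmup_steps: 500
    lr_scheduler_type="cosine",  # lr_scheduler_type: cosine
    adam_beta1=0.9,              # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,          # and epsilon=1e-08
)
```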