priyanshu594 committed
Commit fb575e4 · verified · 1 Parent(s): 016ed3d

End of training

README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3950
+- Loss: 0.4531
 
 ## Model description
 
@@ -35,24 +35,24 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
-- train_batch_size: 4
+- learning_rate: 0.0001
+- train_batch_size: 2
 - eval_batch_size: 2
 - seed: 42
-- gradient_accumulation_steps: 8
-- total_train_batch_size: 32
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 8
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 100
-- training_steps: 500
+- lr_scheduler_warmup_steps: 20
+- training_steps: 20
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.2491        | 125.0 | 250  | 0.3956          |
-| 0.2313        | 250.0 | 500  | 0.3950          |
+| 0.5249        | 2.0   | 10   | 0.4686          |
+| 0.5019        | 4.0   | 20   | 0.4531          |
 
 
 ### Framework versions
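
For readers who want to reproduce the updated configuration, the hyperparameters above map onto `transformers` `Seq2SeqTrainingArguments` roughly as follows. This is a minimal sketch rather than the author's training script: the `output_dir` name and the CUDA check are assumptions, and the data pipeline (dataset, `SpeechT5Processor`, speaker embeddings, data collator) is not part of this commit and is omitted.

```python
# Minimal sketch of the updated hyperparameters as Seq2SeqTrainingArguments.
# Not the author's script; output_dir is a placeholder and the data pipeline is omitted.
import torch
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts-finetuned",   # placeholder name (assumption)
    learning_rate=1e-4,                    # was 1e-05 before this commit
    per_device_train_batch_size=2,         # was 4
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,         # was 8; effective batch size 2 * 4 = 8 on one device
    warmup_steps=20,                       # was 100
    max_steps=20,                          # was 500
    lr_scheduler_type="linear",
    optim="adamw_torch",                   # AdamW with betas=(0.9, 0.999), eps=1e-08
    seed=42,
    fp16=torch.cuda.is_available(),        # "Native AMP" in the card assumes a CUDA device
)
```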
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8f4076c2f7d81833d4983f0e0622793bbfc2dadfb9e61cee843f84f083935d0a
+oid sha256:fa289bb2687cd1d2258080bb719c41f29f2fdd42b7bf17f5264d8819aae09707
 size 577789320
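
The `model.safetensors` entry above is a Git LFS pointer, not the weights themselves; the `oid sha256:` line is the checksum of the real file. A small sketch of checking a locally materialized copy against the new oid, assuming the repository has been cloned and `git lfs pull` has already fetched the file:

```python
# Sketch: verify a pulled LFS file against the sha256 oid in its pointer.
# Assumes `git lfs pull` has materialized model.safetensors in the clone.
import hashlib

EXPECTED = "fa289bb2687cd1d2258080bb719c41f29f2fdd42b7bf17f5264d8819aae09707"  # new oid above

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:            # path inside the cloned repo
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == EXPECTED, "checksum mismatch: file does not match the LFS pointer"
```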
runs/Jul27_17-22-17_9fb63f6eea31/events.out.tfevents.1753636942.9fb63f6eea31.271.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e28b44f95a12f4de9dab0663499fd156f6c3e7c82002eabab367a3bc8d7649e6
+size 11719
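
The added `events.out.tfevents...` file is the TensorBoard log behind the results table. A sketch of reading its scalars with TensorBoard's `EventAccumulator`; the tag names `eval/loss` and `train/loss` are the usual `Trainer` tags, not confirmed by this commit, so list `Tags()` first if they differ:

```python
# Sketch: read logged scalars from the added TensorBoard event file.
# Tag names are assumptions (typical Trainer tags); check acc.Tags()["scalars"].
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Jul27_17-22-17_9fb63f6eea31")  # directory containing the event file
acc.Reload()

print(acc.Tags()["scalars"])            # list the scalar tags actually present
for event in acc.Scalars("eval/loss"):  # assumed tag for the validation loss column
    print(event.step, event.value)
```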
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fbb4089a469aa6c9dbadb50d6bcce0ade3a6822c0d243121be2602710e2081d6
+oid sha256:63cd211c7cbe6285d1083af3f4ecbb11634bd967cb1b3203e92fa365dcfb9ac8
 size 5496
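
`training_args.bin` is the pickled training-arguments object that `Trainer` saves alongside checkpoints, so it can be inspected to confirm the hyperparameters in the card. A sketch, assuming `transformers` is installed (the pickle references its classes) and that you trust the repository, since unpickling executes arbitrary code:

```python
# Sketch: inspect the pickled training arguments saved by Trainer.
# weights_only=False is needed on recent PyTorch because this is a pickled
# Python object, not a tensor file; only unpickle files from repos you trust.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.max_steps, args.warmup_steps)
```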