jefson08 committed (verified)
Commit 08f3cef · 1 Parent(s): 5882c71

End of training

Files changed (2):
  1. README.md +9 -24
  2. generation_config.json +2 -1
README.md CHANGED
@@ -1,4 +1,5 @@
  ---
+ library_name: transformers
  tags:
  - generated_from_trainer
  datasets:
@@ -14,8 +15,6 @@ should probably proofread and complete it, then remove this comment. -->
  # speecht5_finetuned_kha

  This model was trained from scratch on the audiofolder dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.4932

  ## Model description

@@ -34,39 +33,25 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 64
+ - learning_rate: 2e-05
+ - train_batch_size: 32
  - eval_batch_size: 2
  - seed: 42
- - gradient_accumulation_steps: 16
- - total_train_batch_size: 1024
+ - gradient_accumulation_steps: 64
+ - total_train_batch_size: 2048
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
- - num_epochs: 1000
+ - num_epochs: 100
  - mixed_precision_training: Native AMP

  ### Training results

- | Training Loss | Epoch    | Step  | Validation Loss |
- |:-------------:|:--------:|:-----:|:---------------:|
- | 0.4596        | 78.4314  | 1000  | 0.4390          |
- | 0.4352        | 156.8627 | 2000  | 0.4409          |
- | 0.4182        | 235.2941 | 3000  | 0.4419          |
- | 0.4086        | 313.7255 | 4000  | 0.4521          |
- | 0.4013        | 392.1569 | 5000  | 0.4637          |
- | 0.3963        | 470.5882 | 6000  | 0.4660          |
- | 0.3896        | 549.0196 | 7000  | 0.4761          |
- | 0.3866        | 627.4510 | 8000  | 0.4869          |
- | 0.385         | 705.8824 | 9000  | 0.4891          |
- | 0.3806        | 784.3137 | 10000 | 0.4845          |
- | 0.3827        | 862.7451 | 11000 | 0.4887          |
- | 0.3788        | 941.1765 | 12000 | 0.4932          |


  ### Framework versions

- - Transformers 4.43.3
- - Pytorch 2.5.0.dev20240819+cu118
- - Datasets 2.21.0
+ - Transformers 4.44.2
+ - Pytorch 2.4.1+cu121
+ - Datasets 3.0.0
  - Tokenizers 0.19.1
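
For orientation, a minimal sketch of how the updated hyperparameters in the diff above would map onto `transformers` `Seq2SeqTrainingArguments`. Only the values listed in the card come from the commit; `output_dir`, the use of `fp16` for "Native AMP", and the single-device assumption behind the 32 × 64 = 2048 total batch size are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: reproduces the hyperparameters reported in the updated card.
# output_dir and fp16 (standing in for "Native AMP") are assumptions, not
# taken from the commit; Adam betas/epsilon match the library defaults
# (0.9, 0.999, 1e-08), as reported in the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_kha",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=64,       # 32 * 64 = 2048 total batch size on one device
    num_train_epochs=100,
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    fp16=True,                            # "Native AMP" mixed precision
)
```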
generation_config.json CHANGED
@@ -5,5 +5,6 @@
  "eos_token_id": 2,
  "max_length": 1876,
  "pad_token_id": 1,
- "transformers_version": "4.43.3"
+ "transformers_version": "4.44.2",
+ "use_cache": false
  }
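
The new `use_cache` field is visible once the generation config is loaded with `transformers`; a minimal sketch, assuming the checkpoint is published at `jefson08/speecht5_finetuned_kha` (the repo id is inferred from the commit author and model name, not stated in the commit):

```python
from transformers import GenerationConfig

# Sketch only: the repo id below is an assumption inferred from the commit
# author and model name; point it at the actual checkpoint location.
gen_config = GenerationConfig.from_pretrained("jefson08/speecht5_finetuned_kha")

print(gen_config.max_length)  # 1876, as set in generation_config.json
print(gen_config.use_cache)   # False after this commit
```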