mustafoyev202 committed on
Commit a7e7cb3 · verified · 1 Parent(s): a8e4d08

End of training

README.md CHANGED
@@ -2,25 +2,25 @@
 library_name: peft
 language:
 - uz
-license: apache-2.0
-base_model: mustafoyev202/whisper-uz
+license: mit
+base_model: openai/whisper-large-v2
 tags:
 - generated_from_trainer
 datasets:
 - mozilla-foundation/common_voice_17_0
 model-index:
-- name: Whisper Medium Uzbek
+- name: Uzbek STT
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Whisper Medium Uzbek
+# Uzbek STT
 
-This model is a fine-tuned version of [mustafoyev202/whisper-uz](https://huggingface.co/mustafoyev202/whisper-uz) on the Common Voice 17.0 dataset.
+This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 17.0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7319
+- Loss: 1.1178
 
 ## Model description
 
@@ -40,30 +40,29 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 16
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
 - training_steps: 2000
+- mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 0.7933        | 0.8   | 500  | 0.9289          |
-| 0.678         | 1.6   | 1000 | 0.8336          |
-| 0.6008        | 2.4   | 1500 | 0.7544          |
-| 0.6056        | 3.2   | 2000 | 0.7319          |
+| Training Loss | Epoch  | Step | Validation Loss |
+|:-------------:|:------:|:----:|:---------------:|
+| 2.323         | 0.1650 | 500  | 2.0135          |
+| 1.4141        | 0.3300 | 1000 | 1.2143          |
+| 1.2739        | 0.4950 | 1500 | 1.1382          |
+| 1.3098        | 0.6601 | 2000 | 1.1178          |
 
 
 ### Framework versions
 
 - PEFT 0.15.2.dev0
-- Transformers 4.51.0.dev0
+- Transformers 4.52.0.dev0
 - Pytorch 2.6.0+cu124
 - Datasets 3.5.0
 - Tokenizers 0.21.1
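The updated card keeps `library_name: peft` and switches the base model to openai/whisper-large-v2, so `adapter_model.safetensors` is a PEFT adapter rather than full model weights. Below is a minimal loading sketch under those assumptions; the adapter repository ID is a placeholder (this commit does not name it), and the `language`/`task` settings are inferred from the `uz` language tag.

```python
# Minimal sketch, not part of this commit: load the PEFT adapter on top of the
# base model recorded in the updated README (openai/whisper-large-v2).
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

ADAPTER_ID = "mustafoyev202/your-adapter-repo"  # placeholder, substitute this repo's Hub ID
BASE_ID = "openai/whisper-large-v2"             # base_model from the updated model card

processor = WhisperProcessor.from_pretrained(BASE_ID, language="uz", task="transcribe")
base = WhisperForConditionalGeneration.from_pretrained(BASE_ID, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # applies adapter_model.safetensors
model.eval()

# Example transcription (assumes `audio` is a 16 kHz mono waveform array):
# features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
# ids = model.generate(input_features=features.to(model.device, dtype=torch.float16))
# print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```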
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5923a8b4358638c0ad3a8244e88ae76908b743533ebf2c860c26251890f60c45
+oid sha256:1db55124b254e4a06974b1bbf0ebb8977889f81099abcc13895810ad7aa871b6
 size 62969640
runs/Apr15_03-58-21_interns/events.out.tfevents.1744689547.interns CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cd11be6dd3af10f26bf4154a720c9ca2df363e7772e07b97782e8ed2926a0be8
-size 15863
+oid sha256:2be3da875ee836f94c708c06c04196e9e166df5746b4a9678c286271370540d2
+size 25470
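The enlarged tfevents file under `runs/` is the TensorBoard log the Trainer wrote during this run. A small sketch for inspecting it, assuming the `tensorboard` package is installed and that evaluation loss was logged under the usual Trainer tag `eval/loss` (neither is stated by this commit):

```python
# Sketch only: read scalar curves from the updated TensorBoard event log.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Apr15_03-58-21_interns")  # run directory changed in this commit
acc.Reload()  # parse all events.out.tfevents.* files in the directory

print(acc.Tags()["scalars"])            # list the scalar tags actually present
for event in acc.Scalars("eval/loss"):  # assumed tag; e.g. step 2000 -> 1.1178 per the README table
    print(event.step, event.value)
```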