victor233 committed (verified)
Commit 8901e4f · 1 Parent(s): ec1dc60

End of training

README.md ADDED
@@ -0,0 +1,71 @@
---
library_name: transformers
license: apache-2.0
base_model: jonatasgrosman/wav2vec2-large-xlsr-53-russian
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: my_awesome_asr_mind_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# my_awesome_asr_mind_model

This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-russian](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-russian) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.6914
- Wer: 0.9317

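As a quick usage sketch (not part of the auto-generated card), the checkpoint can be loaded through the `transformers` ASR pipeline. The repository id below is an assumption based on the commit author and model name; adjust it to wherever the weights are actually hosted.

```python
# Minimal inference sketch for the fine-tuned checkpoint.
# "victor233/my_awesome_asr_mind_model" is an assumed repository id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="victor233/my_awesome_asr_mind_model",  # assumed repo id
)

# wav2vec2-large-xlsr-53 checkpoints expect 16 kHz mono audio.
print(asr("sample.wav")["text"])
```

Given the evaluation WER of roughly 0.93, transcriptions from this checkpoint should be treated as experimental.
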
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP

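The sketch below shows how this configuration could be expressed as `transformers.TrainingArguments`. It is an approximation assembled from the list above, not the original training script: the output directory and the evaluation/logging cadence are assumptions (the results table suggests evaluation every 100 steps), and the Adam betas/epsilon in the list are the optimizer defaults.

```python
# Hedged TrainingArguments sketch matching the hyperparameters listed above.
# output_dir and the eval/logging cadence are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_asr_mind_model",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 8 x 2 = total train batch size 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    fp16=True,                       # "Native AMP" mixed precision
    eval_strategy="steps",           # assumed; the table logs eval every 100 steps
    eval_steps=100,
    logging_steps=100,
)
```
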
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 16.4326       | 0.3436 | 100  | 26.2073         | 1.0    |
| 16.1132       | 0.6873 | 200  | 13.6308         | 1.0    |
| 3.3225        | 1.0309 | 300  | 10.8892         | 1.0    |
| 3.6426        | 1.3746 | 400  | 6.1344          | 0.9985 |
| 2.6996        | 1.7182 | 500  | 4.0074          | 0.9930 |
| 2.2841        | 2.0619 | 600  | 4.9802          | 0.9742 |
| 2.2091        | 2.4055 | 700  | 7.2261          | 0.9517 |
| 2.089         | 2.7491 | 800  | 8.6914          | 0.9317 |

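For context, the Wer column is the word error rate on the evaluation set, conventionally computed with the `evaluate` library in Trainer-based ASR recipes. The snippet below is an illustration with invented strings, not output from this model.

```python
# Illustrative WER computation; the transcripts below are made-up examples.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["привет мир"]       # hypothetical model transcription
references = ["привет весь мир"]   # hypothetical reference transcript
print(wer_metric.compute(predictions=predictions, references=references))
# -> 0.333...: one deleted word against a three-word reference
```
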
### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e33a52d675809613e2ff49322c4979c7f7675fae6a6aa97350a3993563462a73
+oid sha256:91aaac1786ba2801d30ed8e4092d1c733e66c537bf79170fd8a1d3ba1fe10c1c
 size 1261967380
runs/Oct12_06-30-49_d5b9ba6f9988/events.out.tfevents.1728714650.d5b9ba6f9988.11850.4 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5530597fd01d1a01914e3a9e8f193089edaa3dc3d9ac410ac1dff99716ce5e77
-size 45750
+oid sha256:442c84b4d394ae55095442e17779cc9a64d8e7651f75d2e51bfa58c900c68d18
+size 46104