Model save
README.md ADDED
@@ -0,0 +1,100 @@
---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-2.0-igbo_naijavoices_500h
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# w2v-bert-2.0-igbo_naijavoices_500h

This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an Igbo subset of the NaijaVoices dataset (≈500 hours, per the model name).
It achieves the following results on the evaluation set:
- Loss: 0.2218
- Wer: 0.2455
- Cer: 0.1105

## Model description

More information needed

## Intended uses & limitations

More information needed
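
As a starting point, the snippet below is a minimal inference sketch, not the card author's documented usage: it assumes the checkpoint is published on the Hub together with its processor under a repo id like the placeholder shown, that it uses the standard Wav2Vec2-BERT CTC setup, and that the input audio is 16 kHz mono.

```python
# Minimal ASR inference sketch (assumptions: standard Wav2Vec2-BERT CTC head,
# processor saved alongside the model; the repo id below is a placeholder).
import torch
import librosa
from transformers import AutoProcessor, Wav2Vec2BertForCTC

repo_id = "<user>/w2v-bert-2.0-igbo_naijavoices_500h"  # placeholder repo id

processor = AutoProcessor.from_pretrained(repo_id)
model = Wav2Vec2BertForCTC.from_pretrained(repo_id)
model.eval()

# The base model expects 16 kHz mono input.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```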

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a code sketch mirroring them follows the list):
- learning_rate: 3e-05
- train_batch_size: 160
- eval_batch_size: 160
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 320
- total_eval_batch_size: 320
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50.0
- mixed_precision_training: Native AMP
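
A `TrainingArguments` sketch matching these values is shown below; it only mirrors the hyperparameters listed above, while the output directory is a placeholder and the dataset, model, and `Trainer` wiring are omitted. With 2 GPUs, a per-device batch size of 160 yields the listed total batch size of 320.

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
# The output_dir is a placeholder; data and model setup are not shown here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="w2v-bert-2.0-igbo_naijavoices_500h",
    learning_rate=3e-5,
    per_device_train_batch_size=160,   # x2 GPUs -> total train batch size 320
    per_device_eval_batch_size=160,    # x2 GPUs -> total eval batch size 320
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    fp16=True,                         # "Native AMP" mixed precision
)
```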

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|:------:|
| 0.923 | 0.6901 | 1000 | 0.6313 | 0.5401 | 0.2069 |
| 0.4648 | 1.3803 | 2000 | 0.4222 | 0.3868 | 0.1549 |
| 0.3635 | 2.0704 | 3000 | 0.3582 | 0.3499 | 0.1434 |
| 0.3388 | 2.7605 | 4000 | 0.3368 | 0.3322 | 0.1383 |
| 0.2967 | 3.4507 | 5000 | 0.3141 | 0.3191 | 0.1366 |
| 0.2738 | 4.1408 | 6000 | 0.3041 | 0.3151 | 0.1328 |
| 0.3146 | 4.8309 | 7000 | 0.2972 | 0.3091 | 0.1297 |
| 0.2612 | 5.5210 | 8000 | 0.2856 | 0.2998 | 0.1312 |
| 0.282 | 6.2112 | 9000 | 0.2873 | 0.3001 | 0.1300 |
| 0.2989 | 6.9013 | 10000 | 0.2864 | 0.2959 | 0.1309 |
| 0.2633 | 7.5914 | 11000 | 0.2660 | 0.2883 | 0.1242 |
| 0.2471 | 8.2816 | 12000 | 0.2674 | 0.2905 | 0.1263 |
| 0.2746 | 8.9717 | 13000 | 0.2671 | 0.2822 | 0.1224 |
| 0.2754 | 9.6618 | 14000 | 0.2617 | 0.2833 | 0.1234 |
| 0.2881 | 10.3520 | 15000 | 0.2596 | 0.2833 | 0.1229 |
| 0.2717 | 11.0421 | 16000 | 0.2524 | 0.2760 | 0.1221 |
| 0.2204 | 11.7322 | 17000 | 0.2513 | 0.2720 | 0.1197 |
| 0.2429 | 12.4224 | 18000 | 0.2530 | 0.2738 | 0.1203 |
| 0.2429 | 13.1125 | 19000 | 0.2511 | 0.2745 | 0.1194 |
| 0.2449 | 13.8026 | 20000 | 0.2555 | 0.2748 | 0.1209 |
| 0.2053 | 14.4928 | 21000 | 0.2464 | 0.2719 | 0.1181 |
| 0.222 | 15.1829 | 22000 | 0.2428 | 0.2659 | 0.1195 |
| 0.1874 | 15.8730 | 23000 | 0.2418 | 0.2609 | 0.1156 |
| 0.1924 | 16.5631 | 24000 | 0.2363 | 0.2675 | 0.1176 |
| 0.1855 | 17.2533 | 25000 | 0.2336 | 0.2629 | 0.1151 |
| 0.2172 | 17.9434 | 26000 | 0.2302 | 0.2633 | 0.1153 |
| 0.2074 | 18.6335 | 27000 | 0.2345 | 0.2588 | 0.1161 |
| 0.1589 | 19.3237 | 28000 | 0.2204 | 0.2474 | 0.1137 |
| 0.1382 | 20.0138 | 29000 | 0.2279 | 0.2531 | 0.1127 |
| 0.1525 | 20.7039 | 30000 | 0.2284 | 0.2550 | 0.1128 |
| 0.149 | 21.3941 | 31000 | 0.2221 | 0.2509 | 0.1122 |
| 0.1193 | 22.0842 | 32000 | 0.2229 | 0.2482 | 0.1113 |
| 0.1245 | 22.7743 | 33000 | 0.2218 | 0.2455 | 0.1105 |
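
The Wer and Cer columns are word and character error rates. A minimal sketch of computing them with the `evaluate` library follows; the prediction and reference strings are purely illustrative placeholders, not outputs of this model.

```python
# Illustrative only: compute WER/CER as reported in the table above.
# The prediction/reference strings are dummy placeholders.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["kedu ka i mere"]   # hypothetical model transcript
references = ["kedu ka ị mere"]    # hypothetical ground-truth transcript

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```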

### Framework versions

- Transformers 4.48.1
- Pytorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.21.2

model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:7509586c5216f95072ea662d0070b30887bf5dbac55a82996f8927e20ee6fef8
 size 2423097460

runs/Jul18_00-07-43_gf-asr-training-2a100/events.out.tfevents.1752797745.gf-asr-training-2a100.244221.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:e0c2d3c0fdad78baf946bf2b23d8948cdc1c15214512c14a8c1164e3c0bc46e1
+size 7047925