Uzbek Speech-to-Text, version 4
This model was fine-tuned from facebook/wav2vec2-base on the Uzbek subset of the MOZILLA-FOUNDATION/COMMON_VOICE_10_0 dataset. It was trained for 100 epochs and achieved the following results (a hedged usage sketch follows this list):
- Loss: 0.7755
- Word error rate (WER): 0.3976
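The card does not state the model's repository id, so the snippet below is only a minimal sketch of how a fine-tuned wav2vec2 CTC checkpoint like this one is typically loaded and run with the transformers library; `username/uzbek-stt-v4` and `example.mp3` are placeholders.

```python
# Minimal inference sketch for a wav2vec2 CTC model fine-tuned for Uzbek ASR.
# The model id below is a placeholder, not the actual checkpoint name.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "username/uzbek-stt-v4"  # placeholder: replace with the real checkpoint
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Common Voice clips ship as MP3; resample to the 16 kHz rate wav2vec2 expects.
speech, _ = librosa.load("example.mp3", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse repeats.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```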
Training hyperparameters
The following hyperparameters were used during training (a hedged configuration sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
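For readers reproducing this setup, the sketch below maps the hyperparameters above onto a transformers `TrainingArguments` object. The learning rate, batch sizes, seed, scheduler, warmup, epochs, and AMP flag come from the list; the output directory is an assumed placeholder, and the Adam betas/epsilon match the library defaults already listed.

```python
# Configuration sketch mirroring the listed hyperparameters; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-uzbek-v4",   # placeholder output directory
    learning_rate=3e-5,
    per_device_train_batch_size=8,      # 2 GPUs -> total train batch size 16
    per_device_eval_batch_size=8,       # 2 GPUs -> total eval batch size 16
    seed=42,
    num_train_epochs=100,
    lr_scheduler_type="linear",
    warmup_steps=500,
    fp16=True,                          # native AMP mixed-precision training
)
```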
Training results
| Training Loss | Epoch | Step   | Validation Loss | WER    |
|---------------|-------|--------|-----------------|--------|
| 0.1532        | 13.59 | 20000  | 0.6070          | 0.4822 |
| 0.099         | 27.17 | 40000  | 0.6438          | 0.4461 |
| 0.0769        | 40.76 | 60000  | 0.6889          | 0.4343 |
| 0.0622        | 54.35 | 80000  | 0.7638          | 0.4181 |
| 0.0561        | 67.93 | 100000 | 0.7523          | 0.4081 |
| 0.0454        | 81.52 | 120000 | 0.7569          | 0.4006 |
| 0.0378        | 95.11 | 140000 | 0.7702          | 0.3989 |
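The WER column above is the word error rate on the validation set. As a small illustration of how that metric is computed, the sketch below uses the Hugging Face `evaluate` library; the reference and prediction strings are invented examples, not data from this card.

```python
# WER illustration with the `evaluate` library; the strings below are made-up examples.
import evaluate

wer_metric = evaluate.load("wer")
references = ["salom dunyo", "bu bir sinov"]
predictions = ["salom dunyo", "bu bir sinof"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # fraction of word-level errors (the card's final value is 0.3976)
```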