---
language:
  - ug
license: apache-2.0
tags:
  - automatic-speech-recognition
  - mozilla-foundation/common_voice_8_0
  - generated_from_trainer
datasets:
  - common_voice
model-index:
  - name: xls-r-uyghur-cv8
    results: []
---

# xls-r-uyghur-cv8

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UG dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

- Loss: 0.2240
- Wer: 0.3693
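
The snippet below is a minimal transcription sketch, not part of the original card; the Hub repo id `lucio/xls-r-uyghur-cv8`, the audio file name, and the 16 kHz mono input requirement are assumptions based on the usual wav2vec2 fine-tuning setup.

```python
# Hedged inference sketch; the repo id and audio path below are assumptions,
# not values taken from this card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="lucio/xls-r-uyghur-cv8",  # assumed Hub repo id for this model
)

# wav2vec2 checkpoints typically expect 16 kHz mono audio.
result = asr("example_uyghur_clip.wav")
print(result["text"])
```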

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
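
As referenced above, this is a hedged sketch of how these values map onto the standard `transformers` `TrainingArguments`; the actual training script is not included in the card, and `output_dir` is a placeholder. The listed Adam betas and epsilon are the Trainer defaults.

```python
# Sketch only: argument names follow the standard Trainer API; the training
# script actually used for this card is not published here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xls-r-uyghur-cv8",   # placeholder output directory
    learning_rate=4e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # 8 x 4 = total train batch size 32
    lr_scheduler_type="linear",
    warmup_steps=2000,
    num_train_epochs=100.0,
    fp16=True,                       # native AMP mixed precision
)
```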

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.1169        | 2.66  | 500   | 4.0146          | 1.0    |
| 3.2512        | 5.32  | 1000  | 3.2342          | 1.0    |
| 2.5435        | 7.97  | 1500  | 1.8155          | 1.0286 |
| 1.5575        | 10.64 | 2000  | 0.6346          | 0.7058 |
| 1.3979        | 13.3  | 2500  | 0.4885          | 0.6320 |
| 1.2874        | 15.95 | 3000  | 0.4271          | 0.6088 |
| 1.2383        | 18.61 | 3500  | 0.3889          | 0.5869 |
| 1.2054        | 21.28 | 4000  | 0.3609          | 0.5793 |
| 1.1866        | 23.93 | 4500  | 0.3450          | 0.5513 |
| 1.1332        | 26.59 | 5000  | 0.3214          | 0.5379 |
| 1.135         | 29.25 | 5500  | 0.3122          | 0.5384 |
| 1.0992        | 31.91 | 6000  | 0.2948          | 0.5078 |
| 1.0707        | 34.57 | 6500  | 0.2928          | 0.5128 |
| 1.0754        | 37.23 | 7000  | 0.2857          | 0.5017 |
| 1.0461        | 39.89 | 7500  | 0.2791          | 0.5099 |
| 1.0328        | 42.55 | 8000  | 0.2729          | 0.5120 |
| 1.0201        | 45.21 | 8500  | 0.2654          | 0.4720 |
| 1.0035        | 47.87 | 9000  | 0.2623          | 0.4659 |
| 1.0069        | 50.53 | 9500  | 0.2569          | 0.4593 |
| 0.9998        | 53.19 | 10000 | 0.2519          | 0.4405 |
| 0.9762        | 55.85 | 10500 | 0.2505          | 0.4588 |
| 0.9755        | 58.51 | 11000 | 0.2479          | 0.4564 |
| 0.9624        | 61.17 | 11500 | 0.2460          | 0.4298 |
| 0.9494        | 63.83 | 12000 | 0.2402          | 0.4182 |
| 0.948         | 66.49 | 12500 | 0.2412          | 0.4212 |
| 0.9312        | 69.15 | 13000 | 0.2352          | 0.3970 |
| 0.9172        | 71.81 | 13500 | 0.2357          | 0.3926 |
| 0.9101        | 74.47 | 14000 | 0.2305          | 0.3905 |
| 0.9177        | 77.13 | 14500 | 0.2307          | 0.3838 |
| 0.9083        | 79.78 | 15000 | 0.2313          | 0.3800 |
| 0.9068        | 82.45 | 15500 | 0.2275          | 0.3742 |
| 0.9087        | 85.11 | 16000 | 0.2283          | 0.3747 |
| 0.8838        | 87.76 | 16500 | 0.2286          | 0.3777 |
| 0.8868        | 90.42 | 17000 | 0.2269          | 0.3722 |
| 0.8895        | 93.08 | 17500 | 0.2246          | 0.3714 |
| 0.8926        | 95.74 | 18000 | 0.2241          | 0.3705 |
| 0.8856        | 98.4  | 18500 | 0.2242          | 0.3693 |
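
For reference, the reported WER (word error rate) is the number of word-level substitutions, insertions, and deletions divided by the number of reference words. Below is an illustrative sketch using the `jiwer` package; the package choice and the placeholder strings are assumptions, since the card does not say how the scores were computed.

```python
# Illustrative WER computation with jiwer; the strings are placeholders,
# not examples from the Common Voice Uyghur evaluation set.
from jiwer import wer

references = ["bu bir sinaq jumlisi"]  # ground-truth transcript (placeholder)
hypotheses = ["bu bir sinaq jumla"]    # model output (placeholder)

print(wer(references, hypotheses))     # 0.25: 1 error over 4 reference words
```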

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0