
wav2vec2-large-xls-r-300m-Arabic-phoneme-based-MDD

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m for Arabic phoneme-based mispronunciation detection and diagnosis (MDD). The training dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 0.7800
  • PER (phoneme error rate): 0.1135
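
Since the card does not yet include a usage example, the following is a minimal inference sketch. It assumes the checkpoint follows the standard Wav2Vec2 CTC layout (a processor and model hosted under nrshoudi/wav2vec2-large-xls-r-300m-Arabic-phoneme-based-MDD), that audio is resampled to the 16 kHz rate expected by XLS-R, and that the decoded tokens are phoneme labels; the audio file path is a placeholder.

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Hypothetical example: assumes the repo ships a processor/tokenizer
# alongside the CTC model weights.
model_id = "nrshoudi/wav2vec2-large-xls-r-300m-Arabic-phoneme-based-MDD"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load an utterance and resample to 16 kHz (placeholder path).
waveform, sample_rate = torchaudio.load("utterance.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding; the output is the predicted phoneme sequence.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```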

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.0005
  • train_batch_size: 8
  • eval_batch_size: 6
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 250
  • num_epochs: 40.0
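
For readers who want to reproduce this setup, the list above maps onto the following TrainingArguments sketch. This is an assumption-laden reconstruction: the card does not state that the standard Hugging Face Trainer was used, and output_dir, evaluation strategy, and mixed precision are placeholders, not values from the card. The Adam betas and epsilon listed above are the Trainer defaults.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-Arabic-phoneme-based-MDD",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=6,
    gradient_accumulation_steps=4,   # 8 * 4 = 32 effective train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=250,
    num_train_epochs=40.0,
    evaluation_strategy="epoch",     # assumption: matches the per-epoch results table
    fp16=True,                       # assumption: not stated in the card
)
```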

Training results

| Training Loss | Epoch | Step  | Validation Loss | PER    |
|---------------|-------|-------|-----------------|--------|
| 4.5228        | 1.0   | 546   | 2.1723          | 0.6314 |
| 1.2389        | 2.0   | 1093  | 0.9571          | 0.2597 |
| 0.7931        | 3.0   | 1640  | 0.8440          | 0.2246 |
| 0.6438        | 4.0   | 2187  | 0.7831          | 0.2045 |
| 0.5584        | 5.0   | 2733  | 0.7660          | 0.1922 |
| 0.5062        | 6.0   | 3280  | 0.7193          | 0.1724 |
| 0.4596        | 7.0   | 3827  | 0.7373          | 0.1720 |
| 0.4227        | 8.0   | 4374  | 0.6829          | 0.1629 |
| 0.3832        | 9.0   | 4920  | 0.7181          | 0.1608 |
| 0.3617        | 10.0  | 5467  | 0.7043          | 0.1591 |
| 0.3495        | 11.0  | 6014  | 0.7295          | 0.1566 |
| 0.3282        | 12.0  | 6561  | 0.6897          | 0.1508 |
| 0.3086        | 13.0  | 7107  | 0.7353          | 0.1554 |
| 0.2911        | 14.0  | 7654  | 0.7144          | 0.1477 |
| 0.2801        | 15.0  | 8201  | 0.6988          | 0.1442 |
| 0.2658        | 16.0  | 8748  | 0.7061          | 0.1475 |
| 0.252         | 17.0  | 9294  | 0.7090          | 0.1403 |
| 0.2487        | 18.0  | 9841  | 0.7032          | 0.1363 |
| 0.2363        | 19.0  | 10388 | 0.7087          | 0.1395 |
| 0.222         | 20.0  | 10935 | 0.6982          | 0.1345 |
| 0.2152        | 21.0  | 11481 | 0.6964          | 0.1361 |
| 0.2063        | 22.0  | 12028 | 0.7246          | 0.1341 |
| 0.1958        | 23.0  | 12575 | 0.7331          | 0.1347 |
| 0.1866        | 24.0  | 13122 | 0.7493          | 0.1326 |
| 0.1786        | 25.0  | 13668 | 0.7536          | 0.1381 |
| 0.1751        | 26.0  | 14215 | 0.7345          | 0.1308 |
| 0.169         | 27.0  | 14762 | 0.7274          | 0.1251 |
| 0.1616        | 28.0  | 15309 | 0.7590          | 0.1293 |
| 0.1589        | 29.0  | 15855 | 0.7330          | 0.1243 |
| 0.1495        | 30.0  | 16402 | 0.7517          | 0.1228 |
| 0.1415        | 31.0  | 16949 | 0.7454          | 0.1208 |
| 0.1376        | 32.0  | 17496 | 0.7827          | 0.1254 |
| 0.1337        | 33.0  | 18042 | 0.7523          | 0.1221 |
| 0.128         | 34.0  | 18589 | 0.7752          | 0.1208 |
| 0.1262        | 35.0  | 19136 | 0.7716          | 0.1174 |
| 0.1196        | 36.0  | 19683 | 0.7620          | 0.1164 |
| 0.1161        | 37.0  | 20229 | 0.7792          | 0.1164 |
| 0.1117        | 38.0  | 20776 | 0.7800          | 0.1140 |
| 0.1103        | 39.0  | 21323 | 0.7716          | 0.1134 |
| 0.1074        | 39.95 | 21840 | 0.7800          | 0.1135 |
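
The PER column is the phoneme error rate. As a point of reference, here is a minimal sketch of how such a score can be computed, assuming predictions and references are space-separated phoneme strings and using the evaluate library's WER metric as an edit-distance measure over phoneme tokens (the card does not state which implementation was actually used):

```python
import evaluate

# WER computed over space-separated phoneme tokens is a phoneme error rate.
# The sequences below are hypothetical placeholders, not data from this model.
per_metric = evaluate.load("wer")

references = ["b i s m i l l a a h"]
predictions = ["b i s m i l a a h"]

per = per_metric.compute(predictions=predictions, references=references)
print(f"PER: {per:.4f}")
```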

Framework versions

  • Transformers 4.32.0
  • Pytorch 2.0.1+cu118
  • Datasets 1.18.3
  • Tokenizers 0.13.3