It was trained on nearly 60,000 hours of speech segments and covers 21 languages.

## ASR fine-tuning
The SpeechBrain toolkit (Ravanelli et al., 2021) is used to fine-tune the model.
Fine-tuning is done for each language using the FLEURS dataset [2].
The pretrained model (SSA-HuBERT-base-60k-V2) is used as a speech encoder and is fully fine-tuned, with two 1024-unit linear layers and a softmax output layer on top.
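The head described above can be sketched in plain PyTorch. This is a minimal illustration, not the released training code: the encoder hidden size (768, as in HuBERT-base), the vocabulary size, and the ReLU activations between the linear layers are assumptions; only "two 1024-unit linear layers and a softmax output" comes from the description above.

```python
import torch
import torch.nn as nn


class ASRHead(nn.Module):
    """Hypothetical sketch of the fine-tuning head: frame-level encoder
    states -> two 1024-unit linear layers -> softmax over output tokens.
    The encoder itself (SSA-HuBERT-base-60k-V2) is not reproduced here;
    its hidden size is assumed to be 768, as in HuBERT-base."""

    def __init__(self, encoder_dim: int = 768, num_tokens: int = 32):
        super().__init__()
        self.fc1 = nn.Linear(encoder_dim, 1024)
        self.fc2 = nn.Linear(1024, 1024)
        self.out = nn.Linear(1024, num_tokens)

    def forward(self, encoder_states: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, time, encoder_dim) frame-level features
        x = torch.relu(self.fc1(encoder_states))
        x = torch.relu(self.fc2(x))
        # log-softmax per frame, e.g. for a CTC-style objective
        return torch.log_softmax(self.out(x), dim=-1)


# Stand-in for encoder output: batch of 2, 50 frames, 768-dim features
head = ASRHead()
feats = torch.randn(2, 50, 768)
log_probs = head(feats)
print(log_probs.shape)  # (batch, time, num_tokens)
```

During full fine-tuning, both the encoder and this head would be updated; the sketch only shows the shape of the classification stack.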
## License
This model is released under the CC BY-NC 4.0 license.