Update README.md
README.md
CHANGED
@@ -40,7 +40,6 @@ The model transcribes text in Arabic without diacritical marks and supports peri
 This model is ready for commercial and non-commercial use.
 
 ## Model Architecture
-
 FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling.
 The model is trained in a multitask setup with a hybrid Transducer (RNNT) decoder and Connectionist Temporal Classification (CTC) loss.
 You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer).
@@ -53,7 +52,6 @@ Model utilizes a [Google Sentencepiece Tokenizer](https://github.com/google/sent
 - **Other Properties Related to Input:** 16000 Hz Mono-channel Audio, Pre-Processing Not Needed
 
 ### Output
-
 This model provides transcribed speech as a string for a given audio sample.
 - **Output Type:** Text
 - **Output Format:** String
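The input property above specifies 16000 Hz mono audio. A minimal sketch of checking that a WAV file matches this format before transcription, using only Python's standard-library `wave` module (the synthesized tone file is just a stand-in for real speech audio):

```python
import math
import struct
import wave

def check_asr_input(path, rate=16000, channels=1):
    """Return True if the WAV file matches the expected 16 kHz mono format."""
    with wave.open(path, "rb") as w:
        return w.getframerate() == rate and w.getnchannels() == channels

# Synthesize 0.1 s of 16 kHz mono 16-bit PCM as a stand-in for real audio.
with wave.open("sample_audio_1.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(16000)
    frames = b"".join(
        struct.pack("<h", int(1000 * math.sin(2 * math.pi * 440 * t / 16000)))
        for t in range(1600)
    )
    w.writeframes(frames)

print(check_asr_input("sample_audio_1.wav"))  # True
```

Audio at other sample rates or with multiple channels would need resampling/downmixing first, which is outside this sketch.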
@@ -61,8 +59,8 @@ This model provides transcribed speech as a string for a given audio sample.
 - **Other Properties Related to Output:** May Need Inverse Text Normalization; Does Not Handle Special Characters; Outputs text in Arabic without diacritical marks
 
 ## Limitations
-The model is non-streaming and outputs the speech as a string without diacritical marks.
-Not recommended for word-for-word transcription and punctuation as accuracy varies based on the characteristics of input audio (unrecognized word, accent, noise, speech type, and context of speech).
+- The model is non-streaming and outputs the speech as a string without diacritical marks.
+- Not recommended for word-for-word transcription and punctuation, as accuracy varies with the characteristics of the input audio (unrecognized words, accent, noise, speech type, and context of speech).
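The output properties above note that transcripts may need Inverse Text Normalization (ITN), i.e. converting spoken-form output into written form. A toy illustration of the idea (the English rule table is invented for the example; production ITN systems cover numbers, dates, currency, and much more, typically with weighted grammars):

```python
# Toy inverse text normalization: spoken-form number words -> digits.
# The rule table is invented for illustration only.
SPOKEN_TO_DIGIT = {
    "zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
    "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9",
}

def toy_itn(text):
    """Replace each spoken digit word with its written form, word by word."""
    return " ".join(SPOKEN_TO_DIGIT.get(w, w) for w in text.split())

print(toy_itn("call me at five five five"))  # call me at 5 5 5
```

Real ITN for this model's Arabic output would need Arabic-specific grammars; this sketch only shows what the post-processing step is for.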
67 |
## π How to download and use the model
|
68 |
#### π§ Installations
|
@@ -159,7 +157,6 @@ asr_model.transcribe(['sample_audio_1.wav', 'sample_audio_2.wav', 'sample_audio_
 - Linux
 
 ## Explainability
-
 - High-Level Application and Domain: Automatic Speech Recognition
 - Describe how this model works: The model transcribes audio input into text for the Arabic language
 - Verified to have met prescribed quality standards: Yes
@@ -172,7 +169,6 @@ asr_model.transcribe(['sample_audio_1.wav', 'sample_audio_2.wav', 'sample_audio_
 
 ## Safety & Security
 ### Use Case Restrictions:
-
 - Non-streaming ASR model
 - Model outputs text in Arabic without diacritical marks
 - Output text requires Inverse Text Normalization
@@ -180,11 +176,9 @@ asr_model.transcribe(['sample_audio_1.wav', 'sample_audio_2.wav', 'sample_audio_
 - The model is further fine-tuned for the Egyptian dialect
 
 ## License
-
 License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
 
 ## References
-
 [1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)
 
 [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)