Kunal Dhawan committed on
Commit
fcdd0aa
1 Parent(s): fe8a3a6

added model card for canary-1b-flash


Signed-off-by: Kunal Dhawan <[email protected]>

Files changed (2)
  1. .gitattributes +1 -0
  2. README.md +501 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ canary-1b-flash.nemo filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,504 @@
---
license: cc-by-4.0
language:
- en
- de
- es
- fr
library_name: nemo
datasets:
- librispeech_asr
- fisher_corpus
- Switchboard-1
- WSJ-0
- WSJ-1
- National-Singapore-Corpus-Part-1
- National-Singapore-Corpus-Part-6
- vctk
- voxpopuli
- europarl
- multilingual_librispeech
- mozilla-foundation/common_voice_8_0
- MLCommons/peoples_speech
thumbnail: null
tags:
- automatic-speech-recognition
- automatic-speech-translation
- speech
- audio
- Transformer
- FastConformer
- Conformer
- pytorch
- NeMo
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: canary-1b-flash
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 2.87
  - task:
      type: Automatic Speech Recognition
      name: automatic-speech-recognition
    dataset:
      name: SPGI Speech
      type: kensho/spgispeech
      config: test
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 1.95
  - task:
      type: Automatic Speech Translation
      name: automatic-speech-translation
    dataset:
      name: FLEURS
      type: google/fleurs
      config: en_us
      split: test
      args:
        language: en-de
    metrics:
    - name: Test BLEU (En->De)
      type: bleu
      value: 32.27
  - task:
      type: Automatic Speech Translation
      name: automatic-speech-translation
    dataset:
      name: FLEURS
      type: google/fleurs
      config: en_us
      split: test
      args:
        language: en-es
    metrics:
    - name: Test BLEU (En->Es)
      type: bleu
      value: 22.6
  - task:
      type: Automatic Speech Translation
      name: automatic-speech-translation
    dataset:
      name: FLEURS
      type: google/fleurs
      config: en_us
      split: test
      args:
        language: en-fr
    metrics:
    - name: Test BLEU (En->Fr)
      type: bleu
      value: 41.22
  - task:
      type: Automatic Speech Translation
      name: automatic-speech-translation
    dataset:
      name: FLEURS
      type: google/fleurs
      config: de_de
      split: test
      args:
        language: de-en
    metrics:
    - name: Test BLEU (De->En)
      type: bleu
      value: 35.5
  - task:
      type: Automatic Speech Translation
      name: automatic-speech-translation
    dataset:
      name: FLEURS
      type: google/fleurs
      config: es_419
      split: test
      args:
        language: es-en
    metrics:
    - name: Test BLEU (Es->En)
      type: bleu
      value: 23.32
  - task:
      type: Automatic Speech Translation
      name: automatic-speech-translation
    dataset:
      name: FLEURS
      type: google/fleurs
      config: fr_fr
      split: test
      args:
        language: fr-en
    metrics:
    - name: Test BLEU (Fr->En)
      type: bleu
      value: 33.42
  - task:
      type: Automatic Speech Translation
      name: automatic-speech-translation
    dataset:
      name: COVOST
      type: covost2
      config: de_de
      split: test
      args:
        language: de-en
    metrics:
    - name: Test BLEU (De->En)
      type: bleu
      value: 39.33
  - task:
      type: Automatic Speech Translation
      name: automatic-speech-translation
    dataset:
      name: COVOST
      type: covost2
      config: es_419
      split: test
      args:
        language: es-en
    metrics:
    - name: Test BLEU (Es->En)
      type: bleu
      value: 41.86
  - task:
      type: Automatic Speech Translation
      name: automatic-speech-translation
    dataset:
      name: COVOST
      type: covost2
      config: fr_fr
      split: test
      args:
        language: fr-en
    metrics:
    - name: Test BLEU (Fr->En)
      type: bleu
      value: 41.43

metrics:
- wer
- bleu
pipeline_tag: automatic-speech-recognition
---

# Canary 1B Flash

<style>
img {
 display: inline;
}
</style>

## Description:
NVIDIA NeMo Canary [1] is a family of multilingual multi-tasking models that achieves state-of-the-art performance on multiple speech benchmarks. With 883 million parameters and running at over 900 RTFx, canary-1b-flash supports automatic speech-to-text recognition (ASR) in 4 languages (English, German, French, Spanish) and translation from English to German/French/Spanish and from German/French/Spanish to English, with or without punctuation and capitalization (PnC). In addition, canary-1b-flash provides word-level timestamps for English, German, French, and Spanish. This model is ready for commercial use.

### License/Terms of Use:
canary-1b-flash is released under the CC-BY-4.0 license. By using this model, you are agreeing to the [terms and conditions](https://choosealicense.com/licenses/cc-by-4.0/) of the license. <br>

## References:
[1] [Less is More: Accurate Speech Recognition & Translation without Web-Scale Data](https://www.isca-archive.org/interspeech_2024/puvvada24_interspeech.pdf)

[2] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10389701)

[3] [Attention Is All You Need](https://arxiv.org/abs/1706.03762)

[4] [Unified Model for Code-Switching Speech Recognition and Language Identification Based on Concatenated Tokenizer](https://aclanthology.org/2023.calcs-1.7.pdf)

[5] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)

[6] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)

[7] [EMMeTT: Efficient Multimodal Machine Translation Training](https://arxiv.org/abs/2409.13523)

[8] [Towards Measuring Fairness in AI: the Casual Conversations Dataset](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9634168)

## Model Architecture:
Canary is an encoder-decoder model with a FastConformer [2] encoder and a Transformer decoder [3]. Given the audio features extracted by the encoder, task tokens such as \<target language\>, \<task\>, \<toggle timestamps\> and \<toggle PnC\> are fed into the Transformer decoder to trigger the text generation process. Canary uses a concatenated tokenizer [4] built from individual SentencePiece [5] tokenizers for each language, which makes it easy to scale up to more languages. The canary-1b-flash model has 32 encoder layers and 4 decoder layers, for a total of 883M parameters.

## NVIDIA NeMo

To train, fine-tune or transcribe with canary-1b-flash, you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo).

## How to Use this Model

The model is available for use in the NeMo toolkit [6], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

### Loading the Model

```python
from nemo.collections.asr.models import EncDecMultiTaskModel

# load model
canary_model = EncDecMultiTaskModel.from_pretrained('nvidia/canary-1b-flash')

# update decoding params
decode_cfg = canary_model.cfg.decoding
decode_cfg.beam.beam_size = 1
canary_model.change_decoding_strategy(decode_cfg)
```

## Input:
**Input Type(s):** Audio <br>
**Input Format(s):** .wav or .flac files <br>
**Input Parameter(s):** 1D <br>
**Other Properties Related to Input:** 16000 Hz Mono-channel Audio, Pre-Processing Not Needed <br>
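
If a recording is not already 16 kHz mono, it can be converted before transcription. The snippet below is a minimal sketch and not part of the official workflow; it assumes the `librosa` and `soundfile` packages are installed, and the file names are placeholders.

```python
# Hypothetical pre-conversion step: resample an arbitrary recording to 16 kHz mono
# before passing it to canary-1b-flash (librosa/soundfile assumed available).
import librosa
import soundfile as sf

audio, sr = librosa.load("recording_44khz_stereo.wav", sr=16000, mono=True)  # decode, resample, downmix
sf.write("path1.wav", audio, 16000)  # 16 kHz mono wav, ready for transcription
```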

Input to canary-1b-flash can be either a list of paths to audio files or a jsonl manifest file.

If the input is a list of paths, canary-1b-flash assumes the audio is English and transcribes it, i.e., the default behavior of canary-1b-flash is English ASR.
```python
predicted_text = canary_model.transcribe(
    paths2audio_files=['path1.wav', 'path2.wav'],
    batch_size=16,  # batch size to run the inference with
)
```

To use canary-1b-flash for transcribing other supported languages, performing speech-to-text translation, or producing word-level timestamps, provide the input as a jsonl manifest file, where each line in the file is a dictionary containing the following fields:

```yaml
# Example of a line in input_manifest.json
{
    "audio_filepath": "/path/to/audio.wav",  # path to the audio file
    "duration": 1000,  # duration of the audio, can be set to `None` if using NeMo main branch
    "taskname": "asr",  # use "s2t_translation" for speech-to-text translation with r1.23, or "ast" if using the NeMo main branch
    "source_lang": "en",  # language of the audio input, set `source_lang`==`target_lang` for ASR, choices=['en','de','es','fr']
    "target_lang": "en",  # language of the text output, choices=['en','de','es','fr']
    "pnc": "yes",  # whether to have PnC output, choices=['yes', 'no']
    "timestamp": "yes",  # whether to output word-level timestamps, choices=['yes', 'no']
}
```

and then use:
```python
predicted_text = canary_model.transcribe(
    "<path to input manifest file>",
    batch_size=16,  # batch size to run the inference with
)
```
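
Such a manifest can also be written programmatically. The following is a minimal sketch; the audio path and task settings are illustrative placeholders, and `canary_model` is the model loaded above.

```python
# Hypothetical helper: write a jsonl manifest for German->English speech translation
# with word-level timestamps (the audio path below is a placeholder).
import json

entries = [
    {
        "audio_filepath": "/path/to/audio_de.wav",
        "duration": None,      # `None` may require the NeMo main branch; otherwise set the true duration
        "taskname": "ast",     # "s2t_translation" on r1.23
        "source_lang": "de",
        "target_lang": "en",
        "pnc": "yes",
        "timestamp": "yes",
    },
]

with open("input_manifest.json", "w") as f:
    for entry in entries:
        f.write(json.dumps(entry) + "\n")

predicted_text = canary_model.transcribe("input_manifest.json", batch_size=16)
```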

## Output:
**Output Type(s):** Text <br>
**Output Format:** Text output as a string (with or without word-level timestamps, depending on the task chosen for decoding) <br>
**Output Parameters:** 1-Dimensional text string <br>
**Other Properties Related to Output:** May Need Inverse Text Normalization; Does Not Handle Special Characters <br>


## Software Integration:
**Runtime Engine(s):**
* NeMo - 2.1.0 or higher <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere <br>
* NVIDIA Blackwell <br>
* NVIDIA Jetson <br>
* NVIDIA Hopper <br>
* NVIDIA Lovelace <br>
* NVIDIA Pascal <br>
* NVIDIA Turing <br>
* NVIDIA Volta <br>

**Supported Operating System(s):** <br>
* Linux <br>
* Linux 4 Tegra <br>
* Windows <br>

## Model Version(s):
canary-1b-flash <br>


# Training and Evaluation Datasets:

## Training Dataset:

The canary-1b-flash model is trained on a total of 85K hrs of speech data. It consists of 31K hrs of public data, 20K hrs collected by [Suno](https://suno.ai/), and 34K hrs of in-house data.
The datasets below include conversations, videos from the web and audiobook recordings.

**Data Collection Method:**
* Human <br>

**Labeling Method:**
* Hybrid: Human, Automated <br>

The constituents of public data are as follows.

#### English (25.5k hours)
- Librispeech 960 hours
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual Librispeech (MLS EN) - 2,000 hour subset
- Mozilla Common Voice (v7.0)
- People's Speech - 12,000 hour subset
- Mozilla Common Voice (v11.0) - 1,474 hour subset

#### German (2.5k hours)
- Mozilla Common Voice (v12.0) - 800 hour subset
- Multilingual Librispeech (MLS DE) - 1,500 hour subset
- VoxPopuli (DE) - 200 hour subset

#### Spanish (1.4k hours)
- Mozilla Common Voice (v12.0) - 395 hour subset
- Multilingual Librispeech (MLS ES) - 780 hour subset
- VoxPopuli (ES) - 108 hour subset
- Fisher - 141 hour subset

#### French (1.8k hours)
- Mozilla Common Voice (v12.0) - 708 hour subset
- Multilingual Librispeech (MLS FR) - 926 hour subset
- VoxPopuli (FR) - 165 hour subset

## Evaluation Dataset:

**Data Collection Method:** <br>
* Human <br>

**Labeling Method:** <br>
* Human <br>

Automatic Speech Recognition:
* [HuggingFace OpenASR Leaderboard evaluation sets](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard)
* [MLS](https://huggingface.co/datasets/facebook/multilingual_librispeech)

Automatic Speech Translation:
* [FLEURS](https://huggingface.co/datasets/google/fleurs)
* [COVOST-v2](https://github.com/facebookresearch/covost)
* [mExpresso](https://huggingface.co/facebook/seamless-expressive#mexpresso-multilingual-expresso)

Timestamp Prediction:
* [Librispeech](https://www.openslr.org/12)

Hallucination Robustness:
* [MUSAN](https://www.openslr.org/17/) 48 hrs eval set

Noise Robustness:
* [Librispeech](https://www.openslr.org/12)

Model Fairness:
* [Casual Conversations Dataset](https://arxiv.org/pdf/2104.02821)

## Training

canary-1b-flash is trained using the NVIDIA NeMo toolkit [6] for a total of 200K steps with 2D bucketing [7] and optimal batch sizes set using OOMptimizer [7]. The model is trained on 128 NVIDIA A100 80GB GPUs.
The model can be trained using this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/speech_multitask/speech_to_text_aed.py) and [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/speech_multitask/fast-conformer_aed.yaml).

The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).

## Inference:
**Engine:** NVIDIA NeMo <br>
**Test Hardware:** <br>
* A6000 <br>
* A100 <br>
* V100 <br>

## Performance

In both ASR and AST experiments, predictions were generated using beam search with width 5 and length penalty 1.0.
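
For reference, this decoding setup can be reproduced with the same `change_decoding_strategy` call shown in the loading example above; `len_pen` is assumed here to be the length-penalty field of the beam config.

```python
# Sketch of the evaluation decoding setup (beam width 5, length penalty 1.0).
# `len_pen` is assumed to be the length-penalty field of the beam config.
decode_cfg = canary_model.cfg.decoding
decode_cfg.beam.beam_size = 5
decode_cfg.beam.len_pen = 1.0
canary_model.change_decoding_strategy(decode_cfg)
```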

### ASR Performance (w/o PnC)

ASR performance is measured with word error rate (WER); both the ground-truth and predicted text are processed with [whisper-normalizer](https://pypi.org/project/whisper-normalizer/) before scoring.
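
As an illustration, the normalization-plus-WER pipeline can be sketched as below; the `jiwer` package is assumed for the WER computation, and the example strings are placeholders.

```python
# Illustrative WER computation with whisper-normalizer (jiwer assumed available for WER).
from whisper_normalizer.english import EnglishTextNormalizer
from jiwer import wer

normalizer = EnglishTextNormalizer()

references = ["Mister Quilter is the apostle of the middle classes."]  # ground-truth transcripts
hypotheses = ["mr quilter is the apostle of the middle classes"]       # model outputs

ref_norm = [normalizer(text) for text in references]
hyp_norm = [normalizer(text) for text in hypotheses]

print(f"WER: {wer(ref_norm, hyp_norm):.2%}")
```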

WER on [HuggingFace OpenASR leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard):

| **Version** | **Model** | **RTFx** | **AMI** | **GigaSpeech** | **LS Clean** | **LS Other** | **Earnings22** | **SPGISpeech** | **Tedlium** | **Voxpopuli** |
|:---------:|:-----------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| 2.2.0 | canary-1b-flash | 928.19 | 13.08 | 9.88 | 1.48 | 2.87 | 12.77 | 1.95 | 3.09 | 5.64 |


WER on [MLS](https://huggingface.co/datasets/facebook/multilingual_librispeech) test set:

| **Version** | **Model** | **De** | **Es** | **Fr** |
|:---------:|:-----------:|:------:|:------:|:------:|
| 2.2.0 | canary-1b-flash | 4.36 | 2.69 | 4.47 |


More details on evaluation can be found on the [HuggingFace ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard).

### AST Performance

We evaluate AST performance with [BLEU score](https://lightning.ai/docs/torchmetrics/stable/text/sacre_bleu_score.html), and use native annotations with punctuation and capitalization in the datasets.
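
As an illustration, BLEU can be computed with torchmetrics' `SacreBLEUScore`; the strings below are placeholders, not actual model outputs.

```python
# Illustrative BLEU computation with torchmetrics' SacreBLEUScore.
from torchmetrics.text import SacreBLEUScore

preds = ["Das ist ein Beispielsatz."]      # model translations
target = [["Das ist ein Beispielsatz."]]   # one list of references per prediction

sacre_bleu = SacreBLEUScore()
score = 100 * sacre_bleu(preds, target).item()  # scale to the usual 0-100 BLEU range
print(f"BLEU: {score:.2f}")
```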

BLEU score on [FLEURS](https://huggingface.co/datasets/google/fleurs) test set:

| **Version** | **Model** | **En->De** | **En->Es** | **En->Fr** | **De->En** | **Es->En** | **Fr->En** |
|:-----------:|:---------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| 2.2.0 | canary-1b-flash | 32.27 | 22.6 | 41.22 | 35.5 | 23.32 | 33.42 |


BLEU score on [COVOST-v2](https://github.com/facebookresearch/covost) test set:

| **Version** | **Model** | **De->En** | **Es->En** | **Fr->En** |
|:-----------:|:---------:|:----------:|:----------:|:----------:|
| 2.2.0 | canary-1b-flash | 39.33 | 41.86 | 41.43 |

BLEU score on [mExpresso](https://huggingface.co/facebook/seamless-expressive#mexpresso-multilingual-expresso) test set:

| **Version** | **Model** | **En->De** | **En->Es** | **En->Fr** |
|:-----------:|:---------:|:----------:|:----------:|:----------:|
| 2.2.0 | canary-1b-flash | 22.91 | 35.69 | 27.85 |

### Timestamp Prediction
F1-score on [Librispeech Test sets](https://www.openslr.org/12) at a collar value of 200 ms:

| **Version** | **Model** | **test-clean** | **test-other** |
|:-----------:|:---------:|:----------:|:----------:|
| 2.2.0 | canary-1b-flash | 95.5 | 93.5 |
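
For context, one plausible way to compute such a collar-based F1 (assuming the reference and predicted word sequences are already aligned one-to-one) is sketched below; this is not the exact evaluation script used for the numbers above.

```python
# Rough sketch of a collar-based timestamp F1: a predicted word counts as correct when
# both of its boundaries fall within `collar` seconds of the reference word's boundaries.
# Assumes reference and predicted word sequences are aligned one-to-one.

def collar_f1(ref_words, hyp_words, collar=0.2):
    """ref_words/hyp_words: lists of (word, start_sec, end_sec) tuples."""
    matched = sum(
        1
        for (_, ref_start, ref_end), (_, hyp_start, hyp_end) in zip(ref_words, hyp_words)
        if abs(ref_start - hyp_start) <= collar and abs(ref_end - hyp_end) <= collar
    )
    precision = matched / max(len(hyp_words), 1)
    recall = matched / max(len(ref_words), 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)
```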

### Hallucination Robustness
Number of characters per minute on the [MUSAN](https://www.openslr.org/17) 48-hour eval set (lower is better):

| **Version** | **Model** | **# of characters per minute** |
|:-----------:|:---------:|:----------:|
| 2.2.0 | canary-1b-flash | 60.92 |

### Noise Robustness
WER on [Librispeech Test Clean](https://www.openslr.org/12) at different SNR (signal-to-noise ratio) levels of additive white noise:

| **Version** | **Model** | **SNR 10** | **SNR 5** | **SNR 0** | **SNR -5** |
|:-----------:|:---------:|:----------:|:----------:|:----------:|:----------:|
| 2.2.0 | canary-1b-flash | 2.34 | 3.69 | 8.84 | 29.71 |
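
For context, additive white noise at a target SNR can be synthesized by scaling the noise so that the speech-to-noise power ratio matches the target; the NumPy sketch below illustrates the mixing (it is not the evaluation script itself).

```python
# Sketch: mix white noise into a speech signal at a target SNR (in dB).
import numpy as np

def add_white_noise(speech: np.ndarray, snr_db: float) -> np.ndarray:
    noise = np.random.randn(len(speech))
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale the noise so that 10 * log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```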

## Model Fairness Evaluation

As outlined in the paper "Towards Measuring Fairness in AI: the Casual Conversations Dataset" [8], we assessed the canary-1b-flash model for fairness. The model was evaluated on the Casual Conversations v1 dataset, and the results are reported as follows:

### Gender Bias:

| Gender | Male | Female | N/A | Other |
| :--- | :--- | :--- | :--- | :--- |
| Num utterances | 19325 | 24532 | 926 | 33 |
| % WER | 14.66 | 12.44 | 17.17 | 27.56 |

### Age Bias:

| Age Group | (18-30) | (31-45) | (46-85) | (1-100) |
| :--- | :--- | :--- | :--- | :--- |
| Num utterances | 15956 | 14585 | 13349 | 43890 |
| % WER | 13.18 | 13.45 | 13.64 | 13.41 |

(Error rates for fairness evaluation are determined by normalizing both the reference and predicted text, similar to the methods used in the evaluations found at https://github.com/huggingface/open_asr_leaderboard.)

## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).