NVIDIA FastConformer-Hybrid Large (ru) model converted to ONNX format for onnx-asr.

## Install onnx-asr

```sh
pip install onnx-asr[cpu,hub]
```

## Load the FastConformer Ru model with the CTC decoder and recognize a wav file

```python
import onnx_asr

model = onnx_asr.load_model("nemo-fastconformer-ru-ctc")
print(model.recognize("test.wav"))
```
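The CTC decoder emits one token id per acoustic frame, and the final text is obtained by collapsing consecutive repeats and dropping blanks. A minimal sketch of that collapse rule, using a tiny hypothetical vocabulary (the real model uses a BPE tokenizer with `<blk>` as the last id, as in the export code below):

```python
# Hypothetical toy vocabulary; the real one comes from the model's tokenizer.
vocab = ["а", "б", "в", "<blk>"]
blank_id = len(vocab) - 1

def ctc_greedy_decode(frame_ids):
    """Collapse consecutive repeated ids, then remove blank tokens."""
    out = []
    prev = None
    for i in frame_ids:
        if i != prev and i != blank_id:
            out.append(vocab[i])
        prev = i
    return "".join(out)

print(ctc_greedy_decode([0, 0, 3, 1, 1, 3, 3, 2]))  # -> "абв"
```

This is only an illustration of the decoding rule; onnx-asr performs the actual decoding internally when you call `recognize`.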

## Load the FastConformer Ru model with the RNN-T decoder and recognize a wav file

```python
import onnx_asr

model = onnx_asr.load_model("nemo-fastconformer-ru-rnnt")
print(model.recognize("test.wav"))
```

## Code for model export

```python
import nemo.collections.asr as nemo_asr
from pathlib import Path

model_name = "stt_ru_fastconformer_hybrid_large_pc"
onnx_dir = Path("nemo-onnx")
onnx_dir.mkdir(exist_ok=True)

model = nemo_asr.models.ASRModel.from_pretrained("nvidia/" + model_name)

# To export Hybrid models with the CTC decoder, uncomment:
# model.set_export_config({"decoder_type": "ctc"})

model.export(str(onnx_dir / "model.onnx"))

# Write the vocabulary with the blank token appended as the last id.
with (onnx_dir / "vocab.txt").open("wt") as f:
    for i, token in enumerate([*model.tokenizer.vocab, "<blk>"]):
        f.write(f"{token} {i}\n")
```
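The loop above writes one `token index` pair per line. A sketch of reading that format back into an id-ordered token list, using a small hypothetical in-memory sample instead of the real tokenizer vocabulary:

```python
# Hypothetical sample in the same "token index" per-line format as vocab.txt.
sample = "а 0\nб 1\n<blk> 2\n"

def parse_vocab(text):
    """Map each line's trailing integer id back to its token."""
    tokens = {}
    for line in text.splitlines():
        token, idx = line.rsplit(" ", 1)  # split on the last space before the id
        tokens[int(idx)] = token
    return [tokens[i] for i in range(len(tokens))]

print(parse_vocab(sample))  # -> ['а', 'б', '<blk>']
```

Splitting on the last space keeps the parse robust even if a tokenizer emits unusual token strings, as long as the id is always the final field.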