ONNX

A fork of voicekit-team/T-one with ONNX and CUDA support.

It addresses extremely slow model inference on some devices and adds the ability to run inference directly on the GPU.
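The GPU-with-CPU-fallback behavior described above typically comes down to ordering ONNX Runtime execution providers. A minimal sketch of that selection logic in plain Python (the provider names follow ONNX Runtime conventions; the `available` argument is a stand-in for what `onnxruntime.get_available_providers()` would return, and `select_providers` is a hypothetical helper, not part of this package):

```python
def select_providers(device_id, available):
    """Prefer CUDA on the requested device; fall back to CPU if unavailable."""
    providers = []
    if device_id is not None and "CUDAExecutionProvider" in available:
        # pass the device id as provider options, as ONNX Runtime expects
        providers.append(("CUDAExecutionProvider", {"device_id": device_id}))
    providers.append("CPUExecutionProvider")  # always keep CPU as a fallback
    return providers

# GPU present: CUDA first, CPU as fallback
print(select_providers(0, ["CUDAExecutionProvider", "CPUExecutionProvider"]))
# no GPU found: CPU only
print(select_providers(0, ["CPUExecutionProvider"]))
```

The provider list would then be passed to an `onnxruntime.InferenceSession`, which tries each provider in order.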

```
pip install git+https://github.com/NikiPshg/T-one-cuda-onnx.git
```

Usage example

```python
from tone import StreamingCTCPipeline, read_audio, read_example_audio

audio = read_example_audio()  # or read_audio("your_audio.flac")

# device_id selects the GPU; if no graphics card is found, the CPU is used
pipeline = StreamingCTCPipeline.from_hugging_face(device_id=0)
print(pipeline.forward_offline(audio))  # offline recognition using ONNX CUDA
```
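For streaming use, audio is normally fed to the pipeline in fixed-size chunks rather than as one array. A minimal sketch of such chunking in plain Python (the 8 kHz sample rate and 300 ms chunk size are illustrative assumptions, not values taken from this model card, and `chunk_audio` is a hypothetical helper):

```python
def chunk_audio(samples, chunk_size):
    """Split a sample sequence into fixed-size chunks, zero-padding the last."""
    chunks = []
    for start in range(0, len(samples), chunk_size):
        chunk = list(samples[start:start + chunk_size])
        if len(chunk) < chunk_size:
            chunk += [0] * (chunk_size - len(chunk))  # pad the final chunk
        chunks.append(chunk)
    return chunks

# e.g. 2.5 s of hypothetical 8 kHz audio split into 300 ms (2400-sample) chunks
audio = [0] * 20000
chunks = chunk_audio(audio, 2400)
```

Each chunk would then be passed to the pipeline's streaming entry point in sequence, with the pipeline carrying state between calls.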