Can it be used in a real-time voice-chat scenario?

#2
by satya7 - opened

The current HF CPU deployment Space takes 4-6 seconds to infer a 10-second audio output. Can you suggest what approaches are possible to reduce that latency? Thanks for the nice work.
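For context, those numbers imply a real-time factor (RTF) below 1, i.e. the model already generates audio faster than it plays back; the problem for live voice chat is the multi-second delay before audio starts. A quick sketch of the arithmetic (the function name and figures are taken from the numbers above, not from any benchmark):

```python
def real_time_factor(inference_seconds: float, audio_seconds: float) -> float:
    """RTF = time spent generating / duration of audio generated.
    RTF < 1 means generation is faster than real time."""
    return inference_seconds / audio_seconds

# Worst case reported in this thread: 6 s of inference for 10 s of audio.
rtf = real_time_factor(6.0, 10.0)
print(rtf)  # 0.6 -> faster than real time overall, but several seconds of
            # up-front latency is still too long for live voice chat unless
            # synthesis is streamed in chunks.
```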

Ringg AI org

We will be releasing a smaller model which can be used for this. @utkarshshukla2912 can add more.

Ringg AI org

Hey @satya7 , we will be providing a distilled version of the model for real-time inference. It will be a bit less expressive, but it can run on CPU.

@satya7 we have updated to a faster model; please test it out.

Where is the model?

Hey @rahul7star , the distill window provides inference with the faster model.

I'm not asking about inference; if the model itself is out, let us know.

https://huggingface.co/RinggAI/Ringg-Squirrel-Free-API @rahul7star also released a free API for usage. we will release the base model in 2 weeks.

@utkarshshukla2912 Any updates on open-sourcing the TTS model?
