Python API
First of all, thanks for the model and all the work it implies. The model is awesome and performs very well.
Is there a Python API available for using this model locally? I want to use the model and its functions as part of a bigger Python project, and I would like to handle it as just another library of the project instead of making calls through the CLI or the WebUI.
Another reason I would like this "pythonic" API is that I will be making asynchronous calls to the model over a certain period of time, and calling it from the CLI means loading the model onto the GPU every time, whereas from Python I can load the model once and keep it in memory until every process exits.
Thank you in advance, and congratulations on all the work!
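To make the load-once pattern I mean concrete, here is a minimal sketch. `load_model` and `synthesize` are placeholders, not real fish-speech APIs; the point is only that the expensive load happens once, and blocking inference runs in a thread pool so asynchronous calls can overlap:

```python
import asyncio

def load_model():
    # Placeholder for the expensive GPU model load, paid once at startup.
    return {"name": "tts-model"}

def synthesize(model, text):
    # Placeholder for a blocking inference call.
    return f"audio<{text}>"

async def main():
    model = load_model()  # loaded once, reused for every request
    loop = asyncio.get_running_loop()
    # Run the blocking calls in the default thread pool so they overlap.
    tasks = [loop.run_in_executor(None, synthesize, model, t)
             for t in ["hello", "world"]]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
print(results)  # ['audio<hello>', 'audio<world>']
```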
Can you give more details about it?
You can find more information about the project in our official GitHub repo: https://github.com/fishaudio/fish-speech
Hello!
Like other models on HF, it would be simpler to use via the transformers Python API.
Yes, I used your model successfully, but got multiple errors while installing all the dependencies. I didn't use the GitHub repo, though; I just cloned the HF Space with the model (it was simpler).
Is there any chance of porting it to the transformers API? Or how could I do it myself?
Sorry for the wrong link; I've updated it to the official repo link.
Due to the tokenizer, we had to abandon the transformers API. If you want to use it, you may need to convert the tiktoken encoding into a tokenizer.json.
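For context, the tricky part of that conversion is that a tiktoken encoding only stores a rank table (token bytes → rank), while a Hugging Face tokenizer.json needs an explicit vocab plus an ordered merge list. A known trick is to re-run BPE on each multi-byte token, allowing only merges of lower rank, to recover which pair produced it. A self-contained sketch with a toy rank table standing in for tiktoken's internal `_mergeable_ranks`:

```python
def bpe(mergeable_ranks, token, max_rank=None):
    """Split `token` (bytes) the way BPE would, using only merges
    whose rank is below `max_rank`."""
    parts = [bytes([b]) for b in token]
    while True:
        min_idx = None
        min_rank = None
        for i, pair in enumerate(zip(parts[:-1], parts[1:])):
            rank = mergeable_ranks.get(pair[0] + pair[1])
            if rank is not None and (min_rank is None or rank < min_rank):
                min_idx, min_rank = i, rank
        if min_rank is None or (max_rank is not None and min_rank >= max_rank):
            break
        parts = parts[:min_idx] + [parts[min_idx] + parts[min_idx + 1]] + parts[min_idx + 2:]
    return parts

def recover_merges(mergeable_ranks):
    """Return the ordered (left, right) merge pairs a BPE tokenizer needs."""
    merges = []
    for token, rank in sorted(mergeable_ranks.items(), key=lambda kv: kv[1]):
        if len(token) == 1:
            continue  # base byte, not a merge
        left, right = bpe(mergeable_ranks, token, max_rank=rank)
        merges.append((left, right))
    return merges

# Toy rank table standing in for a real tiktoken encoding's rank table:
toy_ranks = {b"a": 0, b"b": 1, b"c": 2, b"ab": 3, b"abc": 4}
merges = recover_merges(toy_ranks)
print(merges)  # [(b'a', b'b'), (b'ab', b'c')]
```

With the vocab and recovered merges in hand, you can build a `tokenizers.models.BPE` tokenizer and call `Tokenizer.save("tokenizer.json")`. Recent transformers versions also ship a `TikTokenConverter` in `transformers.convert_slow_tokenizer` that automates this; check whether your installed version has it.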
Sorry, I'm a newbie at TTS. How can I do that?
Hello again!
Can you also explain why voice generation differs between the demo (https://huggingface.co/spaces/fishaudio/fish-speech-1) and the official repo (https://github.com/fishaudio/fish-speech)?
I don't know why the generated sound quality is so different... The voice from the official repo sounds more "robotic" than in the demo.
There is no difference between them; maybe you need to use a reference audio.