Suggestions for running the optimized GGUF locally

#3
by thechristyjo - opened

Hi, I am new to TTS models and I want to run these quantized models locally. I am a beginner. Can you suggest some ways to run them locally?

You could try `ggc s2` first; the model is downloaded automatically on the first launch, and no internet connection is needed from the second launch onward.

Can I run this on Windows?

raise RuntimeError(f"Error loading model from Hugging Face Hub ({model_name})") from e
RuntimeError: Error loading model from Hugging Face Hub (callgg/dia-f16)

I get this error when trying to run it locally. Can you suggest a workaround? Also, how can I load the quantized models in this model directory, rather than just the default model from Nari Labs?
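As a general pattern (not the actual Dia or gguf-connector API, whose loader interface isn't shown in this thread), a script can prefer a local model directory and only fall back to the Hub repo id when nothing is on disk. The function name and directory layout below are assumptions for illustration:

```python
from pathlib import Path

def resolve_model_source(local_dir: str, hub_id: str) -> str:
    """Return the local model directory if it holds .gguf weights,
    otherwise fall back to the Hugging Face Hub repo id."""
    path = Path(local_dir)
    if path.is_dir() and any(path.glob("*.gguf")):
        return str(path)  # load from disk; no network needed
    return hub_id         # let the library download from the Hub

# e.g. resolve_model_source("./models/dia-gguf", "callgg/dia-f16")
```

Checking the directory yourself before handing it to the loader also makes the "Error loading model from Hugging Face Hub" case easier to diagnose: you know immediately whether the failure is a missing local file or a network/Hub problem.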

Not enough information; you might be missing dependencies (check the last line of the full log to see which dependency is missing). The engine `diao` is needed for this to work; install it manually first with `pip install diao`.
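The "check the last line of the log" advice can be automated: a `ModuleNotFoundError` carries the name of the module that failed to import, which is exactly what the final traceback line reports. A minimal stdlib sketch (the module names used in testing are placeholders, not real packages):

```python
def find_missing_dependency(module_name: str):
    """Try to import a module; return its name if it is missing, else None.
    Mirrors what the last line of an import traceback tells you."""
    try:
        __import__(module_name)
    except ModuleNotFoundError as exc:
        return exc.name
    return None
```

Running this over each dependency a project lists tells you precisely which `pip install` is still needed.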


Tested on Windows; it works.

Can you please help with how to run the .gguf model? `ggc s2` is running the safetensors model.
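While waiting for an answer: before pointing any loader at a `.gguf` file, you can at least verify that the file really is GGUF. Per the GGUF format spec, every file starts with the 4-byte magic `GGUF`. This is only a sanity check, not a way to run the model:

```python
from pathlib import Path

GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file, per the format spec

def is_gguf_file(path: str) -> bool:
    """Return True if the file exists and starts with the GGUF magic bytes."""
    p = Path(path)
    if not p.is_file():
        return False
    with p.open("rb") as handle:
        return handle.read(4) == GGUF_MAGIC
```

A truncated or mislabeled download is a common cause of loader errors, and this check catches it before the model library raises a less specific exception.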
