Error when using ICL model
Hey dude, thanks for the interface and the quantized models, really cool to be able to run this on my 3060. Let me ask you a question (noobie here): I'm probably doing something wrong, but when I try to use the ICL model from your GitHub I get the error below. I'm using the ICL model on stage 1 and s2-1B-general-exl2 on stage 2, and I'm checking the "Use audio prompt" checkbox first and adding a 30-second MP3. Am I missing something? Thanks for your help, and thanks again for the interface and models.
```
Inference started. Outputs will be saved in /workspace/outputs...
Starting stage 1...
Traceback (most recent call last):
  File "/workspace/YuE-exllamav2-UI/src/yue/infer_stage1.py", line 496, in <module>
    main()
  File "/workspace/YuE-exllamav2-UI/src/yue/infer_stage1.py", line 455, in main
    pipeline = Stage1Pipeline_EXL2(
               ^^^^^^^^^^^^^^^^^^^^
  File "/workspace/YuE-exllamav2-UI/src/yue/infer_stage1.py", line 294, in __init__
    exl2_config = ExLlamaV2Config(model_path)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/pyenv/lib/python3.12/site-packages/exllamav2/config.py", line 186, in __init__
    self.prepare()
  File "/opt/conda/envs/pyenv/lib/python3.12/site-packages/exllamav2/config.py", line 465, in prepare
    check_keys(self.arch.lm, self.arch.lm_prefix)
  File "/opt/conda/envs/pyenv/lib/python3.12/site-packages/exllamav2/config.py", line 463, in check_keys
    raise ValueError(f" ## Could not find {prefix}.* in model")
ValueError:  ## Could not find model.embed_tokens.* in model
```
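The error means ExLlamaV2 opened the stage-1 model directory but found no tensors named `model.embed_tokens.*` in its `.safetensors` files, which usually points at the wrong model folder (e.g. not an EXL2 quant) being passed as the stage-1 path. One way to check what tensor names a `.safetensors` file actually contains, without loading any weights, is to read its JSON header. This is a hedged diagnostic sketch using only the standard library (the `safetensors_keys` helper and the dummy file it builds are illustrative, not part of the repo; in practice you would point it at the `.safetensors` file inside your stage-1 model directory):

```python
import json
import os
import struct
import tempfile

def safetensors_keys(path):
    # A .safetensors file starts with an 8-byte little-endian header length,
    # followed by a JSON header mapping tensor names to dtype/shape/offsets.
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return [k for k in header if k != "__metadata__"]

# Build a minimal dummy file just to demonstrate the check; replace `path`
# with the real file, e.g. <stage1_model_dir>/model.safetensors.
dummy_header = {
    "model.embed_tokens.weight": {
        "dtype": "F32", "shape": [2, 2], "data_offsets": [0, 16],
    },
}
blob = json.dumps(dummy_header).encode()
path = os.path.join(tempfile.mkdtemp(), "model.safetensors")
with open(path, "wb") as f:
    f.write(struct.pack("<Q", len(blob)) + blob + b"\x00" * 16)

keys = safetensors_keys(path)
has_embed = any(k.startswith("model.embed_tokens") for k in keys)
print(keys, has_embed)
```

If `has_embed` comes back `False` for the directory you configured as the stage-1 model, ExLlamaV2 will raise exactly the `Could not find model.embed_tokens.* in model` error above, so the fix is to point stage 1 at the correct EXL2-quantized model folder.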