runtime error

Exit code: 1. Reason:

█▏ | 297M/936M [00:02<00:04, 138MB/s]
model.pt: 100%|█████████▉| 936M/936M [00:03<00:00, 262MB/s]
requirements.txt:   0%| | 0.00/96.0 [00:00<?, ?B/s]
requirements.txt: 100%|██████████| 96.0/96.0 [00:00<00:00, 722kB/s]
Detect model requirements, begin to install it: /home/user/.cache/huggingface/hub/models--FunAudioLLM--SenseVoiceSmall/snapshots/3eb3b4eeffc2f2dde6051b853983753db33e35c3/requirements.txt
install model requirements successfully
Loading remote code failed: model, No module named 'model'
INFO:vita_audio.tokenizer_sensevoice_glm4voice:self.device='cuda:0'
Loading SenseVoiceSmall Done
INFO:vita_audio.tokenizer_sensevoice_glm4voice:self.device='cuda:0'
Loading GLM4VoiceTokenizer
Traceback (most recent call last):
  File "/home/user/app/web_demo.py", line 397, in <module>
    main()
  File "/home/user/app/web_demo.py", line 350, in main
    audio_tokenizer.load_model()
  File "/home/user/app/vita_audio/tokenizer_sensevoice_glm4voice.py", line 130, in load_model
    WhisperVQEncoder.from_pretrained(self.model_name_or_path).eval().to(self.device)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3110, in to
    return super().to(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1340, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 900, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 927, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1326, in convert
    return t.to(
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
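The crash happens when load_model() moves WhisperVQEncoder to self.device, which the tokenizer logs as 'cuda:0', on a machine where torch cannot initialize CUDA because no NVIDIA driver (and presumably no GPU) is present. Below is a minimal sketch of a device guard, assuming the tokenizer currently hard-codes the CUDA device; the variable names are illustrative, not the project's actual code.

    import torch

    # Hypothetical guard: use CUDA only when a driver and GPU are actually visible,
    # otherwise fall back to CPU so the .to(device) call does not trigger _cuda_init().
    device = "cuda:0" if torch.cuda.is_available() else "cpu"

    # Inside load_model(), the encoder would then be moved with the guarded device, e.g.:
    # WhisperVQEncoder.from_pretrained(model_name_or_path).eval().to(device)

Alternatively, if the CUDA path is required, the Space needs GPU hardware assigned so that a driver is available inside the container.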
