runtime error
Exit code: 1. Reason:
4888 (4.8K) [text/plain]
Saving to: 'script.py'

     0K ....                                                   100%  154M=0s

Last-modified header missing -- time-stamps turned off.
2024-10-30 17:19:56 (154 MB/s) - 'script.py' saved [4888/4888]

/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1617: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be deprecated in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Downloading shards: 100%|██████████| 2/2 [01:09<00:00, 34.75s/it]
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 37, in <module>
    model = Blip2ForConditionalGeneration.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4008, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4496, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 973, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/usr/local/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 390, in set_module_tensor_to_device
    and torch.device(device).type == "cuda"
RuntimeError: Cannot access accelerator device when none is available.
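The traceback ends in `accelerate` trying to place model weights on a CUDA device inside a CPU-only container, which suggests `from_pretrained` was called with CUDA-only placement options (e.g. `device_map` pointing at a GPU). A minimal sketch of a device-aware workaround, assuming the app loads `Salesforce/blip2-opt-2.7b` (the exact checkpoint and keyword choices here are illustrative, not taken from the log):

```python
import torch

def pick_load_kwargs():
    """Return hypothetical from_pretrained kwargs that avoid requesting a
    CUDA device when no accelerator is available (the RuntimeError above)."""
    if torch.cuda.is_available():
        # GPU present: let accelerate shard the model across devices and
        # use half precision to reduce memory use.
        return {"device_map": "auto", "torch_dtype": torch.float16}
    # CPU-only container: pin every module to CPU explicitly and keep
    # full precision, since float16 matmuls are poorly supported on CPU.
    return {"device_map": {"": "cpu"}, "torch_dtype": torch.float32}
```

The app's `from_pretrained` call at `app.py` line 37 could then pass these through, e.g. `Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", **pick_load_kwargs())`, so the same code runs on both GPU and CPU hardware tiers.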