runtime error
Exit code: 1. Reason:
model-00004-of-00004.safetensors: 100%|██████████| 3.73G/3.73G [00:13<00:00, 284MB/s]
Downloading shards: 100%|██████████| 4/4 [00:48<00:00, 12.06s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 19, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_name,
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4097, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/user/.cache/huggingface/modules/transformers_modules/AIDC-AI/Ovis2-8B/c0730f752cf605d44788a08151cfede0caab714d/modeling_ovis.py", line 293, in __init__
    version.parse(importlib.metadata.version("flash_attn")) >= version.parse("2.6.3")), \
AssertionError: Using `flash_attention_2` requires having `flash_attn>=2.6.3` installed.
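The shards download fine; the failure happens when the Ovis2-8B remote modeling code asserts that `flash_attn>=2.6.3` is installed because `flash_attention_2` was requested. Below is a minimal sketch of the load call around app.py line 19, with two hedged ways to get past the assertion. The original kwargs are not visible in the traceback, so the dtype and `attn_implementation` values here are assumptions; `attn_implementation="sdpa"` is standard transformers, but whether the Ovis remote code honors it instead of forcing flash attention is not guaranteed.

```python
# Sketch only: reconstructed from the traceback, not the Space's actual app.py.
import torch
from transformers import AutoModelForCausalLM

model_name = "AIDC-AI/Ovis2-8B"

# Option A: keep flash_attention_2 and install a recent flash-attn in the Space,
# e.g. add a line to requirements.txt (needs a CUDA build environment):
#     flash-attn>=2.6.3
#
# Option B (assumption): ask transformers for PyTorch SDPA attention instead,
# which avoids the flash_attn import if the remote Ovis code respects the override.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,      # assumption: dtype not shown in the traceback
    attn_implementation="sdpa",      # assumption: bypasses the flash_attn check
    trust_remote_code=True,          # Ovis2 loads custom modeling_ovis.py
)
```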