Runtime error
Exit code: 1. Reason:

model-00003-of-00003.safetensors: 100%|██████████| 4.03G/4.03G [00:10<00:00, 382MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 14, in <module>
    model = AutoModelForCausalLM.from_pretrained("Navid-AI/Yehia-7B-preview", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", token=os.getenv("HF_TOKEN")).to(device)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 600, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 311, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4758, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2315, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2466, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available on CPU. Please make sure torch can access a CUDA device.
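The weights download completes, so the failure is in model loading: app.py requests attn_implementation="flash_attention_2", but the container has no CUDA device, and Flash Attention 2 only runs on GPU. A minimal sketch of a fix, assuming the app should still start on CPU-only hardware, is to pick the attention implementation based on whether CUDA is actually available ("sdpa", PyTorch's scaled-dot-product attention, is a standard CPU-compatible choice in transformers):

    import os

    import torch
    from transformers import AutoModelForCausalLM

    # Flash Attention 2 requires a CUDA device, so fall back to "sdpa"
    # (PyTorch scaled-dot-product attention) when running on CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    attn_impl = "flash_attention_2" if device == "cuda" else "sdpa"

    model = AutoModelForCausalLM.from_pretrained(
        "Navid-AI/Yehia-7B-preview",
        torch_dtype=torch.bfloat16,  # bfloat16 works on CPU but is slow
        attn_implementation=attn_impl,
        token=os.getenv("HF_TOKEN"),
    ).to(device)

Alternatively, keep the original code unchanged and assign GPU hardware to the container, so that torch can access a CUDA device as the error message asks.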