Runtime error
Exit code: 1. Reason: None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
tokenizer_config.json: 100%|██████████| 99.9k/99.9k [00:00<00:00, 25.9MB/s]
tokenizer.json: 100%|██████████| 5.06M/5.06M [00:00<00:00, 23.2MB/s]
special_tokens_map.json: 100%|██████████| 7.69k/7.69k [00:00<00:00, 14.0MB/s]
chat_template.jinja: 100%|██████████| 209/209 [00:00<00:00, 1.39MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 10, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1885, in __getattribute__
    requires_backends(cls, cls._backends)
  File "/usr/local/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1871, in requires_backends
    raise ImportError("".join(failed))
ImportError: AutoModelForCausalLM requires the PyTorch library but it was not found in your environment. Checkout the instructions on the installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment. Please note that you may need to restart your runtime after installation.
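The traceback indicates that the transformers library could download and use the tokenizer files, but failed as soon as app.py tried to instantiate a model, because no deep-learning backend (PyTorch, TensorFlow, or Flax) is installed in the container. Below is a minimal sketch of the failing pattern and the usual fix, assuming the app runs as a Hugging Face Space whose dependencies are installed from requirements.txt; the model id shown is a placeholder, not taken from the original log.

```python
# Sketch of the loading code that produces the ImportError above (assumption:
# the real app.py does something equivalent around line 10).
#
# Fix: make sure a backend is installed in the container before this runs,
# e.g. by listing both packages in the Space's requirements.txt:
#
#   transformers
#   torch
#
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "gpt2"  # placeholder model id, not the one used by the original app

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)      # works without a backend
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)   # raises ImportError if PyTorch is missing
```

After adding torch to requirements.txt, the Space has to be rebuilt (or the runtime restarted) so the new dependency is installed before app.py is imported, which matches the note at the end of the ImportError message.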