runtime error
Exit code: 1. Reason:

A new version of the following files was downloaded from https://huggingface.co/deepseek-ai/DeepSeek-R1:
- configuration_deepseek.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.

A new version of the following files was downloaded from https://huggingface.co/deepseek-ai/DeepSeek-R1:
- modeling_deepseek.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.

Traceback (most recent call last):
  File "/home/user/app/app.py", line 221, in <module>
    main()
  File "/home/user/app/app.py", line 216, in main
    chat_app = GradioRAGChat()
  File "/home/user/app/app.py", line 116, in __init__
    self.rag = RAGPipeline()
  File "/home/user/app/app.py", line 25, in __init__
    self.load_llm()
  File "/home/user/app/app.py", line 29, in load_llm
    self.llm_model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3605, in from_pretrained
    config.quantization_config = AutoHfQuantizer.merge_quantization_configs(
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/auto.py", line 181, in merge_quantization_configs
    quantization_config = AutoQuantizationConfig.from_dict(quantization_config)
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/auto.py", line 105, in from_dict
    raise ValueError(
ValueError: Unknown quantization type, got fp8 - supported types are: ['awq', 'bitsandbytes_4bit', 'bitsandbytes_8bit', 'gptq', 'aqlm', 'quanto', 'eetq', 'higgs', 'hqq', 'compressed-tensors', 'fbgemm_fp8', 'torchao', 'bitnet', 'vptq']
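A minimal sketch of the check that raises this ValueError: the model's config.json declares a `quant_method` of `fp8`, which is missing from the list of quantization types the installed transformers version recognizes (the `SUPPORTED` list below is copied verbatim from the error message; the lookup logic is an assumption mirroring `AutoQuantizationConfig.from_dict`, not the library's actual code).

```python
# Quantization types the installed transformers build recognizes,
# copied from the ValueError above. Note "fbgemm_fp8" is present
# but plain "fp8" is not.
SUPPORTED = [
    "awq", "bitsandbytes_4bit", "bitsandbytes_8bit", "gptq", "aqlm",
    "quanto", "eetq", "higgs", "hqq", "compressed-tensors",
    "fbgemm_fp8", "torchao", "bitnet", "vptq",
]

# What DeepSeek-R1's config.json declares as its quantization method.
quant_method = "fp8"

# Simplified stand-in for the from_dict() dispatch that failed:
# an unrecognized quant_method cannot be mapped to a quantizer class.
if quant_method not in SUPPORTED:
    print(f"Unknown quantization type, got {quant_method}")
```

The practical upshot is that this transformers release predates native fp8 support, so either a newer transformers version is needed or the model must be loaded without its fp8 quantization config. Separately, the download warnings can be silenced by pinning `revision=` in `from_pretrained`, as the log itself suggests.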