PAI-GEN / Runtime error

Exit code: 1. Reason:
█████████| 3.87G/3.87G [00:08<00:00, 477MB/s]
(…)ion_pytorch_model.safetensors.index.json: 0%| | 0.00/121k [00:00<?, ?B/s]
(…)ion_pytorch_model.safetensors.index.json: 100%|██████████| 121k/121k [00:00<00:00, 170MB/s]
config.json: 0%| | 0.00/820 [00:00<?, ?B/s]
config.json: 100%|██████████| 820/820 [00:00<00:00, 10.6MB/s]
vae/diffusion_pytorch_model.safetensors: 0%| | 0.00/168M [00:00<?, ?B/s]
vae/diffusion_pytorch_model.safetensors: 100%|██████████| 168M/168M [00:00<00:00, 169MB/s]
Loading pipeline components...: 0%| | 0/7 [00:00<?, ?it/s]
`torch_dtype` is deprecated! Use `dtype` instead!
Loading pipeline components...: 0%| | 0/7 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 34, in <module>
    selected_pipe = setup_model(default_t2i_model, torch_dtype, device)
  File "/home/user/app/utils.py", line 57, in setup_model
    pipe = FLUXPipelineWithIntermediateOutputs.from_pretrained(t2i_model_repo, torch_dtype=torch_dtype).to(device)
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1021, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 857, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 277, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4974, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
TypeError: CLIPTextModel.__init__() got an unexpected keyword argument 'offload_state_dict'
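
The model downloads all completed, so the crash happens while diffusers assembles the pipeline: it forwards the loading kwarg offload_state_dict down to CLIPTextModel.__init__(), which the installed transformers build no longer accepts. That pattern usually points to a version mismatch between the diffusers and transformers packages in the Space rather than a problem with the model files, so checking and aligning the pinned versions is the usual first step. Below is a minimal sketch of the loading step the traceback points at, using the stock FluxPipeline in place of the Space's custom FLUXPipelineWithIntermediateOutputs subclass; the repo id and dtype are illustrative assumptions, not values taken from the log.

import torch
import diffusers
import transformers
from diffusers import FluxPipeline  # stand-in for the Space's FLUXPipelineWithIntermediateOutputs subclass

# The TypeError above usually means diffusers and transformers are out of step
# with each other, so first check which versions the Space actually installed.
print("diffusers", diffusers.__version__, "| transformers", transformers.__version__)

# Illustrative values; the Space's actual repo id and dtype are not shown in the log.
t2i_model_repo = "black-forest-labs/FLUX.1-schnell"
device = "cuda" if torch.cuda.is_available() else "cpu"

# Same call shape as utils.py line 57. The log also warns that `torch_dtype` is
# deprecated in this diffusers version in favour of `dtype`, but that rename is
# unrelated to the offload_state_dict TypeError.
pipe = FluxPipeline.from_pretrained(t2i_model_repo, torch_dtype=torch.bfloat16).to(device)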
