runtime error

Exit code: 1. Reason:

```
transformer/diffusion_pytorch_model.safe(…):  90%|████████▉ | 2.89G/3.22G [00:20<00:01, 208MB/s]
transformer/diffusion_pytorch_model.safe(…):  98%|█████████▊| 3.15G/3.22G [00:22<00:00, 211MB/s]
transformer/diffusion_pytorch_model.safe(…): 100%|██████████| 3.22G/3.22G [00:22<00:00, 144MB/s]
config.json:   0%|          | 0.00/1.28k [00:00<?, ?B/s]
config.json: 100%|██████████| 1.28k/1.28k [00:00<00:00, 5.84MB/s]
vae/diffusion_pytorch_model.safetensors:   0%|          | 0.00/1.25G [00:00<?, ?B/s]
vae/diffusion_pytorch_model.safetensors:  11%|█         | 134M/1.25G [00:04<00:41, 26.9MB/s]
vae/diffusion_pytorch_model.safetensors:  86%|████████▌ | 1.07G/1.25G [00:06<00:00, 225MB/s]
vae/diffusion_pytorch_model.safetensors: 100%|██████████| 1.25G/1.25G [00:06<00:00, 191MB/s]
Loading pipeline components...:   0%|          | 0/5 [00:00<?, ?it/s]
Loading pipeline components...:  20%|██        | 1/5 [00:02<00:09, 2.44s/it]
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 21.17it/s]
Loading pipeline components...: 100%|██████████| 5/5 [00:18<00:00, 3.69s/it]
Loading pipeline components...: 100%|██████████| 5/5 [00:18<00:00, 3.61s/it]
/usr/local/lib/python3.10/site-packages/gradio/helpers.py:148: UserWarning: In future versions of Gradio, the `cache_examples` parameter will no longer accept a value of 'lazy'. To enable lazy caching in Gradio, you should set `cache_examples=True`, and `cache_mode='lazy'` instead.
  warnings.warn(
Will cache examples in '/home/user/app/.gradio/cached_examples/21' directory at first use.
Traceback (most recent call last):
  File "/home/user/app/app.py", line 149, in <module>
    demo.launch(mcp_server=True)
TypeError: Blocks.launch() got an unexpected keyword argument 'mcp_server'
```
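The fatal part is the last line of the traceback: the Gradio version installed in the container does not recognize the `mcp_server` keyword on `Blocks.launch()`, which only newer Gradio releases accept for exposing the app as an MCP server. Below is a minimal sketch of a defensive workaround, assuming `demo` is the `gr.Blocks` app built in `app.py`; the real UI and model-loading code are omitted and the `gr.Markdown` placeholder is hypothetical. It passes `mcp_server=True` only when the installed Gradio understands it. Pinning a Gradio release with MCP support in `requirements.txt` would address the error as well.

```python
# Minimal sketch, not the Space's actual app.py: the real layout, model
# downloads, and example caching are replaced by a hypothetical placeholder.
import inspect

import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("placeholder UI")  # hypothetical stand-in for the real app

launch_kwargs = {}
# Blocks.launch() only accepts mcp_server in newer Gradio releases; on older
# installs passing it raises exactly the TypeError shown in the log above.
if "mcp_server" in inspect.signature(gr.Blocks.launch).parameters:
    launch_kwargs["mcp_server"] = True

demo.launch(**launch_kwargs)
```

The earlier `UserWarning` is separate and non-fatal; per its own message, replacing `cache_examples='lazy'` with `cache_examples=True` plus `cache_mode='lazy'` silences it on newer Gradio versions.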
