Exception in worker VllmWorkerProcess while processing method load_model
Running the model through vLLM with the following command:
vllm serve /home/user01/llm-models/llama-vision-46b --tensor-parallel-size 8 --trust-remote-code --dtype auto --gpu-memory-utilization 0.95 --swap-space 0 --enforce-eager
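For reference, the same configuration can be expressed through vLLM's offline Python API. The sketch below mirrors the CLI flags one-to-one (all constructor arguments are standard LLM/EngineArgs parameters); since it builds the identical engine config, it presumably hits the same AssertionError during load_model:

# Minimal repro sketch via the offline API; mirrors the CLI flags above.
from vllm import LLM

llm = LLM(
    model="/home/user01/llm-models/llama-vision-46b",
    tensor_parallel_size=8,
    trust_remote_code=True,
    dtype="auto",
    gpu_memory_utilization=0.95,
    swap_space=0,
    enforce_eager=True,
)

Full log from the vllm serve invocation: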
INFO 02-24 05:12:52 __init__.py:207] Automatically detected platform cuda.
INFO 02-24 05:12:52 api_server.py:912] vLLM API server version 0.7.3
INFO 02-24 05:12:52 api_server.py:913] args: Namespace(subparser='serve', model_tag='/home/user01/llm-models/llama-vision-46b', config='', host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, enable_reasoning=False, reasoning_parser=None, tool_call_parser=None, tool_parser_plugin='', model='/home/user01/llm-models/llama-vision-46b', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=True, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=None, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=8, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=0.0, cpu_offload_gb=0, gpu_memory_utilization=0.95, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=True, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, dispatch_function=<function ServeSubcommand.cmd at 0x78006f8f6200>)
INFO 02-24 05:12:52 api_server.py:209] Started engine process with PID 2485
INFO 02-24 05:12:58 __init__.py:207] Automatically detected platform cuda.
INFO 02-24 05:13:00 config.py:549] This model supports multiple tasks: {'embed', 'score', 'generate', 'reward', 'classify'}. Defaulting to 'generate'.
WARNING 02-24 05:13:00 config.py:628] bitsandbytes quantization is not fully optimized yet. The speed can be slower than non-quantized models.
INFO 02-24 05:13:00 config.py:1382] Defaulting to use mp for distributed inference
WARNING 02-24 05:13:00 arg_utils.py:1197] The model has a long context length (131072). This may cause OOM errors during the initial memory profiling phase, or result in low performance due to small KV cache space. Consider setting --max-model-len to a smaller value.
WARNING 02-24 05:13:00 cuda.py:95] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
WARNING 02-24 05:13:00 config.py:685] Async output processing is not supported on the current platform type cuda.
INFO 02-24 05:13:05 config.py:549] This model supports multiple tasks: {'embed', 'classify', 'score', 'generate', 'reward'}. Defaulting to 'generate'.
WARNING 02-24 05:13:06 config.py:628] bitsandbytes quantization is not fully optimized yet. The speed can be slower than non-quantized models.
INFO 02-24 05:13:06 config.py:1382] Defaulting to use mp for distributed inference
WARNING 02-24 05:13:06 arg_utils.py:1197] The model has a long context length (131072). This may cause OOM errors during the initial memory profiling phase, or result in low performance due to small KV cache space. Consider setting --max-model-len to a smaller value.
WARNING 02-24 05:13:06 cuda.py:95] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
WARNING 02-24 05:13:06 config.py:685] Async output processing is not supported on the current platform type cuda.
INFO 02-24 05:13:06 llm_engine.py:234] Initializing a V0 LLM engine (v0.7.3) with config: model='/home/user01/llm-models/llama-vision-46b', speculative_config=None, tokenizer='/home/user01/llm-models/llama-vision-46b', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=8, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=bitsandbytes, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=/home/user01/llm-models/llama-vision-46b, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[],"max_capture_size":0}, use_cached_outputs=True,
WARNING 02-24 05:13:07 multiproc_worker_utils.py:300] Reducing Torch parallelism from 4 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
INFO 02-24 05:13:07 custom_cache_manager.py:19] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager
INFO 02-24 05:13:07 cuda.py:229] Using Flash Attention backend.
INFO 02-24 05:13:17 __init__.py:207] Automatically detected platform cuda.
(VllmWorkerProcess pid=2585) INFO 02-24 05:13:17 multiproc_worker_utils.py:229] Worker ready; awaiting tasks
INFO 02-24 05:13:18 __init__.py:207] Automatically detected platform cuda.
(VllmWorkerProcess pid=2591) INFO 02-24 05:13:18 multiproc_worker_utils.py:229] Worker ready; awaiting tasks
INFO 02-24 05:13:18 __init__.py:207] Automatically detected platform cuda.
INFO 02-24 05:13:18 __init__.py:207] Automatically detected platform cuda.
(VllmWorkerProcess pid=2589) INFO 02-24 05:13:18 multiproc_worker_utils.py:229] Worker ready; awaiting tasks
(VllmWorkerProcess pid=2588) INFO 02-24 05:13:18 multiproc_worker_utils.py:229] Worker ready; awaiting tasks
INFO 02-24 05:13:18 __init__.py:207] Automatically detected platform cuda.
(VllmWorkerProcess pid=2587) INFO 02-24 05:13:19 multiproc_worker_utils.py:229] Worker ready; awaiting tasks
INFO 02-24 05:13:19 __init__.py:207] Automatically detected platform cuda.
INFO 02-24 05:13:19 __init__.py:207] Automatically detected platform cuda.
(VllmWorkerProcess pid=2586) INFO 02-24 05:13:19 multiproc_worker_utils.py:229] Worker ready; awaiting tasks
(VllmWorkerProcess pid=2590) INFO 02-24 05:13:19 multiproc_worker_utils.py:229] Worker ready; awaiting tasks
(VllmWorkerProcess pid=2585) INFO 02-24 05:13:19 cuda.py:229] Using Flash Attention backend.
(VllmWorkerProcess pid=2591) INFO 02-24 05:13:20 cuda.py:229] Using Flash Attention backend.
(VllmWorkerProcess pid=2589) INFO 02-24 05:13:20 cuda.py:229] Using Flash Attention backend.
(VllmWorkerProcess pid=2588) INFO 02-24 05:13:20 cuda.py:229] Using Flash Attention backend.
(VllmWorkerProcess pid=2587) INFO 02-24 05:13:20 cuda.py:229] Using Flash Attention backend.
(VllmWorkerProcess pid=2590) INFO 02-24 05:13:20 cuda.py:229] Using Flash Attention backend.
(VllmWorkerProcess pid=2586) INFO 02-24 05:13:20 cuda.py:229] Using Flash Attention backend.
(VllmWorkerProcess pid=2590) INFO 02-24 05:13:22 utils.py:916] Found nccl from library libnccl.so.2
INFO 02-24 05:13:22 utils.py:916] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=2590) INFO 02-24 05:13:22 pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=2586) INFO 02-24 05:13:22 utils.py:916] Found nccl from library libnccl.so.2
INFO 02-24 05:13:22 pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=2586) INFO 02-24 05:13:22 pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=2585) INFO 02-24 05:13:22 utils.py:916] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=2585) INFO 02-24 05:13:22 pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=2587) INFO 02-24 05:13:22 utils.py:916] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=2587) INFO 02-24 05:13:22 pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=2591) INFO 02-24 05:13:22 utils.py:916] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=2591) INFO 02-24 05:13:22 pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=2589) INFO 02-24 05:13:22 utils.py:916] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=2589) INFO 02-24 05:13:22 pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=2588) INFO 02-24 05:13:22 utils.py:916] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=2588) INFO 02-24 05:13:22 pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=2586) WARNING 02-24 05:13:23 custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
WARNING 02-24 05:13:23 custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=2590) WARNING 02-24 05:13:23 custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=2587) WARNING 02-24 05:13:23 custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=2588) WARNING 02-24 05:13:23 custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=2591) WARNING 02-24 05:13:23 custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=2585) WARNING 02-24 05:13:23 custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=2589) WARNING 02-24 05:13:23 custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
INFO 02-24 05:13:23 shm_broadcast.py:258] vLLM message queue communication handle: Handle(connect_ip='127.0.0.1', local_reader_ranks=[1, 2, 3, 4, 5, 6, 7], buffer_handle=(7, 4194304, 6, 'psm_7015f6ee'), local_subscribe_port=45455, remote_subscribe_port=None)
(VllmWorkerProcess pid=2587) INFO 02-24 05:13:23 model_runner.py:1110] Starting to load model /home/user01/llm-models/llama-vision-46b...
(VllmWorkerProcess pid=2585) INFO 02-24 05:13:23 model_runner.py:1110] Starting to load model /home/user01/llm-models/llama-vision-46b...
(VllmWorkerProcess pid=2588) INFO 02-24 05:13:23 model_runner.py:1110] Starting to load model /home/user01/llm-models/llama-vision-46b...
(VllmWorkerProcess pid=2589) INFO 02-24 05:13:23 model_runner.py:1110] Starting to load model /home/user01/llm-models/llama-vision-46b...
INFO 02-24 05:13:23 model_runner.py:1110] Starting to load model /home/user01/llm-models/llama-vision-46b...
(VllmWorkerProcess pid=2586) INFO 02-24 05:13:23 model_runner.py:1110] Starting to load model /home/user01/llm-models/llama-vision-46b...
(VllmWorkerProcess pid=2590) INFO 02-24 05:13:23 model_runner.py:1110] Starting to load model /home/user01/llm-models/llama-vision-46b...
(VllmWorkerProcess pid=2591) INFO 02-24 05:13:23 model_runner.py:1110] Starting to load model /home/user01/llm-models/llama-vision-46b...
Loading safetensors checkpoint shards: 0% Completed | 0/10 [00:00<?, ?it/s]
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] Exception in worker VllmWorkerProcess while processing method load_model.
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] Traceback (most recent call last):
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] File "/home/user01/vllm/lib/python3.12/site-packages/vllm/executor/multiproc_worker_utils.py", line 236, in _run_worker_process
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] output = run_method(worker, method, args, kwargs)
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] File "/home/user01/vllm/lib/python3.12/site-packages/vllm/utils.py", line 2196, in run_method
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] return func(*args, **kwargs)
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] File "/home/user01/vllm/lib/python3.12/site-packages/vllm/worker/worker.py", line 183, in load_model
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] self.model_runner.load_model()
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] File "/home/user01/vllm/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1112, in load_model
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] self.model = get_model(vllm_config=self.vllm_config)
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] File "/home/user01/vllm/lib/python3.12/site-packages/vllm/model_executor/model_loader/__init__.py", line 14, in get_model
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] return loader.load_model(vllm_config=vllm_config)
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] File "/home/user01/vllm/lib/python3.12/site-packages/vllm/model_executor/model_loader/loader.py", line 409, in load_model
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] loaded_weights = model.load_weights(
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] File "/home/user01/vllm/lib/python3.12/site-packages/vllm/model_executor/models/mllama.py", line 1468, in load_weights
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] weight_loader(param, loaded_weight)
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] File "/home/user01/vllm/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 1121, in weight_loader
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] assert param_data.shape == loaded_weight.shape
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=2591) ERROR 02-24 05:13:24 multiproc_worker_utils.py:242] AssertionError
[Worker processes with pids 2587, 2588, 2585, 2589, and 2590 logged identical "Exception in worker VllmWorkerProcess while processing method load_model" tracebacks, each ending in the same AssertionError in weight_loader; the repeated blocks are omitted here.]
ERROR 02-24 05:13:25 engine.py:400]
Traceback (most recent call last):
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 391, in run_mp_engine
engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 124, in from_engine_args
return cls(ipc_path=ipc_path,
^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 76, in init
self.engine = LLMEngine(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/engine/llm_engine.py", line 273, in init
self.model_executor = executor_class(vllm_config=vllm_config, )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/executor/executor_base.py", line 271, in init
super().init(*args, **kwargs)
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/executor/executor_base.py", line 52, in init
self._init_executor()
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/executor/mp_distributed_executor.py", line 125, in _init_executor
self._run_workers("load_model",
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/executor/mp_distributed_executor.py", line 185, in _run_workers
driver_worker_output = run_method(self.driver_worker, sent_method,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/utils.py", line 2196, in run_method
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/worker/worker.py", line 183, in load_model
self.model_runner.load_model()
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1112, in load_model
self.model = get_model(vllm_config=self.vllm_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/model_executor/model_loader/init.py", line 14, in get_model
return loader.load_model(vllm_config=vllm_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/model_executor/model_loader/loader.py", line 409, in load_model
loaded_weights = model.load_weights(
^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/model_executor/models/mllama.py", line 1468, in load_weights
weight_loader(param, loaded_weight)
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 1121, in weight_loader
assert param_data.shape == loaded_weight.shape
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
ERROR 02-24 05:13:25 multiproc_worker_utils.py:124] Worker VllmWorkerProcess pid 2588 died, exit code: -15
ERROR 02-24 05:13:25 multiproc_worker_utils.py:124] Worker VllmWorkerProcess pid 2589 died, exit code: -15
INFO 02-24 05:13:25 multiproc_worker_utils.py:128] Killing local vLLM worker processes
Process SpawnProcess-1:
Traceback (most recent call last):
File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 402, in run_mp_engine
raise e
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 391, in run_mp_engine
engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 124, in from_engine_args
return cls(ipc_path=ipc_path,
^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 76, in init
self.engine = LLMEngine(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/engine/llm_engine.py", line 273, in init
self.model_executor = executor_class(vllm_config=vllm_config, )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/executor/executor_base.py", line 271, in init
super().init(*args, **kwargs)
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/executor/executor_base.py", line 52, in init
self._init_executor()
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/executor/mp_distributed_executor.py", line 125, in _init_executor
self._run_workers("load_model",
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/executor/mp_distributed_executor.py", line 185, in _run_workers
driver_worker_output = run_method(self.driver_worker, sent_method,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/utils.py", line 2196, in run_method
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/worker/worker.py", line 183, in load_model
self.model_runner.load_model()
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1112, in load_model
self.model = get_model(vllm_config=self.vllm_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/model_executor/model_loader/init.py", line 14, in get_model
return loader.load_model(vllm_config=vllm_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/model_executor/model_loader/loader.py", line 409, in load_model
loaded_weights = model.load_weights(
^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/model_executor/models/mllama.py", line 1468, in load_weights
weight_loader(param, loaded_weight)
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 1121, in weight_loader
assert param_data.shape == loaded_weight.shape
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
[rank0]:[W224 05:13:25.511680124 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
Traceback (most recent call last):
File "/home/user01/vllm/bin/vllm", line 8, in
sys.exit(main())
^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/entrypoints/cli/main.py", line 73, in main
args.dispatch_function(args)
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/entrypoints/cli/serve.py", line 34, in cmd
uvloop.run(run_server(args))
File "/home/user01/vllm/lib/python3.12/site-packages/uvloop/init.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/home/user01/vllm/lib/python3.12/site-packages/uvloop/init.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 947, in run_server
async with build_async_engine_client(args) as engine_client:
File "/usr/lib/python3.12/contextlib.py", line 210, in aenter
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 139, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
File "/usr/lib/python3.12/contextlib.py", line 210, in aenter
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/home/user01/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 233, in build_async_engine_client_from_engine_args
raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
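Every worker dies on the same assertion in linear.py's weight_loader: a tensor read from the checkpoint does not match the shape of the parameter vLLM allocated for that tensor-parallel rank. Note that vLLM auto-detected quantization=bitsandbytes for this checkpoint (see the config.py:628 warnings and the llm_engine config line above), which makes a pre-quantized bitsandbytes export the first thing to rule out. Below is a diagnostic sketch, assuming the standard Hugging Face layout (config.json plus *.safetensors shards in the model directory), to see what the checkpoint actually declares and contains:

# Check the declared quantization and a few raw tensor shapes.
import json
from pathlib import Path
from safetensors import safe_open

model_dir = Path("/home/user01/llm-models/llama-vision-46b")

# vLLM auto-detects bitsandbytes from this field of config.json.
cfg = json.loads((model_dir / "config.json").read_text())
print("quantization_config:", cfg.get("quantization_config"))

# Print the first few tensor names/shapes/dtypes from one shard.
# bitsandbytes 4-bit exports store packed uint8 buffers whose shapes
# cannot match the bf16 parameters vLLM allocates per TP rank.
shard = sorted(model_dir.glob("*.safetensors"))[0]
with safe_open(shard, framework="pt", device="cpu") as f:
    for name in list(f.keys())[:10]:
        t = f.get_tensor(name)
        print(name, tuple(t.shape), t.dtype)

If quantization_config does turn out to be a bitsandbytes config, retrying with the unquantized bf16 checkpoint, or with a smaller tensor_parallel_size, would isolate whether the mismatch comes from the quantized weight format or from the sharding.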