Can't run with vllm
#1 by EmilPi - opened
I have downloaded the Q8_0 .gguf file to /mnt/models/gguf/mistralai/Devstral-Small-2505_gguf/devstralQ8_0.gguf. I have 4 GPUs. I run:
vllm serve /mnt/models/gguf/mistralai/Devstral-Small-2505_gguf/devstralQ8_0.gguf --tokenizer mistralai/Devstral-Small-2505 --hf-config-path mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 4 --max-model-len 32768
vLLM exits (after some loading time) with the error:
ERROR 05-21 23:27:56 [engine.py:448] Unknown gguf model_type: transformer
I tried adding -q gguf and changing --load_format mistral to --load_format gguf, without success.
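For reference, the full variant command with those changes applied (everything else unchanged) was:

vllm serve /mnt/models/gguf/mistralai/Devstral-Small-2505_gguf/devstralQ8_0.gguf --tokenizer mistralai/Devstral-Small-2505 --hf-config-path mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format gguf -q gguf --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 4 --max-model-len 32768

This variant also failed.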
Full log:
ai@builder:~/3rdparty/vllm_dir$ vllm serve /mnt/models/gguf/mistralai/Devstral-Small-2505_gguf/devstralQ8_0.gguf --tokenizer mistralai/Devstral-Small-2505 --hf-config-path mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 4 --max-model-len 32768
INFO 05-21 23:27:29 [__init__.py:239] Automatically detected platform cuda.
INFO 05-21 23:27:34 [api_server.py:1043] vLLM API server version 0.8.5.post1
INFO 05-21 23:27:34 [api_server.py:1044] args: Namespace(subparser='serve', model_tag='/mnt/models/gguf/mistralai/Devstral-Small-2505_gguf/devstralQ8_0.gguf', config='', host=None, port=8000, uvicorn_log_level='info', disable_uvicorn_access_log=False, allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=True, tool_call_parser='mistral', tool_parser_plugin='', model='/mnt/models/gguf/mistralai/Devstral-Small-2505_gguf/devstralQ8_0.gguf', task='auto', tokenizer='mistralai/Devstral-Small-2505', hf_config_path='mistralai/Devstral-Small-2505', skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='mistral', trust_remote_code=False, allowed_local_media_path=None, load_format='mistral', download_dir=None, model_loader_extra_config={}, use_tqdm_on_load=True, config_format='mistral', dtype='auto', max_model_len=32768, guided_decoding_backend='auto', reasoning_parser=None, logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=4, data_parallel_size=1, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, disable_custom_all_reduce=False, block_size=None, gpu_memory_utilization=0.9, swap_space=4, kv_cache_dtype='auto', num_gpu_blocks_override=None, enable_prefix_caching=None, prefix_caching_hash_algo='builtin', cpu_offload_gb=0, calculate_kv_scales=False, disable_sliding_window=False, use_v2_block_manager=True, seed=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_token=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config={}, limit_mm_per_prompt={}, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=None, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=None, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', speculative_config=None, ignore_patterns=[], served_model_name=None, qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, max_num_batched_tokens=None, max_num_seqs=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, num_lookahead_slots=0, scheduler_delay_factor=0.0, preemption_mode=None, num_scheduler_steps=1, multi_step_stream_outputs=True, scheduling_policy='fcfs', enable_chunked_prefill=None, disable_chunked_mm_input=False, scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', worker_extension_cls='', generation_config='auto', override_generation_config=None, enable_sleep_mode=False, additional_config=None, enable_reasoning=False, disable_cascade_attn=False, disable_log_requests=False, 
max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, enable_server_load_tracking=False, dispatch_function=<function ServeSubcommand.cmd at 0x736e1f94c9d0>)
INFO 05-21 23:27:34 [config.py:2968] Downcasting torch.float32 to torch.float16.
INFO 05-21 23:27:40 [config.py:717] This model supports multiple tasks: {'generate', 'embed', 'reward', 'classify', 'score'}. Defaulting to 'generate'.
WARNING 05-21 23:27:40 [config.py:830] gguf quantization is not fully optimized yet. The speed can be slower than non-quantized models.
WARNING 05-21 23:27:40 [arg_utils.py:1658] --quantization gguf is not supported by the V1 Engine. Falling back to V0.
INFO 05-21 23:27:40 [config.py:1770] Defaulting to use mp for distributed inference
INFO 05-21 23:27:40 [api_server.py:246] Started engine process with PID 831026
/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/mistral_common/tokens/tokenizers/tekken.py:184: FutureWarning: Special tokens not found in /home/ai/.cache/huggingface/hub/models--mistralai--Devstral-Small-2505/snapshots/a18746ea7ad6e2241ef0358a536dff01754b4aa2/tekken.json and default to ({'rank': 0, 'token_str': <SpecialTokens.unk: '<unk>'>, 'is_control': True}, {'rank': 1, 'token_str': <SpecialTokens.bos: '<s>'>, 'is_control': True}, {'rank': 2, 'token_str': <SpecialTokens.eos: '</s>'>, 'is_control': True}, {'rank': 3, 'token_str': <SpecialTokens.begin_inst: '[INST]'>, 'is_control': True}, {'rank': 4, 'token_str': <SpecialTokens.end_inst: '[/INST]'>, 'is_control': True}, {'rank': 5, 'token_str': <SpecialTokens.begin_tools: '[AVAILABLE_TOOLS]'>, 'is_control': True}, {'rank': 6, 'token_str': <SpecialTokens.end_tools: '[/AVAILABLE_TOOLS]'>, 'is_control': True}, {'rank': 7, 'token_str': <SpecialTokens.begin_tool_results: '[TOOL_RESULTS]'>, 'is_control': True}, {'rank': 8, 'token_str': <SpecialTokens.end_tool_results: '[/TOOL_RESULTS]'>, 'is_control': True}, {'rank': 9, 'token_str': <SpecialTokens.tool_calls: '[TOOL_CALLS]'>, 'is_control': True}, {'rank': 10, 'token_str': <SpecialTokens.img: '[IMG]'>, 'is_control': True}, {'rank': 11, 'token_str': <SpecialTokens.pad: '<pad>'>, 'is_control': True}, {'rank': 12, 'token_str': <SpecialTokens.img_break: '[IMG_BREAK]'>, 'is_control': True}, {'rank': 13, 'token_str': <SpecialTokens.img_end: '[IMG_END]'>, 'is_control': True}, {'rank': 14, 'token_str': <SpecialTokens.prefix: '[PREFIX]'>, 'is_control': True}, {'rank': 15, 'token_str': <SpecialTokens.middle: '[MIDDLE]'>, 'is_control': True}, {'rank': 16, 'token_str': <SpecialTokens.suffix: '[SUFFIX]'>, 'is_control': True}, {'rank': 17, 'token_str': <SpecialTokens.begin_system: '[SYSTEM_PROMPT]'>, 'is_control': True}, {'rank': 18, 'token_str': <SpecialTokens.end_system: '[/SYSTEM_PROMPT]'>, 'is_control': True}, {'rank': 19, 'token_str': <SpecialTokens.begin_tool_content: '[TOOL_CONTENT]'>, 'is_control': True}). This behavior will be deprecated going forward. Please update your tokenizer file and include all special tokens you need.
warnings.warn(
INFO 05-21 23:27:42 [__init__.py:239] Automatically detected platform cuda.
INFO 05-21 23:27:45 [llm_engine.py:240] Initializing a V0 LLM engine (v0.8.5.post1) with config: model='/mnt/models/gguf/mistralai/Devstral-Small-2505_gguf/devstralQ8_0.gguf', speculative_config=None, tokenizer='mistralai/Devstral-Small-2505', skip_tokenizer_init=False, tokenizer_mode=mistral, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=32768, download_dir=None, load_format=gguf, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=gguf, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=/mnt/models/gguf/mistralai/Devstral-Small-2505_gguf/devstralQ8_0.gguf, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=None, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=True,
/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/mistral_common/tokens/tokenizers/tekken.py:184: FutureWarning: Special tokens not found in /home/ai/.cache/huggingface/hub/models--mistralai--Devstral-Small-2505/snapshots/a18746ea7ad6e2241ef0358a536dff01754b4aa2/tekken.json and default to ({'rank': 0, 'token_str': <SpecialTokens.unk: '<unk>'>, 'is_control': True}, {'rank': 1, 'token_str': <SpecialTokens.bos: '<s>'>, 'is_control': True}, {'rank': 2, 'token_str': <SpecialTokens.eos: '</s>'>, 'is_control': True}, {'rank': 3, 'token_str': <SpecialTokens.begin_inst: '[INST]'>, 'is_control': True}, {'rank': 4, 'token_str': <SpecialTokens.end_inst: '[/INST]'>, 'is_control': True}, {'rank': 5, 'token_str': <SpecialTokens.begin_tools: '[AVAILABLE_TOOLS]'>, 'is_control': True}, {'rank': 6, 'token_str': <SpecialTokens.end_tools: '[/AVAILABLE_TOOLS]'>, 'is_control': True}, {'rank': 7, 'token_str': <SpecialTokens.begin_tool_results: '[TOOL_RESULTS]'>, 'is_control': True}, {'rank': 8, 'token_str': <SpecialTokens.end_tool_results: '[/TOOL_RESULTS]'>, 'is_control': True}, {'rank': 9, 'token_str': <SpecialTokens.tool_calls: '[TOOL_CALLS]'>, 'is_control': True}, {'rank': 10, 'token_str': <SpecialTokens.img: '[IMG]'>, 'is_control': True}, {'rank': 11, 'token_str': <SpecialTokens.pad: '<pad>'>, 'is_control': True}, {'rank': 12, 'token_str': <SpecialTokens.img_break: '[IMG_BREAK]'>, 'is_control': True}, {'rank': 13, 'token_str': <SpecialTokens.img_end: '[IMG_END]'>, 'is_control': True}, {'rank': 14, 'token_str': <SpecialTokens.prefix: '[PREFIX]'>, 'is_control': True}, {'rank': 15, 'token_str': <SpecialTokens.middle: '[MIDDLE]'>, 'is_control': True}, {'rank': 16, 'token_str': <SpecialTokens.suffix: '[SUFFIX]'>, 'is_control': True}, {'rank': 17, 'token_str': <SpecialTokens.begin_system: '[SYSTEM_PROMPT]'>, 'is_control': True}, {'rank': 18, 'token_str': <SpecialTokens.end_system: '[/SYSTEM_PROMPT]'>, 'is_control': True}, {'rank': 19, 'token_str': <SpecialTokens.begin_tool_content: '[TOOL_CONTENT]'>, 'is_control': True}). This behavior will be deprecated going forward. Please update your tokenizer file and include all special tokens you need.
warnings.warn(
WARNING 05-21 23:27:47 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 32 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
INFO 05-21 23:27:47 [cuda.py:292] Using Flash Attention backend.
INFO 05-21 23:27:50 [__init__.py:239] Automatically detected platform cuda.
INFO 05-21 23:27:50 [__init__.py:239] Automatically detected platform cuda.
INFO 05-21 23:27:50 [__init__.py:239] Automatically detected platform cuda.
(VllmWorkerProcess pid=831116) INFO 05-21 23:27:53 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=831117) INFO 05-21 23:27:53 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=831116) INFO 05-21 23:27:53 [cuda.py:292] Using Flash Attention backend.
(VllmWorkerProcess pid=831117) INFO 05-21 23:27:53 [cuda.py:292] Using Flash Attention backend.
(VllmWorkerProcess pid=831118) INFO 05-21 23:27:53 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=831118) INFO 05-21 23:27:53 [cuda.py:292] Using Flash Attention backend.
(VllmWorkerProcess pid=831116) INFO 05-21 23:27:55 [utils.py:1055] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=831118) INFO 05-21 23:27:55 [utils.py:1055] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=831116) INFO 05-21 23:27:55 [pynccl.py:69] vLLM is using nccl==2.21.5
INFO 05-21 23:27:55 [utils.py:1055] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=831118) INFO 05-21 23:27:55 [pynccl.py:69] vLLM is using nccl==2.21.5
INFO 05-21 23:27:55 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=831117) INFO 05-21 23:27:55 [utils.py:1055] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=831117) INFO 05-21 23:27:55 [pynccl.py:69] vLLM is using nccl==2.21.5
WARNING 05-21 23:27:55 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=831117) WARNING 05-21 23:27:55 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=831116) WARNING 05-21 23:27:55 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=831118) WARNING 05-21 23:27:55 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
INFO 05-21 23:27:55 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_66cd974d'), local_subscribe_addr='ipc:///tmp/6c51133d-2719-46f9-8665-d78baee4a362', remote_subscribe_addr=None, remote_addr_ipv6=False)
(VllmWorkerProcess pid=831118) INFO 05-21 23:27:55 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
(VllmWorkerProcess pid=831117) INFO 05-21 23:27:55 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
(VllmWorkerProcess pid=831116) INFO 05-21 23:27:55 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
INFO 05-21 23:27:55 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
INFO 05-21 23:27:55 [model_runner.py:1108] Starting to load model /mnt/models/gguf/mistralai/Devstral-Small-2505_gguf/devstralQ8_0.gguf...
(VllmWorkerProcess pid=831117) INFO 05-21 23:27:56 [model_runner.py:1108] Starting to load model /mnt/models/gguf/mistralai/Devstral-Small-2505_gguf/devstralQ8_0.gguf...
(VllmWorkerProcess pid=831118) INFO 05-21 23:27:56 [model_runner.py:1108] Starting to load model /mnt/models/gguf/mistralai/Devstral-Small-2505_gguf/devstralQ8_0.gguf...
(VllmWorkerProcess pid=831116) INFO 05-21 23:27:56 [model_runner.py:1108] Starting to load model /mnt/models/gguf/mistralai/Devstral-Small-2505_gguf/devstralQ8_0.gguf...
ERROR 05-21 23:27:56 [engine.py:448] Unknown gguf model_type: transformer
ERROR 05-21 23:27:56 [engine.py:448] Traceback (most recent call last):
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 436, in run_mp_engine
ERROR 05-21 23:27:56 [engine.py:448] engine = MQLLMEngine.from_vllm_config(
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 128, in from_vllm_config
ERROR 05-21 23:27:56 [engine.py:448] return cls(
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 82, in __init__
ERROR 05-21 23:27:56 [engine.py:448] self.engine = LLMEngine(*args, **kwargs)
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 275, in __init__
ERROR 05-21 23:27:56 [engine.py:448] self.model_executor = executor_class(vllm_config=vllm_config)
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 286, in __init__
ERROR 05-21 23:27:56 [engine.py:448] super().__init__(*args, **kwargs)
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 52, in __init__
ERROR 05-21 23:27:56 [engine.py:448] self._init_executor()
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 125, in _init_executor
ERROR 05-21 23:27:56 [engine.py:448] self._run_workers("load_model",
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 185, in _run_workers
ERROR 05-21 23:27:56 [engine.py:448] driver_worker_output = run_method(self.driver_worker, sent_method,
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/utils.py", line 2456, in run_method
ERROR 05-21 23:27:56 [engine.py:448] return func(*args, **kwargs)
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/worker/worker.py", line 203, in load_model
ERROR 05-21 23:27:56 [engine.py:448] self.model_runner.load_model()
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1111, in load_model
ERROR 05-21 23:27:56 [engine.py:448] self.model = get_model(vllm_config=self.vllm_config)
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py", line 14, in get_model
ERROR 05-21 23:27:56 [engine.py:448] return loader.load_model(vllm_config=vllm_config)
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 1398, in load_model
ERROR 05-21 23:27:56 [engine.py:448] gguf_weights_map = self._get_gguf_weights_map(model_config)
ERROR 05-21 23:27:56 [engine.py:448] File "/home/ai/miniconda3/envs/vllm_learn/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 1371, in _get_gguf_weights_map
ERROR 05-21 23:27:56 [engine.py:448] raise RuntimeError(f"Unknown gguf model_type: {model_type}")
ERROR 05-21 23:27:56 [engine.py:448] RuntimeError: Unknown gguf model_type: transformer
ERROR 05-21 23:27:56 [multiproc_worker_utils.py:120] Worker VllmWorkerProcess pid 831117 died, exit code: -15
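For anyone hitting the same thing: judging from the traceback, the failure is in _get_gguf_weights_map (vllm/model_executor/model_loader/loader.py), which matches the HF config's model_type against the gguf package's architecture table. With --config_format mistral the parsed config evidently reports model_type "transformer", and gguf has no architecture registered under that name, so the weight mapping fails before any tensors are loaded. Below is a minimal sketch of the failing lookup, assuming it is a faithful simplification of the vLLM loader (the names come from the gguf package's public constants):

import gguf  # the gguf Python package, which vLLM uses for weight-name mapping

model_type = "transformer"  # what the mistral config format reports here
# Find a GGUF architecture whose registered name matches model_type;
# gguf.MODEL_ARCH_NAMES maps MODEL_ARCH enum members to names like "llama".
arch = next(
    (key for key, value in gguf.MODEL_ARCH_NAMES.items() if value == model_type),
    None,
)
if arch is None:
    # This is the branch that raised in the log above.
    raise RuntimeError(f"Unknown gguf model_type: {model_type}")

So the mapping can only succeed when the config resolves to a model_type that gguf knows (e.g. "llama"); a config path/format combination that yields such a model_type would presumably be needed for GGUF loading here.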