Random, endless gibberish appears after roughly 1000-2000 characters of a reply.

#1
by su400 - opened

The model starts producing random, endless gibberish after generating roughly 1000-2000 characters of a reply. Here is my startup command.
```shell
RAY_IGNORE_UNHANDLED_ERRORS=1 python -m vllm.entrypoints.openai.api_server \
    --model /home/kkk/ai/models/DeepSeek-R1-0528-GPTQ \
    --tensor-parallel-size 8 \
    --pipeline-parallel-size 2 \
    --host 0.0.0.0 \
    --port 9997 \
    --enable-prefix-caching \
    --served-model-name DeepSeek-R1 \
    --gpu-memory-utilization 0.95 \
    --trust-remote-code \
    --max-num-batched-tokens 32768 \
    --max-model-len 65535 \
    --dtype float16
```
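As a sanity check with this configuration, the server can be queried through its OpenAI-compatible endpoint once it is up (a sketch assuming the server is running locally on the port and served model name from the command above):

```shell
# Hypothetical smoke test against the local vLLM server started above;
# adjust host/port if the server runs elsewhere.
curl http://localhost:9997/v1/completions \
    -H 'Content-Type: application/json' \
    -d '{"model": "DeepSeek-R1", "prompt": "Hello", "max_tokens": 32}'
```

If the gibberish reproduces, it should show up in the `text` field of the JSON response after a long enough `max_tokens`.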

IST Austria Distributed Algorithms and Systems Lab org

@su400 which version of vLLM and compressed_tensors are you using?
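One way to report both versions precisely is via Python's `importlib.metadata` (a minimal sketch; the PyPI package names `vllm` and `compressed-tensors` are assumed):

```python
from importlib.metadata import version, PackageNotFoundError

# Print the installed version of each package, or note that it is missing.
for pkg in ("vllm", "compressed-tensors"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```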

IST Austria Distributed Algorithms and Systems Lab org

@su400 and by the way, which GPU? We observed that there are currently issues for execution on GB200. Execution on H100/H200 yields coherent outputs.

vllm 0.9.1.dev85+g0f71e2403 (installed from /home/kkk/ai/vllm), compressed-tensors 0.9.4, on 16× L40S.

The problem is gone now. I switched to vLLM 0.9.01, but it is slower than the original R1 AWQ version: on the same machine I used to get 40 tokens/s and now get 30 tokens/s. This is presumably down to differences in vLLM's optimizations between versions.
