torch compile compatibility issue
#38
by axiomlab
I'm trying to run inference with vLLM and I'm getting this error:
[config.py:3785] torch.compile is turned on, but the model meta-llama/Llama-4-Scout-17B-16E-Instruct does not support it. Please open an issue on GitHub if you want it to be supported.
I'm using the following vLLM version:
Version: 0.8.3rc2.dev30+g95d63f38
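For reference, this is roughly the kind of call I'm making (a minimal sketch, not my exact script; the prompt and sampling settings are placeholders). Passing enforce_eager=True is only a possible workaround I'm assuming might skip torch.compile, I haven't confirmed it avoids this error:

```python
from vllm import LLM, SamplingParams

# Minimal sketch of the invocation that hits the error.
# enforce_eager=True is an assumption on my part: it should fall back to
# eager execution instead of torch.compile, but I haven't verified this
# actually suppresses the warning/error above.
llm = LLM(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    enforce_eager=True,
)

params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["Hello, how are you?"], params)
print(outputs[0].outputs[0].text)
```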