robgreenberg3 committed · verified
Commit 7a77b3a · 1 Parent(s): a1b43e5

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -75,7 +75,7 @@ vLLM also supports OpenAI-compatible serving. See the [documentation](https://do
  <summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
 
  ```bash
- $ podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
+ podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
    --ipc=host \
    --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
    --env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \