DhiyaEddine committed on
Commit f86c138 · verified · 1 Parent(s): 9553f2e

Update README.md

Files changed (1)
  1. README.md +7 -2
README.md CHANGED
````diff
@@ -57,6 +57,7 @@ For more details about the training protocol of this model, please refer to the
 
 Currently to use this model you can either rely on Hugging Face `transformers`, `vLLM` or our custom fork of `llama.cpp` library.
 
+
 ## Inference
 
 Make sure to install the latest version of `transformers` or `vllm`, eventually install these packages from source:
@@ -65,7 +66,11 @@ Make sure to install the latest version of `transformers` or `vllm`, eventually
 pip install git+https://github.com/huggingface/transformers.git
 ```
 
-Refer to [the official vLLM documentation for more details on building vLLM from source](https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html#build-wheel-from-source).
+For vLLM, make sure to install `vllm>=0.9.0`:
+
+```bash
+pip install "vllm>=0.9.0"
+```
 
 ### 🤗 transformers
 
@@ -91,7 +96,7 @@ model = AutoModelForCausalLM.from_pretrained(
 For vLLM, simply start a server by executing the command below:
 
 ```
-# pip install vllm
+# pip install vllm>=0.9.0
 vllm serve tiiuae/Falcon-H1-1B-Instruct --tensor-parallel-size 2 --data-parallel-size 1
 ```
````
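For context on the `vllm serve` command in the diff above: vLLM exposes an OpenAI-compatible HTTP API when serving a model. A minimal sketch of how a client request to that server could be assembled, assuming the default host and port (`localhost:8000`) and using only the Python standard library (the helper name `build_chat_request` is illustrative, not part of any library):

```python
# Sketch of the request body sent to a running `vllm serve` instance
# through its OpenAI-compatible /v1/chat/completions endpoint.
# Assumes the server's default address, localhost:8000.
import json
import urllib.request

def build_chat_request(model: str, prompt: str, max_tokens: int = 128):
    """Build the URL and JSON payload for a chat-completions call."""
    url = "http://localhost:8000/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return url, json.dumps(payload).encode("utf-8")

url, body = build_chat_request("tiiuae/Falcon-H1-1B-Instruct", "Hello!")
# Sending the request requires the server from `vllm serve` to be running:
# req = urllib.request.Request(url, data=body,
#                              headers={"Content-Type": "application/json"})
# print(json.load(urllib.request.urlopen(req))["choices"][0]["message"]["content"])
```

The payload shape follows the OpenAI chat-completions format that vLLM mirrors, so any OpenAI-compatible client library can be pointed at the same endpoint instead.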