Mistral-small

#19 opened by Melkiss
Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -164,10 +164,10 @@ to implement production-ready inference pipelines.
 
 **_Installation_**
 
-Make sure you install [`vLLM >= 0.8.1`](https://github.com/vllm-project/vllm/releases/tag/v0.8.1):
+Make sure you install [`vLLM nightly`](https://github.com/vllm-project/vllm/):
 
 ```
-pip install vllm --upgrade
+pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly --upgrade
 ```
 
 Doing so should automatically install [`mistral_common >= 1.5.4`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.4).
@@ -177,7 +177,7 @@ To check:
 python -c "import mistral_common; print(mistral_common.__version__)"
 ```
 
-You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
+You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39), followed by a nightly install of vllm as shown above.
 
 #### Server
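
The README's version check only prints `mistral_common.__version__`; confirming it satisfies `>= 1.5.4` is left to the reader. As a minimal sketch (not part of this PR, and assuming plain `X.Y.Z` version strings with no pre-release suffixes), the comparison can be done numerically rather than as strings, since e.g. `"1.10.0"` sorts before `"1.5.4"` lexicographically:

```python
def parse_version(v: str) -> tuple:
    # Split "1.5.4" into (1, 5, 4); assumes purely numeric components.
    return tuple(int(part) for part in v.split("."))

def meets_minimum(installed: str, minimum: str) -> bool:
    # Numeric tuple comparison avoids the string-comparison pitfall
    # where "1.10.0" < "1.5.4" lexicographically.
    return parse_version(installed) >= parse_version(minimum)

# Example: check a reported mistral_common version against the README's floor.
print(meets_minimum("1.5.4", "1.5.4"))   # exact minimum is acceptable
print(meets_minimum("1.10.0", "1.5.4"))  # numerically newer, though lexicographically "smaller"
```

For real-world version strings (release candidates, dev builds), the `packaging.version.Version` class handles the full PEP 440 grammar and would be the more robust choice.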