zhiqing committed (verified) · Commit b9b61a6 · 1 Parent(s): 3af83d7

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -106,8 +106,6 @@ outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
 output_text = tokenizer.decode(outputs[0])
 ```
 
-We recommend using the following set of parameters for inference. Note that our model does not have the default system_prompt.
-
 ### Use with vLLM
 ```SHELL
 pip install vllm --upgrade
@@ -116,6 +114,8 @@ pip install vllm --upgrade
 ```SHELL
 vllm serve zhiqing/Hunyuan-MT-Chimera-7B-INT8
 ```
+
+We recommend using the following set of parameters for inference. Note that our model does not have the default system_prompt.
 ```json
 {
   "top_k": 20,
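The recommended-parameters JSON is truncated in this diff (only `top_k: 20` is visible), but a minimal sketch of how such parameters would be passed to the server started by `vllm serve` might look like the following. It assumes vLLM's default host/port and OpenAI-compatible `/v1/chat/completions` endpoint; the example prompt and any parameters beyond `top_k` are illustrative, not taken from the README.

```python
import json

# Recommended decoding parameters from the README. Only top_k is visible
# in the truncated diff above; other recommended values are omitted here
# rather than guessed.
recommended_params = {"top_k": 20}

# Request body for vLLM's OpenAI-compatible /v1/chat/completions endpoint.
# Note the README says the model has no default system_prompt, so only a
# user message is sent.
payload = {
    "model": "zhiqing/Hunyuan-MT-Chimera-7B-INT8",
    "messages": [
        {"role": "user", "content": "Translate the following text into English:\n\n你好"}
    ],
    **recommended_params,
}

# This body would be POSTed to http://localhost:8000/v1/chat/completions
# (vLLM's default bind address) with Content-Type: application/json.
print(json.dumps(payload, ensure_ascii=False, indent=2))
```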