LG-AI-EXAONE committed
Commit 8e04bab · Parent(s): a9f6593
Files changed (1): README.md +1 -1

README.md CHANGED
@@ -110,7 +110,7 @@ You can run EXAONE models locally using llama.cpp by following these steps:
  llama-server -m EXAONE-4.0-32B-Q4_K_M.gguf \
    -c 131072 -fa -ngl 64 \
    --temp 0.6 --top-p 0.95 \
- --jinja --chat-template-format chat_template.jinja \
+ --jinja --chat-template-file chat_template.jinja \
    --host 0.0.0.0 --port 8820 \
    -a EXAONE-4.0-32B-Q4_K_M
  ```
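Once the corrected command is running, llama-server exposes its OpenAI-compatible HTTP API on the configured port. A minimal sketch of a chat request against it, assuming the server is reachable at localhost:8820 and using the model alias set by `-a` above (the prompt is a placeholder):

```bash
# Sketch: query llama-server's OpenAI-compatible endpoint.
# Model name matches the alias passed via -a; sampling values
# mirror the --temp/--top-p flags in the command above.
curl http://localhost:8820/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "EXAONE-4.0-32B-Q4_K_M",
        "messages": [{"role": "user", "content": "Hello!"}],
        "temperature": 0.6,
        "top_p": 0.95
      }'
```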