Commit 22f2066
Parent(s): 4d8471b
Fix typo
README.md CHANGED
@@ -126,7 +126,7 @@ You can run EXAONE models locally using llama.cpp by following these steps:
 llama-server -m EXAONE-4.0-32B-Q4_K_M.gguf \
     -c 131072 -fa -ngl 64 \
     --temp 0.6 --top-p 0.95 \
-    --jinja --chat-template-
+    --jinja --chat-template-file chat_template.jinja \
     --host 0.0.0.0 --port 8820 \
     -a EXAONE-4.0-32B-Q4_K_M
 ```
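For reference, one way to sanity-check that the corrected `--chat-template-file` flag is picked up: with the server started as configured above, llama-server's OpenAI-compatible chat endpoint can be queried directly. A minimal sketch, assuming the host, port, and alias from the command above; the prompt content is illustrative only:

```bash
# Send a minimal chat request to the running llama-server instance.
# The "model" field must match the alias registered with -a above.
curl http://localhost:8820/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "EXAONE-4.0-32B-Q4_K_M",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

If the template file is loaded correctly, the server applies it when rendering the chat messages; with the truncated flag from the old line, llama-server would instead fail to parse the argument or fall back to a default template.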