Commit 8e04bab
Parent(s): a9f6593

Fix typo

README.md CHANGED
@@ -110,7 +110,7 @@ You can run EXAONE models locally using llama.cpp by following these steps:
 llama-server -m EXAONE-4.0-32B-Q4_K_M.gguf \
     -c 131072 -fa -ngl 64 \
     --temp 0.6 --top-p 0.95 \
-    --jinja --chat-template-
+    --jinja --chat-template-file chat_template.jinja \
     --host 0.0.0.0 --port 8820 \
     -a EXAONE-4.0-32B-Q4_K_M
 ```