Commit 5c4b94d (1 parent: f21c95c)
Fix typo

README.md CHANGED
@@ -59,8 +59,8 @@ You can run EXAONE models locally using llama.cpp by following these steps:
 2. Download the EXAONE 4.0 model weights in GGUF format.

 ```bash
-huggingface-cli download LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF
---include "EXAONE-4.0-1.2B-
+huggingface-cli download LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF \
+--include "EXAONE-4.0-1.2B-Q4_K_M.gguf" \
 --local-dir .
 ```
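For readers following the README step, the command as it reads after this commit is shown below as one runnable snippet; this is a sketch that assumes `huggingface-cli` is installed (it ships with the `huggingface_hub` Python package) and that you run it from the directory where the GGUF file should land:

```shell
# Download only the Q4_K_M quantized GGUF file from the
# LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF repository into the current directory.
huggingface-cli download LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF \
  --include "EXAONE-4.0-1.2B-Q4_K_M.gguf" \
  --local-dir .
```

The trailing backslashes added by the commit are plain shell line continuations, so the three lines are parsed as a single command.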