Commit b04e383
Parent(s): e1d6f96
Fix typo

README.md CHANGED
@@ -59,8 +59,8 @@ You can run EXAONE models locally using llama.cpp by following these steps:
 2. Download the EXAONE 4.0 model weights in GGUF format.
 
    ```bash
-   huggingface-cli download LGAI-EXAONE/EXAONE-4.0-32B-GGUF
-   --include "EXAONE-4.0-32B-
+   huggingface-cli download LGAI-EXAONE/EXAONE-4.0-32B-GGUF \
+   --include "EXAONE-4.0-32B-Q4_K_M.gguf" \
    --local-dir .
    ```
 
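For context, the command this commit repairs downloads only the Q4_K_M quantization into the current directory. A minimal sketch of the corrected step followed by a quick smoke test with llama.cpp's `llama-cli` (the binary name, its presence on PATH, and the prompt are assumptions, not part of this commit):

```shell
# Corrected download command from this commit: fetch only the Q4_K_M GGUF file
huggingface-cli download LGAI-EXAONE/EXAONE-4.0-32B-GGUF \
    --include "EXAONE-4.0-32B-Q4_K_M.gguf" \
    --local-dir .

# Illustrative: run a short generation with llama.cpp (assumes a built llama-cli on PATH)
llama-cli -m EXAONE-4.0-32B-Q4_K_M.gguf -p "Hello, EXAONE" -n 64
```

The missing trailing backslashes in the old version would have made the shell treat `--include` and `--local-dir` as separate (invalid) commands, and the `--include` pattern was truncated mid-filename, so the download would have failed.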