Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp-compatible project.
<details>
<summary>Click to view download instructions</summary>

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download Brianpuz/gemma-3-27b-it-GGUF --include "gemma-3-27b-it-q4_k_m.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:

```
huggingface-cli download Brianpuz/gemma-3-27b-it-GGUF --include "gemma-3-27b-it-q4_k_m.gguf/*" --local-dir ./
```

You can either specify a new `--local-dir` (e.g. Brianpuz_gemma-3-27b-it-GGUF) or download everything in place (./).

</details>
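The `--include` values above are shell-style glob patterns matched against file paths in the repo. A quick sketch of how the two patterns differ, using Python's stdlib `fnmatch` (which follows the same glob rules; the shard filename below is a hypothetical example, not a real file in this repo):

```python
from fnmatch import fnmatch

# A single-file quant matches the exact filename pattern.
assert fnmatch("gemma-3-27b-it-q4_k_m.gguf", "gemma-3-27b-it-q4_k_m.gguf")

# A split quant is stored as a folder of shards, so the trailing "/*"
# is what picks up every shard inside it (shard name is hypothetical).
shard = "gemma-3-27b-it-q4_k_m.gguf/gemma-3-27b-it-q4_k_m-00001-of-00002.gguf"
assert fnmatch(shard, "gemma-3-27b-it-q4_k_m.gguf/*")

# The single-file pattern does NOT match the shards.
assert not fnmatch(shard, "gemma-3-27b-it-q4_k_m.gguf")
```

This is why the split-file command only differs by the trailing `/*`.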
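If you prefer staying in Python, the same two downloads can be done with `huggingface_hub` directly. A minimal sketch (the helper names are my own; the download only runs when executed as a script):

```python
from huggingface_hub import hf_hub_download, snapshot_download

REPO_ID = "Brianpuz/gemma-3-27b-it-GGUF"

def fetch_single_file(local_dir: str = "./") -> str:
    # Download one specific quant file; returns its local path.
    return hf_hub_download(
        repo_id=REPO_ID,
        filename="gemma-3-27b-it-q4_k_m.gguf",
        local_dir=local_dir,
    )

def fetch_split_files(local_dir: str = "./") -> str:
    # Download every shard of a split quant; returns the local folder.
    return snapshot_download(
        repo_id=REPO_ID,
        allow_patterns=["gemma-3-27b-it-q4_k_m.gguf/*"],
        local_dir=local_dir,
    )

if __name__ == "__main__":
    print(fetch_single_file())
```

Both functions accept the same `local_dir` choice discussed above: a fresh folder or `./`.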