---
base_model: google/gemma-3-27b-it
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

*Produced by [Antigma Labs](https://antigma.ai)*
## llama.cpp quantization
Quantized using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5162">b5162</a>.
Original model: https://huggingface.co/google/gemma-3-27b-it
Run the files directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or with any other llama.cpp-based project.
## Prompt format
```
<bos><start_of_turn>user
{system_prompt}

{prompt}<end_of_turn>
<start_of_turn>model
```
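A quick way to sanity-check a download is to chat with the model locally. This is a minimal sketch, assuming a recent llama.cpp build where the CLI binary is named `llama-cli` and the Q4_K_M file sits in the current directory; `-cnv` starts conversation mode, which applies the chat template embedded in the GGUF so you don't have to type the tokens above by hand:
```
# Chat interactively with the Q4_K_M quant.
# -ngl 99 offloads all layers to the GPU; drop it for CPU-only runs.
./llama-cli -m ./gemma-3-27b-it-q4_k_m.gguf -cnv -ngl 99 -c 4096
```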
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split |
| -------- | ---------- | --------- | ----- |
| [gemma-3-27b-it-q4_k_m.gguf](https://huggingface.co/mohanz/gemma-3-27b-it-Q4_K_M-Q6_K-GGUF/blob/main/gemma-3-27b-it-q4_k_m.gguf) | Q4_K_M | 15.41 GB | False |
| [gemma-3-27b-it-q6_k.gguf](https://huggingface.co/mohanz/gemma-3-27b-it-Q4_K_M-Q6_K-GGUF/blob/main/gemma-3-27b-it-q6_k.gguf) | Q6_K | 20.64 GB | False |

## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
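Note that this repo carries Gemma's gated-access terms (see the metadata above), so anonymous downloads may be rejected until you have accepted the license on Hugging Face and authenticated. If that happens, log in with an access token first:
```
huggingface-cli login
```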
Then, you can target the specific file you want:
```
huggingface-cli download mohanz/gemma-3-27b-it-Q4_K_M-Q6_K-GGUF --include "gemma-3-27b-it-q4_k_m.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download mohanz/gemma-3-27b-it-Q4_K_M-Q6_K-GGUF --include "gemma-3-27b-it-q4_k_m.gguf/*" --local-dir ./
```
You can either specify a new local-dir (e.g. gemma-3-27b-it-Q4_K_M-Q6_K-GGUF) or download everything in place (./).
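To fetch both quants listed in the table with a single command, a wildcard include also works:
```
huggingface-cli download mohanz/gemma-3-27b-it-Q4_K_M-Q6_K-GGUF --include "*.gguf" --local-dir ./
```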
</details>