This is a "self" merge of https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf and https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF.

The official QAT weights released by Google use fp16 (instead of Q6_K) for the token embeddings table, which makes the model take significantly more memory (and storage) than a Q4_0 quant normally would. Instead of quantizing the table myself, I extracted it from Bartowski's quantized model, because I thought using an imatrix quant would give better quality (it doesn't: imatrix is not applied to the token embeddings).
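
For anyone who wants to check which quantization type the embeddings table uses in a given GGUF file, here is a minimal sketch using the `gguf` Python package. The local file names are placeholders for downloaded models; `token_embd.weight` is the standard llama.cpp name for the token embeddings tensor.

```python
# Minimal sketch: inspect the token embeddings tensor in a GGUF file.
# Requires the `gguf` Python package (pip install gguf).
# File names below are placeholders for locally downloaded models.
from gguf import GGUFReader

for path in (
    "gemma-3-12b-it-q4_0.gguf",         # official QAT release (fp16 embeddings)
    "google_gemma-3-12b-it-Q4_0.gguf",  # Bartowski's quant (Q6_K embeddings)
):
    reader = GGUFReader(path)
    for tensor in reader.tensors:
        if tensor.name == "token_embd.weight":
            print(
                f"{path}: {tensor.name} "
                f"type={tensor.tensor_type.name} "
                f"size={tensor.n_bytes / 1e9:.2f} GB"
            )
```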

Here are some perplexity measurements:

| Model | File size ↓ | PPL (wiki.test.raw) ↓ | HellaSwag, 4k tasks ↑ |
|---|---|---|---|
| IQ3_XS (bartowski) | 5.21 GB | 10.0755 +/- 0.08024 | --- |
| This model | 6.89 GB | 9.2637 +/- 0.07216 | 72.925% [71.5366%, 74.2794%] |
| Q4_0 (bartowski) | 6.91 GB | 9.5589 +/- 0.07527 | 73.125% [71.7295%, 74.4761%] |
| QAT Q4_0 (google) | 8.07 GB | 9.2565 +/- 0.07212 | 72.850% [71.4505%, 74.2056%] |
| Q5_K_S (bartowski) | 8.23 GB | 9.8540 +/- 0.08016 | --- |
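
These numbers look like output from llama.cpp's `llama-perplexity` tool. Below is a rough sketch of how similar measurements can be reproduced; the file names and exact flags are my assumptions (check `llama-perplexity --help`), not necessarily the commands used for the table above.

```python
# Sketch of reproducing perplexity and HellaSwag measurements with llama.cpp.
# Flag names are my assumption based on the llama-perplexity binary;
# double-check with `llama-perplexity --help`.
import subprocess

MODEL = "gemma-3-12b-it-qat-q4_0-small.gguf"  # placeholder local file name

# Perplexity on the wikitext-2 raw test set (wiki.test.raw).
subprocess.run(
    ["llama-perplexity", "-m", MODEL, "-f", "wiki.test.raw"],
    check=True,
)

# HellaSwag accuracy on the first 4000 tasks.
subprocess.run(
    [
        "llama-perplexity", "-m", MODEL,
        "-f", "hellaswag_val_full.txt",  # placeholder HellaSwag data file
        "--hellaswag", "--hellaswag-tasks", "4000",
    ],
    check=True,
)
```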

Note that this model ends up smaller than Bartowski's Q4_0. This is because llama.cpp promotes some tensors to Q4_1 when quantizing to Q4_0 with an imatrix, whereas this merge is based on a static quant. I don't understand why Q5_K_S performs worse than the default Q4_0 on this test; I wasn't expecting that outcome. Overall, this merge seems to strike a good balance between model size and perplexity, and I believe these measurements are representative of the overall quality of the model.
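
To see where the size difference comes from, one can tally the quantization type of every tensor in each file. A small sketch (file names are again placeholders): in the imatrix Q4_0 you should see some tensors promoted to Q4_1, while this merge is almost entirely Q4_0 apart from the Q6_K embeddings.

```python
# Sketch: tally quantization types per tensor to see where the file size
# difference comes from. File names are placeholders for local downloads.
from collections import Counter
from gguf import GGUFReader

for path in (
    "gemma-3-12b-it-qat-q4_0-small.gguf",  # this merge (static Q4_0 + Q6_K embeddings)
    "google_gemma-3-12b-it-Q4_0.gguf",     # Bartowski's imatrix Q4_0 (some Q4_1 tensors)
):
    counts = Counter(t.tensor_type.name for t in GGUFReader(path).tensors)
    print(path, dict(counts))
```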
