Explain this model in detail.
#1
by kreier - opened
Please check the work
Please add a link to this 4B instruction-tuned version of the Gemma 3 model in GGUF format, built with Quantization-Aware Training (QAT). The GGUF corresponds to Q4_0 quantization.
https://huggingface.co/google/gemma-3-4b-it-qat-q4_0-gguf
This might need another pull request.
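For reference, a minimal sketch of running this Q4_0 GGUF locally with llama-cpp-python; the exact GGUF file name inside the repository is an assumption, so check the repo's file listing before using it:

```python
# Minimal sketch (not an official recipe) for loading the QAT Q4_0 GGUF
# with llama-cpp-python. The filename below is an assumption; verify it
# against the repository's file listing.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="google/gemma-3-4b-it-qat-q4_0-gguf",
    filename="gemma-3-4b-it-q4_0.gguf",  # assumed file name
    n_ctx=4096,                          # context window; adjust as needed
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what Q4_0 quantization means."}],
)
print(response["choices"][0]["message"]["content"])
```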
kreier changed pull request status to merged