This repository contains 4-bit quantized GGUF files for gemma-3-12b-it-antislop by Sam Paech.

The quants were produced with imatrix calibration, using Bartowski's public imatrix dataset.
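A minimal sketch of downloading and running one of these quants with llama.cpp. The exact `.gguf` filename below is an assumption; check the repo's file list for the real name of the 4-bit file:

```shell
# Download one quant file from this repo
# (the .gguf filename is a guess; list the repo files first to confirm it).
huggingface-cli download Sketchfellow/sam-paech-gemma-3-12b-it-antislop-4bit-GGUF \
  gemma-3-12b-it-antislop-Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI
llama-cli -m gemma-3-12b-it-antislop-Q4_K_M.gguf -p "Hello" -n 64
```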

Format: GGUF
Model size: 11.8B params
Architecture: gemma3
Quantization: 4-bit


This repository (Sketchfellow/sam-paech-gemma-3-12b-it-antislop-4bit-GGUF) is a quantized version of Sam Paech's gemma-3-12b-it-antislop.
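As a rough sanity check on download size: 11.8B parameters at a nominal 4 bits per weight works out to about 5.5 GiB, and real 4-bit GGUF schemes land somewhat above that because of group scales and a few tensors kept at higher precision. A small sketch of the arithmetic:

```python
params = 11.8e9          # parameter count from this model card
bits_per_weight = 4.0    # nominal; actual 4-bit GGUF quants average slightly more

size_bytes = params * bits_per_weight / 8
size_gib = size_bytes / 2**30
print(f"~{size_gib:.1f} GiB lower bound")  # ~5.5 GiB
```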