Llama-3.2-1B-2bit-gguf / quant_config.json
codewithdark
Add 2-bit Q2_K GGUF model quantized from meta-llama/Llama-3.2-1B (2025-06-25 10:08:02)
6c2cf6a verified
{
"bits": 2,
"quant_type": "Q2_K",
"group_size": 128
}
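The config above describes a llama.cpp-style Q2_K k-quant: roughly 2 bits per weight, with weights grouped into blocks of 128 that share quantization scales. A minimal sketch of parsing such a config with the standard library (the inline string simply mirrors the JSON shown above; in practice you would read the file from a local checkout of the repo):

```python
import json

# Inline copy of quant_config.json from this repo; normally you would
# open("quant_config.json") from a local download instead.
config_text = '{"bits": 2, "quant_type": "Q2_K", "group_size": 128}'
config = json.loads(config_text)

# bits       -> nominal bit width per weight (2 for Q2_K)
# quant_type -> the GGUF k-quant scheme name
# group_size -> number of weights sharing one set of quant scales
print(config["bits"], config["quant_type"], config["group_size"])
```

This prints `2 Q2_K 128`, matching the values in the file.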