GGUF Generation Script

#3
by RonanMcGovern

Howdy, could you kindly share how this GGUF was made, i.e. the scripts? Thanks

Yeah sure, it's just the usual:

python3 convert_hf_to_gguf.py --outtype ${DTYPE} --outfile /models_out/${MODEL}-GGUF/${MODEL}-${DTYPE}.gguf /models/${MODEL}/

where DTYPE is one of f16, f32, or bf16, and MODEL is the name of the model I'm converting
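
To make that concrete (the model name here is a purely hypothetical placeholder, assuming the original HF weights live under /models/${MODEL} and you're running from a llama.cpp checkout):

MODEL=Meta-Llama-3-8B-Instruct   # hypothetical name, substitute whatever you're converting
DTYPE=f16
mkdir -p /models_out/${MODEL}-GGUF   # make sure the output directory exists
python3 convert_hf_to_gguf.py --outtype ${DTYPE} --outfile /models_out/${MODEL}-GGUF/${MODEL}-${DTYPE}.gguf /models/${MODEL}/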

Then I create my imatrix:

./llama-imatrix -m /models_out/${MODEL}-GGUF/${MODEL}-${DTYPE}.gguf -f /training_data/calibration_datav3.txt --output-file /models/${MODEL}-GGUF/${MODEL}.imatrix
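
Reusing the MODEL/DTYPE variables from the conversion step, calibration_datav3.txt is just a plain text file of calibration data mounted at /training_data/. If you have a GPU, the standard -ngl flag offloads layers and speeds this step up a lot, e.g.:

# runs the model over the calibration text and records activation statistics per tensor
./llama-imatrix -m /models_out/${MODEL}-GGUF/${MODEL}-${DTYPE}.gguf -f /training_data/calibration_datav3.txt --output-file /models/${MODEL}-GGUF/${MODEL}.imatrix -ngl 99   # drop -ngl 99 for CPU-only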

Then I run:

./llama-quantize --imatrix /models/${MODEL}-GGUF/${MODEL}.imatrix /models/${MODEL}-GGUF/${MODEL}-${DTYPE}.gguf /models/${MODEL}-GGUF/${MODEL}-${QUANT}.gguf ${QUANT}

where QUANT is the quant type I'm making (Q4_K_M, for example)
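
Since the same f16 GGUF and imatrix get reused for every quant size, in practice this step is just a loop over the quant types you want. A rough sketch of that pattern (not my exact script, same hypothetical placeholders as above):

MODEL=Meta-Llama-3-8B-Instruct   # hypothetical placeholder
DTYPE=f16
for QUANT in Q4_K_M Q5_K_M Q6_K Q8_0; do
  ./llama-quantize --imatrix /models/${MODEL}-GGUF/${MODEL}.imatrix /models/${MODEL}-GGUF/${MODEL}-${DTYPE}.gguf /models/${MODEL}-GGUF/${MODEL}-${QUANT}.gguf ${QUANT}
done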

@bartowski How do you pick the initial DTYPE? I know some models are originally provided as f32 or bf16, but I usually just go with f16, and it seems to work fine (f32 takes too much disk space, and I've read somewhere that bf16 isn't yet fully supported in llama.cpp and might require a newer GPU, too). Are there any use cases for starting with f32 or bf16 instead of f16, with a measurable quality difference in the final quants like Q6_K?

Oh sorry, I missed this

Just go with f16, imo

I used to say f32 to guarantee quality (bf16 -> fp16 isn't lossless), but I've come around for a few reasons:

1. llama.cpp does internal conversions to f16 at many points in the process anyway, so it doesn't matter much

2. values that are too small for fp16 to represent probably don't contribute much to the final output anyway and can pretty safely be flushed to 0 (see the quick check below)

And besides, with quantization, the precision loss going from bf16 to fp16 is orders of magnitude smaller than the loss from quantizing down to 8-bit.
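
Quick numeric check of point 2, using numpy as a stand-in (numpy has no bf16 type, but bf16 shares fp32's 8-bit exponent range, which is the part that matters here):

python3 -c "import numpy as np; print(np.float16(np.float32(1e-9)))"   # prints 0.0: 1e-9 is representable in bf16/fp32 but sits below fp16's smallest subnormal (~6e-8), so it flushes to zero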
