llama.cpp - GGUF support
#1
by Doctor-Chad-PhD
Hi,
Is this model not supported by llama.cpp (gguf format)?
Or is there an error in the implementation?
I'm getting this message when trying to quantize to GGUF at the moment:

```
File "convert_hf_to_gguf.py", line 2553, in set_gguf_parameters
    logit_scale = self.hparams["hidden_size"] / self.hparams["dim_model_base"]
                                ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
KeyError: 'dim_model_base'
```
Thank you for your time.
Doctor-Chad-PhD changed discussion title from "GGUF support" to "llama.cpp - GGUF support"
We have already fixed this error by adding `dim_model_base = 256` to the model's config. You can update the model and try again.
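For anyone who already has a local checkout and doesn't want to re-download, a minimal sketch of applying the same fix by hand is below. It assumes the missing key simply needs to be present in the model's `config.json` (as the traceback shows, `convert_hf_to_gguf.py` computes `logit_scale = hidden_size / dim_model_base`); the `patch_config` helper name and the demo values are illustrative, not part of the model or llama.cpp.

```python
import json
import tempfile
from pathlib import Path

def patch_config(config_path: Path, value: int = 256) -> dict:
    """Add the dim_model_base key that convert_hf_to_gguf.py reads
    when computing logit_scale = hidden_size / dim_model_base."""
    config = json.loads(config_path.read_text())
    # setdefault leaves the key untouched if the repo was already updated
    config.setdefault("dim_model_base", value)
    config_path.write_text(json.dumps(config, indent=2))
    return config

# Demonstration on a throwaway config with a placeholder hidden_size;
# in practice, point config_path at the real model's config.json.
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "config.json"
    path.write_text(json.dumps({"hidden_size": 2304}))
    patched = patch_config(path)
    print(patched["hidden_size"] / patched["dim_model_base"])  # the logit_scale value
```

After patching, re-running the conversion script on the same directory should get past the `KeyError`.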