llama.cpp - GGUF support

#1
by Doctor-Chad-PhD - opened

Hi,

Is this model not supported by llama.cpp (GGUF format)?
Or is there an error in the implementation?

I'm currently getting this error when trying to quantize to GGUF:

  File "convert_hf_to_gguf.py", line 2553, in set_gguf_parameters
    logit_scale = self.hparams["hidden_size"] / self.hparams["dim_model_base"]
                                                ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
KeyError: 'dim_model_base'

Thank you for your time.

Doctor-Chad-PhD changed discussion title from GGUF support to llama.cpp - GGUF support
OpenBMB org

We have already fixed this error by adding dim_model_base = 256 to the model config. You can update the model and try again.
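For anyone hitting this before updating the model files, a possible local workaround is to fall back to a default when the key is missing (a hypothetical sketch, not the actual convert_hf_to_gguf.py patch; the fallback value 256 is the one the maintainers added upstream, and compute_logit_scale is an illustrative helper name):

```python
# Hypothetical guard for the failing line in set_gguf_parameters:
# fall back to 256 (the value later added to the config upstream)
# when the checkpoint's config.json lacks "dim_model_base".
def compute_logit_scale(hparams: dict) -> float:
    dim_model_base = hparams.get("dim_model_base", 256)  # assumed default
    return hparams["hidden_size"] / dim_model_base
```

The cleaner fix, as noted above, is simply to re-download the updated config so the key is present.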

I don't understand why you didn't just merge my commit.
I created that commit myself; then your team member "xcjthu" copied it (6 hours later) and merged it as his own commit here.
Then you closed my original commit and said it had already been fixed?
