Convert model to GGUF or a format compatible with LM Studio

#66
by anthropoleo - opened

Hi,

Does anyone know how to convert a fine-tuned version of Bert-base-uncased for text classification into a format that allows me to load it using LM Studio or Ollama?

After training the model I pushed it to the Hub, but now I would like to serve it in a way that's friendly to its users, not just running the model from a notebook.

If it helps, these are the files the model created when I pushed it:

[Screenshot, 2024-04-05: file listing of the model repository]

Hi @anthropoleo
BERT is a relatively small model and is not auto-regressive; in most cases a simple Python backend such as transformers suffices, even for running the model locally on CPU.
To convert to GGUF, I would advise you to open an issue on the ggml / llama.cpp repositories on GitHub and see if the maintainers are keen to add BERT support!
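To illustrate the suggestion above, here is a minimal sketch of serving the fine-tuned classifier locally with the transformers `pipeline` instead of converting to GGUF. The repo id `anthropoleo/bert-base-uncased-finetuned` is a hypothetical placeholder — substitute your own model from the Hub. The `pick_label` helper just shows what the pipeline does internally (argmax over the classifier logits):

```python
def classify(texts, model_id="anthropoleo/bert-base-uncased-finetuned"):
    # Hypothetical repo id above -- replace with your own Hub model.
    # Requires: pip install transformers torch
    from transformers import pipeline
    clf = pipeline("text-classification", model=model_id, device=-1)  # device=-1 -> CPU
    return clf(texts)  # e.g. [{"label": "...", "score": 0.98}]

def pick_label(logits, id2label):
    # What the pipeline does under the hood: argmax over raw logits,
    # then map the winning index to its label name via the model config.
    best = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[best]

if __name__ == "__main__":
    print(classify(["I loved this movie!"]))
```

You could wrap `classify` in a small HTTP server (Flask, FastAPI) to make it user-friendly without any notebook involved.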

Hey, did you find any way to do it?

Hi @anthropoleo , I'm in the same predicament as you, did you find a solution? I'd be grateful if you could share it.
