I'm curious, how did you convert it to GGUF?

#3
by prayanksai - opened

I have been unable to convert this to GGUF using llama.cpp. Newbie here. Any tips?
Hitting this error: ERROR:hf-to-gguf:Model Mistral3ForConditionalGeneration is not supported


The original version is multi-modal; it looks like llama.cpp needs an update to work with it.
Suggest you open an issue on the llama.cpp GitHub asap.

RE: Quants.
I used a "bootleg" version of the source files with the "vision" components removed.
Someone converted the vLLM release to safetensors with config files, and I used that to create the GGUFs.

Source: (this is one version)
https://huggingface.co/mrfakename/mistral-small-3.1-24b-instruct-2503-hf
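For anyone following along, the usual llama.cpp flow from a text-only safetensors checkpoint looks roughly like the sketch below. The directory and file names are illustrative assumptions (not the author's exact setup), and the commands are echoed as a dry run so you can see the shape of each step before committing to a multi-GB conversion.

```shell
# Sketch of the typical llama.cpp GGUF conversion flow.
# Assumes the text-only safetensors checkpoint has already been downloaded
# into MODEL_DIR and that you are inside a built llama.cpp checkout.
# All names below are illustrative assumptions.
MODEL_DIR=mistral-small-3.1-24b-instruct-2503-hf
OUT_F16=mistral-small-3.1-24b-f16.gguf
OUT_Q4=mistral-small-3.1-24b-Q4_K_M.gguf

# Step 1: convert the HF safetensors checkpoint to a full-precision GGUF.
echo "python convert_hf_to_gguf.py $MODEL_DIR --outfile $OUT_F16 --outtype f16"

# Step 2: quantize the GGUF (Q4_K_M shown here as a common default).
echo "./llama-quantize $OUT_F16 $OUT_Q4 Q4_K_M"
```

Drop the echo wrappers to actually run the two steps; step 1 is also where the unsupported-architecture error above is raised, which is why the vision-stripped checkpoint is needed.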
