No Vision Layers?

#3
by chibop - opened

Thanks for this.
Is this only for text? Is vision omitted?

Hi! You can use the mmproj-model-f16.gguf model from here: https://huggingface.co/lmstudio-community/gemma-3-27b-it-GGUF/tree/main. Simply place it in the same folder as the gemma-3-27b-it-abliterated.q4_k_m.gguf model. I was able to run it normally in LM Studio, and it can recognize image content.
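If you want to verify the projector is actually being used, one quick check is to send an image through LM Studio's local OpenAI-compatible server. A minimal sketch, assuming the server is enabled on the default port 1234 and the model identifier below matches whatever LM Studio shows for the loaded GGUF (both are assumptions, not part of the original post):

```python
import base64
import requests

# Assumptions: LM Studio's local server is listening on the default
# http://localhost:1234, and MODEL_ID matches the identifier LM Studio
# shows for the loaded abliterated Gemma 3 GGUF (hypothetical name here).
MODEL_ID = "gemma-3-27b-it-abliterated"
IMAGE_PATH = "test.jpg"

with open(IMAGE_PATH, "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": MODEL_ID,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
}

resp = requests.post("http://localhost:1234/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If the mmproj file was picked up, the reply should describe the image; if not, the server will typically reject the image input.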

Thanks! But what if I use Ollama?

ditto

"""Hi! You can use the mmproj-model-f16.gguf model from here: https://huggingface.co/lmstudio-community/gemma-3-27b-it-GGUF/tree/main. Simply place it together with the gemma-3-27b-it-abliterated.q4_k_m.gguf model. I was able to run it normally in LM Studio, and it can recognize image content.
"""
Worked flawlessly in LM Studio, thanks. A bit tricky in Ollama though.

"""Hi! You can use the mmproj-model-f16.gguf model from here: https://huggingface.co/lmstudio-community/gemma-3-27b-it-GGUF/tree/main. Simply place it together with the gemma-3-27b-it-abliterated.q4_k_m.gguf model. I was able to run it normally in LM Studio, and it can recognize image content.
"""
worked flawlessly in LM studio, thanks. Bit tricky in OLLAMA though.

What did you do??

Did anyone figure out Ollama?

Nope. The most helpful thread I've found among the Ollama repo discussions is this one: https://github.com/ollama/ollama/issues/9967
But I'm still unable to load the mmproj file along with gemma-3-27b-it-abliterated.q4_k_m.gguf; I keep getting the error "Failed to create new sequence: failed to process inputs: this model is missing data required for image input".
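For reference, this is the kind of request that triggers that error. A minimal sketch against Ollama's /api/chat endpoint, assuming Ollama is on its default port and the GGUF was imported under the (hypothetical) name below; with only the text GGUF imported and no projector attached, sending the `images` field is what produces the "missing data required for image input" response:

```python
import base64
import requests

# Assumptions: Ollama is running on the default http://localhost:11434 and
# the abliterated GGUF was imported via a Modelfile under this hypothetical
# name. Without the mmproj projector attached to the imported model, this
# request fails with "this model is missing data required for image input".
MODEL_NAME = "gemma3-abliterated"
IMAGE_PATH = "test.jpg"

with open(IMAGE_PATH, "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": MODEL_NAME,
    "stream": False,
    "messages": [
        {
            "role": "user",
            "content": "What is in this image?",
            "images": [image_b64],  # Ollama accepts base64 images per message
        }
    ],
}

resp = requests.post("http://localhost:11434/api/chat", json=payload)
print(resp.status_code, resp.json())
```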
