Ollama deployment
#7 opened by sedatkaradag
Hi! Thanks for the great work!
I wonder if we can deploy this model to Ollama right now, or is there any method to do it? Best wishes!
llama.cpp support for vision models usually arrives late or not at all, except for very popular models (e.g. Gemma, Mistral).
You need to use vLLM with GPTQ/AWQ quantization, which does support vision models.
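For reference, a minimal sketch of serving an AWQ-quantized vision model with vLLM's Python API. The model ID and prompt template below are placeholders, not from this thread; check the actual model card for the correct checkpoint and chat format:

```python
# Sketch: running an AWQ-quantized vision-language model with vLLM.
# The model ID "your-org/your-vl-model-AWQ" is hypothetical; replace it
# with the real AWQ (or GPTQ) checkpoint you want to serve.
from vllm import LLM, SamplingParams
from PIL import Image

# quantization="awq" loads AWQ-quantized weights; use "gptq" for GPTQ checkpoints.
llm = LLM(model="your-org/your-vl-model-AWQ", quantization="awq")

image = Image.open("example.jpg")
params = SamplingParams(max_tokens=128)

# vLLM accepts images through multi_modal_data alongside the text prompt.
# The exact placement of the <image> token is model-specific.
outputs = llm.generate(
    {
        "prompt": "USER: <image>\nDescribe this image.\nASSISTANT:",
        "multi_modal_data": {"image": image},
    },
    params,
)
print(outputs[0].outputs[0].text)
```

The same checkpoint can also be exposed over vLLM's OpenAI-compatible HTTP server if you prefer an Ollama-style local endpoint.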