Where is the mmproj file?
Thanks for your excellent MiMo-VL-7B-RL model.
I built a MiMo-VL-7B-RL-q4_k_m.gguf model at https://huggingface.co/zhouwg/kantv/tree/main using the official conversion tools provided in llama.cpp.
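For reference, the conversion followed the usual llama.cpp GGUF workflow, roughly as sketched below (paths and output file names are placeholders; the flags assume a recent llama.cpp checkout):

```sh
# Convert the HF checkpoint to a full-precision GGUF
# (./MiMo-VL-7B-RL is a local clone of the HF repo; name is a placeholder).
python convert_hf_to_gguf.py ./MiMo-VL-7B-RL \
    --outfile MiMo-VL-7B-RL-f16.gguf --outtype f16

# Quantize the f16 GGUF down to q4_k_m with the llama-quantize tool
./llama-quantize MiMo-VL-7B-RL-f16.gguf MiMo-VL-7B-RL-q4_k_m.gguf Q4_K_M
```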
Then I compared it with Qwen1.5-1.8B, Qwen2.5-3B, Qwen3-4B, Qwen3-8B, Gemma-3-4B, Gemma-3-12B, Llama-3.1-Nemotron-Nano-4B, Phi-4-mini-reasoning, and DeepSeek-R1-0528-Qwen3-8B on my Qualcomm Snapdragon 8 Elite based phone. To my surprise, it achieved the second-best overall experience at the moment:
For some questions, MiMo-VL-7B-RL is much better than DeepSeek-R1-0528-Qwen3-8B and the Qwen series, while Gemma-3-4B is a little better than MiMo-VL-7B-RL.
MiMo-VL-7B-RL also gave some poor answers to simple questions where DeepSeek-R1-0528-Qwen3-8B did better; overall, Google's Gemma-3-4B achieved the best experience.
My questions are:
As an Image-Text-to-Text multimodal model, where is the mmproj model file? Do we need to create the mmproj model file manually, and if so, how?
Thanks.
Hi, please follow the Qwen2.5-VL recipe for deploying with llama.cpp.
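For anyone following along, here is a minimal sketch of that recipe, assuming a recent llama.cpp checkout with multimodal (mtmd) support; file names are placeholders, and the `--mmproj` export flag may differ on older versions of `convert_hf_to_gguf.py`:

```sh
# Export the multimodal projector (mmproj) from the same HF checkpoint;
# --mmproj tells the converter to emit the vision projector instead of the LLM
# (assumes a recent convert_hf_to_gguf.py that supports this flag).
python convert_hf_to_gguf.py ./MiMo-VL-7B-RL --mmproj \
    --outfile mmproj-MiMo-VL-7B-RL-f16.gguf --outtype f16

# Run image+text inference with the quantized LLM GGUF plus the mmproj file
./llama-mtmd-cli -m MiMo-VL-7B-RL-q4_k_m.gguf \
    --mmproj mmproj-MiMo-VL-7B-RL-f16.gguf \
    --image test.png -p "Describe this image."
```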
Thanks for your reminder and help, I'll try.