qwen2.5-vl-3b-it-gguf

  • for text / image-text-to-text generation (see the standalone sketch below)
  • works as a text encoder
  • compatible with both comfyui-gguf and gguf-node
  • example model supported: omnigen
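
A minimal standalone sketch, offered as an assumption rather than something from this card (the card itself targets ComfyUI workflows): loading one of the quantized files with llama-cpp-python for plain text generation. The file name, context size, and GPU offload settings below are placeholders; substitute the quant you actually downloaded.

```python
# Sketch: run a quantized GGUF of this model with llama-cpp-python.
# The model_path is a placeholder; pick the quant file you downloaded
# from the repo (e.g. a Q4 variant).
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-vl-3b-it-Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe what a GGUF file is."}],
)
print(out["choices"][0]["message"]["content"])
```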
GGUF
  • model size: 3.09B params
  • architecture: qwen2vl

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit


Model tree for chatpig/qwen2.5-vl-3b-it-gguf
  • this model is one of 59 quantized variants in the tree