Qwen2.5-VL-7B-Instruct-FP4

An FP4 (E2M1) quantization of Qwen/Qwen2.5-VL-7B-Instruct, intended for fast, low-accuracy-loss inference on NVIDIA Blackwell GPUs, which accelerate FP4 natively.
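To illustrate what FP4 storage means, here is a minimal, dependency-free sketch of round-to-nearest quantization onto the E2M1 FP4 value grid with a single per-tensor scale. This is illustrative only; the actual recipe used for this checkpoint (per-block scales, calibration, which layers stay in higher precision) is not specified by the card, and all names below are hypothetical.

```python
# E2M1 FP4 can represent the magnitudes 0, 0.5, 1, 1.5, 2, 3, 4, 6
# (plus their negatives). Hypothetical sketch, not the model's
# actual quantization recipe.
FP4_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
FP4_VALUES = sorted(FP4_VALUES + [-v for v in FP4_VALUES if v > 0])

def quantize_fp4(weights):
    """Scale so the largest magnitude maps to +/-6 (the E2M1 max),
    then round each value to the nearest representable FP4 value."""
    amax = max(abs(w) for w in weights) or 1.0
    scale = amax / 6.0
    q = [min(FP4_VALUES, key=lambda v: abs(w / scale - v)) for w in weights]
    return q, scale

def dequantize_fp4(q, scale):
    """Recover approximate float weights from FP4 codes and the scale."""
    return [v * scale for v in q]

weights = [0.07, -0.31, 0.5, -1.2]
q, scale = quantize_fp4(weights)
approx = dequantize_fp4(q, scale)
```

Real deployments (e.g. TensorRT-LLM on Blackwell) use fine-grained block scales rather than one per-tensor scale, which is what keeps the accuracy loss low.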

Downloads last month: 13
Format: Safetensors
Model size: 4.8B params
Tensor types: F32, FP16, U8

Model tree for asi992h/Qwen2.5-VL-7B-Instruct-FP4

Quantized from Qwen/Qwen2.5-VL-7B-Instruct (one of 69 quantized variants of the base model).