A 4-bit GPTQ-quantized version of FuseChat-Qwen-2.5-7B-Instruct, packaged for on-device inference with the Private LLM app.
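
If the repository follows the standard transformers-compatible GPTQ layout (an assumption — these weights are packaged for the Private LLM app's own on-device runtime, so this may not hold for this repo), a GPTQ Int4 checkpoint can in principle be loaded with Hugging Face `transformers` plus the `optimum` and `auto-gptq` backends. A minimal sketch under those assumptions:

```python
# Hypothetical loading sketch — assumes a transformers-compatible GPTQ
# checkpoint and that optimum + auto-gptq are installed. The Private LLM
# app itself does not use this code path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "numen-tech/FuseChat-Qwen-2.5-7B-Instruct-GPTQ-Int4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPTQ quantization config is read from the checkpoint metadata.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Qwen-2.5-style models ship a chat template; apply it to build the prompt.
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Strip the prompt tokens and decode only the generated continuation.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```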
