EXL2 Quantizations of Qwen2.5-3B-Instruct
Quantized with exllamav2 release 0.2.6.
Original model: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct
Target bitrate: 8.0 bits per weight, lm_head: 8.0 bits
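A minimal inference sketch with the exllamav2 Python API, assuming the exllamav2 package (same 0.2.x line used for quantization) is installed and the EXL2 weights have been downloaded locally; the model directory path below is a placeholder:

```python
# Minimal sketch: load the EXL2-quantized model and generate text with exllamav2.
# Assumes exllamav2 (0.2.x) is installed and the quantized weights are on disk.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "./Qwen2.5-3B-Instruct-exl2-8.0bpw"  # placeholder local path

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate cache lazily, then autosplit across available GPUs
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
output = generator.generate(
    prompt="Give me a short introduction to large language models.",
    max_new_tokens=200,
)
print(output)
```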