
exl2 quant of Sao10K/72B-Qwen2.5-Kunou-v1

I noticed nobody had uploaded exl2 quants yet, so here's my 6.5bpw quant of 72B-Qwen2.5-Kunou-v1.

  • measurement.json

I'll probably delete this once the big quanters get around to it.
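For sizing purposes, here's a back-of-the-envelope estimate of the weight footprint of a 6.5 bpw quant of a 72B-parameter model (a rough sketch only: it ignores the KV cache, activations, and per-tensor overhead, and takes the 72e9 parameter count from the base model's name):

```python
# Rough weight-only memory estimate for this exl2 quant.
# 72e9 parameters is assumed from the Qwen2.5-72B base; 6.5 is this quant's bpw.
params = 72e9
bits_per_weight = 6.5

weight_bytes = params * bits_per_weight / 8  # bits -> bytes
print(f"~{weight_bytes / 1e9:.1f} GB for weights alone")  # ~58.5 GB
```

So plan for roughly 60 GB of VRAM before accounting for context length.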


Base model: Qwen/Qwen2.5-72B