FimbulHermes-15B-v0.1 EXL2 6.5bpw

This is a 6.5bpw EXL2 quant of FimbulHermes-15B-v0.1, published as steinzer-narayan/fimbulhermes-15B-v0.1_exl2_6.5bpw.
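
Below is a minimal inference sketch for loading an EXL2 quant like this one with the exllamav2 Python library, following the pattern used in its example scripts. The local model path, prompt, and sampling settings are placeholders, and the exact class and method names may vary between exllamav2 versions.

```python
# Minimal exllamav2 inference sketch (based on exllamav2's example scripts;
# the API may differ across library versions).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Placeholder: path to a local download of this repository.
model_dir = "/models/fimbulhermes-15B-v0.1_exl2_6.5bpw"

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)           # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Placeholder sampling settings.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

prompt = "Write a short greeting."
output = generator.generate_simple(prompt, settings, 128)
print(output)
```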
