# LiberatedHermes-2-Pro-Mistral-7B-HQQ

This is a 4-bit quantization of LiberatedHermes-2-Pro-Mistral-7B using HQQ (Half-Quadratic Quantization).

## Load Script

```python
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

model_id  = "macadeliccc/LiberatedHermes-2-Pro-Mistral-7B-HQQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model     = HQQModelForCausalLM.from_quantized(model_id)
```
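
Once loaded, the model behaves like a standard `transformers` causal LM. Below is a minimal generation sketch (not part of the original card): it assumes the tokenizer ships a ChatML chat template, as Hermes 2 Pro models are trained on ChatML, and the prompt text is purely illustrative.

```python
import torch

# Hypothetical prompt for illustration.
messages = [{"role": "user", "content": "Explain HQQ quantization in one paragraph."}]

# Build input ids from the chat template (assumes the tokenizer defines one).
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)  # assumes the quantized model exposes .device like other transformers models

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```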