# Nous-Hermes-2-Mixtral-8x7B-DPO-HQQ
This model is part of a series of HQQ tests. I make no claims about the performance of this model, and it may well change or be deleted. This is an extreme example of quantization.
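For reference, a checkpoint like this is typically produced with the HQQ library roughly as sketched below. The source model name and the `nbits`/`group_size` values are illustrative assumptions, not the settings actually used for this checkpoint.

```python
# Sketch of how an HQQ-quantized checkpoint is typically produced.
# The source model and quantization settings below are assumptions
# for illustration, not the exact recipe behind this repo.
from hqq.engine.hf import HQQModelForCausalLM
from hqq.core.quantize import BaseQuantizeConfig

model_id = "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO"
model = HQQModelForCausalLM.from_pretrained(model_id)

# Aggressive low-bit settings; smaller nbits/group_size trade quality for size.
quant_config = BaseQuantizeConfig(nbits=2, group_size=16)
model.quantize_model(quant_config=quant_config)
```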
```python
import torch
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model from the Hub.
tokenizer = AutoTokenizer.from_pretrained(
    "macadeliccc/Nous-Hermes-2-Mixtral-8x7B-DPO-HQQ",
    trust_remote_code=True,
)
model = HQQModelForCausalLM.from_pretrained(
    "macadeliccc/Nous-Hermes-2-Mixtral-8x7B-DPO-HQQ",
    torch_dtype=torch.float16,
    device_map="auto",
)
```
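Once loaded, generation goes through the standard transformers API. A minimal sketch follows; the prompt and sampling settings are arbitrary, and it assumes a chat template is bundled with the tokenizer (Nous-Hermes-2 models use ChatML).

```python
# Minimal generation sketch using the standard transformers API.
# Assumes the tokenizer ships a chat template (ChatML for Nous-Hermes-2).
messages = [{"role": "user", "content": "Explain HQQ quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```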
Base model: mistralai/Mixtral-8x7B-v0.1