Model Card for Mixtral-8x22B-Instruct-v0.1 (GPTQ 8-bit)

This is an 8-bit GPTQ quantization of Mixtral-8x22B-Instruct-v0.1, produced with Hugging Face Optimum.
See the original Mixtral-8x22B-Instruct-v0.1 model card for more information about the base model.
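
For reference, the sketch below shows how an 8-bit GPTQ checkpoint of this kind can be produced with transformers and Optimum. The base model id is the real Mixtral repository, but the calibration dataset, group size defaults, and output path are illustrative assumptions, not necessarily the exact settings used for this checkpoint.

```python
# Illustrative sketch only: the calibration dataset and output path are
# assumptions, not the exact recipe used for this checkpoint.
# Requires: pip install transformers optimum auto-gptq accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_id = "mistralai/Mixtral-8x22B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(base_id)

# 8-bit GPTQ quantization, calibrated on the "c4" dataset (assumed here).
gptq_config = GPTQConfig(bits=8, dataset="c4", tokenizer=tokenizer)

# Quantizing an 8x22B MoE model needs substantial GPU memory;
# device_map="auto" spreads the layers across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=gptq_config,
    device_map="auto",
)

# Save the quantized weights and tokenizer (output path is illustrative).
model.save_pretrained("Mixtral-8x22B-Instruct-v0.1-GPTQ-8bit")
tokenizer.save_pretrained("Mixtral-8x22B-Instruct-v0.1-GPTQ-8bit")
```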

How to load
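
A minimal loading sketch with transformers is shown below. The repository id is a placeholder for this model's actual Hub id, and optimum plus auto-gptq must be installed so the GPTQ kernels are available; the quantization config itself is read from the checkpoint.

```python
# Minimal sketch: replace the placeholder repo id with this model's actual
# Hugging Face Hub id. Requires: pip install transformers optimum auto-gptq accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<namespace>/Mixtral-8x22B-Instruct-v0.1-GPTQ-8bit"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(model_id)

# The GPTQ quantization config is stored in the checkpoint, so a plain
# from_pretrained call loads the 8-bit weights; device_map="auto" shards
# the experts across the available GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Quick generation check using the Mixtral instruct chat template.
messages = [{"role": "user", "content": "Summarize GPTQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```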
