This is a 4-bit OmniQuant-quantized version of Mixtral-8x7B-Instruct-v0.1. Note that the embedding and MoE gate weights are left unquantized in this version.
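
To illustrate what is being described, here is a minimal sketch of weight-only 4-bit quantization with per-group affine parameters that skips the embedding and MoE gate weights. This is not OmniQuant itself (OmniQuant additionally *learns* clipping thresholds and equivalent transformations); the group size and the parameter-name checks are assumptions based on the standard Hugging Face Mixtral layout.

```python
import torch

def fake_quantize_4bit(w: torch.Tensor, group_size: int = 128) -> torch.Tensor:
    """Simulate asymmetric 4-bit quantization with per-group scale/zero-point.

    Simplified sketch only: OmniQuant also learns clipping and equivalent
    transformation parameters, which are omitted here. Assumes the weight's
    element count is divisible by group_size.
    """
    orig_shape = w.shape
    w = w.reshape(-1, group_size)
    w_min = w.min(dim=1, keepdim=True).values
    w_max = w.max(dim=1, keepdim=True).values
    scale = (w_max - w_min).clamp(min=1e-8) / 15.0   # 4 bits -> 16 levels (0..15)
    zero = torch.round(-w_min / scale)
    q = torch.clamp(torch.round(w / scale) + zero, 0, 15)
    w_dq = (q - zero) * scale                        # dequantize back to float
    return w_dq.reshape(orig_shape)

def should_quantize(name: str) -> bool:
    """Skip the embedding and the MoE router (gate), as noted above.

    Name patterns assume the Hugging Face Mixtral parameter layout, e.g.
    'model.embed_tokens.weight' and '...block_sparse_moe.gate.weight'.
    """
    return "embed_tokens" not in name and "gate" not in name

# Hypothetical usage: quantize all eligible 2-D weights in a state dict.
# state_dict = model.state_dict()
# for name, w in state_dict.items():
#     if w.ndim == 2 and should_quantize(name):
#         state_dict[name] = fake_quantize_4bit(w)
```

Keeping the embedding and gate in full precision is a common choice because both are small relative to the expert weights, and the router's outputs directly decide expert assignment, so it tends to be sensitive to quantization error.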