4-bit OmniQuant quantized version of Mistral-7B-Instruct-v0.3.
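A minimal sketch of fetching the quantized weights from the Hugging Face Hub, assuming a placeholder repository id (`your-org/Mistral-7B-Instruct-v0.3-OmniQuant-4bit`); substitute this repository's actual id. The downloaded files can then be loaded by whichever runtime supports the OmniQuant 4-bit format.

```python
# Minimal sketch: download the 4-bit OmniQuant weights from the Hub.
# The repo id below is a placeholder assumption; replace it with this repository's id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="your-org/Mistral-7B-Instruct-v0.3-OmniQuant-4bit",  # placeholder repo id
)
print(f"Quantized weights downloaded to: {local_dir}")
```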