Quantized with exllamav3 (v0.0.4) using its default quantization process.
- Original model: DavidAU/Mistral-Small-3.2-46B-The-Brilliant-Raconteur-Instruct-2506; refer to the original model card for more details on the model.
- exllamav3: https://github.com/turboderp-org/exllamav3
- Base model: mistralai/Mistral-Small-3.1-24B-Base-2503
EXL3 quants available:
- 3.5bpw
- Go to "Files and versions", then click on "Main" to choose your quant