Quantized using the default exllamav3 (0.0.4) quantization process.
- Original model: Nitral-AI/Violet_Magcap-12B (https://huggingface.co/Nitral-AI/Violet_Magcap-12B); see that page for details on the model itself.
- exllamav3: https://github.com/turboderp-org/exllamav3
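For reference, a conversion run along these lines is how exllamav3 produces EXL3 quants. This is only a minimal sketch: the convert.py flag names and the paths below are assumptions, so consult the exllamav3 README for the actual interface and defaults.

```sh
# Minimal sketch of an exllamav3 conversion run.
# Flag names and paths are assumptions; see the exllamav3 repo
# (https://github.com/turboderp-org/exllamav3) for the real interface.
python convert.py \
    -i /models/Violet_Magcap-12B \
    -o /models/Violet_Magcap-12B-exl3-4.0bpw \
    -b 4.0
```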
EXL3 quants available:
- 3.5bpw, 4.0bpw, 5.0bpw, 6.0bpw
- On the model page, open the "Files and versions" tab and use the branch selector (it defaults to "main") to pick the quant you want
- Example download command: `git clone -b 4.0bpw https://huggingface.co/s1arsky/Violet_Magcap-12B-EXL3` (a non-git alternative is sketched below)
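If you prefer not to clone the repository's git history, the huggingface-cli tool (shipped with the huggingface_hub Python package) can fetch a single branch directly; the local directory name below is a placeholder.

```sh
# Download only the 4.0bpw revision (branch) into a local directory.
huggingface-cli download s1arsky/Violet_Magcap-12B-EXL3 \
    --revision 4.0bpw \
    --local-dir ./Violet_Magcap-12B-EXL3-4.0bpw
```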
Model tree for s1arsky/Violet_Magcap-12B-EXL3:
- Base model: Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
- Finetuned: Nitral-AI/Violet_Magcap-12B (the model quantized here)