L quants (and others), made for fun/testing. You should probably prefer bartowski's or mradermacher's quants if they are available.

Original Model: https://huggingface.co/jeiku/AuraFinal12B

Made with a modified version of https://huggingface.co/FantasiaFoundry/GGUF-Quantization-Script

Q2_K_L, Q4_K_L, Q5_K_L, and Q6_K_L use Q8_0 output tensors and token embeddings. The imatrix was generated with bartowski's imatrix dataset.
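
For reference, "_L" variants of this kind are typically produced by overriding the output-tensor and token-embedding types during quantization. Below is a minimal sketch invoking llama.cpp's llama-quantize tool; the binary location, file names, and imatrix path are assumptions, not the exact script used for this repo:

```python
# Minimal sketch of producing a Q4_K_L-style quant with llama.cpp's
# llama-quantize. All paths and filenames below are hypothetical; this is
# not the exact script used to make these files.
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "--imatrix", "imatrix.dat",        # importance matrix (from bartowski's dataset)
        "--output-tensor-type", "q8_0",    # keep the output tensor at Q8_0
        "--token-embedding-type", "q8_0",  # keep token embeddings at Q8_0
        "AuraFinal12B-F16.gguf",           # input: full-precision GGUF conversion
        "AuraFinal12B-Q4_K_L.gguf",        # output: the "_L" variant
        "Q4_K_M",                          # base quant type for the remaining tensors
    ],
    check=True,
)
```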

GGUF · 12.2B params · llama architecture
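
To try one of these quants locally, here is a minimal usage sketch with llama-cpp-python (the filename is hypothetical; substitute whichever quant you download):

```python
# Minimal usage sketch, assuming llama-cpp-python is installed and a quant
# from this repo has been downloaded. The filename below is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="AuraFinal12B-Q4_K_L.gguf", n_ctx=4096)
result = llm("Once upon a time,", max_tokens=64)
print(result["choices"][0]["text"])
```

Any llama.cpp-based runtime (llama-cli, koboldcpp, and similar) can load these files as well.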