Llamacpp imatrix Quantizations of Gemma3-12B-CIC-ACLARC

Quantized using llama.cpp.

Original model: https://huggingface.co/sknow-lab/Gemma3-12B-CIC-ACLARC

Prompt format

<bos><start_of_turn>user
{system_prompt}

{prompt}<end_of_turn>
<start_of_turn>model
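The template above folds the system prompt into the first user turn and leaves generation to begin after the final `<start_of_turn>model` tag. A minimal sketch of assembling that string in Python (the helper name `build_prompt` and the example prompts are illustrative, not part of this repo):

```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    # Gemma 3 chat format: the system prompt is placed inside the
    # first user turn; the string ends with the opening model tag
    # so the model continues from there.
    return (
        "<bos><start_of_turn>user\n"
        f"{system_prompt}\n\n"
        f"{prompt}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# Example usage with placeholder prompts
print(build_prompt(
    "Classify the intent of the citation in the sentence.",
    "The sentence with the citation goes here.",
))
```

Most llama.cpp frontends apply this template automatically from the GGUF metadata; building it by hand is only needed for raw completion-style calls.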

Citation

@misc{koloveas2025llmspredictcitationintent,
      title={Can LLMs Predict Citation Intent? An Experimental Analysis of In-context Learning and Fine-tuning on Open LLMs}, 
      author={Paris Koloveas and Serafeim Chatzopoulos and Thanasis Vergoulis and Christos Tryfonopoulos},
      year={2025},
      eprint={2502.14561},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.14561}, 
}
Format: GGUF
Model size: 11.8B params
Architecture: gemma3