---
library_name: llama.cpp
license: mit
tags:
- gguf
- q2-k
---

# M3.2-36B-Animus-V8.0-GGUF

GGUF model files for `M3.2-36B-Animus-V8.0`, quantized with [`llama.cpp`](https://github.com/ggerganov/llama.cpp).

- **Base Model:** `M3.2-36B-Animus-V8.0`
- **Quantization Methods Processed in this Job:** `Q8_0`, `Q6_K`, `Q5_K_M`, `Q5_0`, `Q5_K_S`, `Q4_K_M`, `Q4_K_S`, `Q4_0`, `Q3_K_L`, `Q3_K_M`, `Q3_K_S`, `Q2_K`
- **Importance Matrix Used:** No

This specific upload is for the **`Q2_K`** quantization.
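A minimal sketch of downloading and running the `Q2_K` file with llama.cpp's `llama-cli`. The GGUF filename and repository ID below are assumptions based on common naming conventions; substitute the actual values from this repository:

```shell
# Download the Q2_K GGUF from the Hub (repo ID and filename are assumed)
huggingface-cli download <repo-id> M3.2-36B-Animus-V8.0-Q2_K.gguf --local-dir .

# Run interactively with llama.cpp:
#   -m   path to the GGUF model file
#   -ngl number of layers to offload to the GPU (99 = offload all that fit)
#   -c   context window size in tokens
./llama-cli -m M3.2-36B-Animus-V8.0-Q2_K.gguf -ngl 99 -c 4096 -p "Hello"
```

`Q2_K` is the smallest quantization in this job, so it fits in the least memory at the cost of the largest quality loss; the higher-bit variants listed above trade memory for fidelity.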