---
library_name: llama.cpp
license: mit
tags:
- gguf
- q2-k
---

# L3.3-70B-Animus-V9.0A-failed-GGUF

GGUF model files for `L3.3-70B-Animus-V9.0A-failed`.

This repository contains GGUF models quantized using [`llama.cpp`](https://github.com/ggerganov/llama.cpp).

- **Base Model:** `L3.3-70B-Animus-V9.2`
- **Quantization Methods Processed in this Job:** `Q4_K_M`, `Q5_0`, `Q5_K_M`, `Q4_K_S`, `Q4_0`, `Q3_K_L`, `Q3_K_M`, `Q3_K_S`, `Q2_K`
- **Importance Matrix Used:** No

This specific upload is for the **`Q2_K`** quantization.
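
## Usage

A minimal sketch of loading the Q2_K file with the `llama-cpp-python` bindings. The `repo_id` and `filename` below are placeholders, not confirmed names from this repository; check the "Files and versions" tab for the actual GGUF filename before running.

```python
# Sketch: download the Q2_K GGUF and run a chat completion with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical repo id and filename -- replace with the real ones from this repo.
model_path = hf_hub_download(
    repo_id="<user>/L3.3-70B-Animus-V9.0A-failed-GGUF",
    filename="L3.3-70B-Animus-V9.0A-failed-Q2_K.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window; increase if you have the memory
    n_gpu_layers=-1,   # offload all layers to GPU when a GPU build is installed
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

The same file can also be run directly with the `llama-cli` or `llama-server` binaries from `llama.cpp`; even at Q2_K, a 70B model remains a large download and requires substantial RAM or VRAM.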