---
library_name: llama.cpp
license: mit
tags:
- gguf
- iq1-m
base_model:
- Darkhn/M3.2-24B-Animus-V7.1
---

# M3.2-24B-Animus-V7.1-GGUF

GGUF model files for `M3.2-24B-Animus-V7.1`.

This repository contains GGUF models quantized using [`llama.cpp`](https://github.com/ggerganov/llama.cpp).

- **Base Model:** `M3.2-24B-Animus-V7.1`
- **Quantization Methods Processed in this Job:** `IQ4_NL`, `Q5_K_S`, `IQ4_XS`, `IQ3_M`, `IQ3_S`, `IQ3_XS`, `IQ3_XXS`, `Q2_K_S`, `IQ2_M`, `IQ2_S`, `IQ2_XS`, `IQ2_XXS`, `IQ1_M`
- **Importance Matrix Used:** Yes

This specific upload is for the **`IQ1_M`** quantization.
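
## Example usage

A minimal sketch of loading this quantization with the `llama-cpp-python` bindings. This is an assumption on my part (any llama.cpp-compatible runtime will also work), and the file name in `model_path` is illustrative; substitute the actual GGUF file name from this repository.

```python
# Minimal sketch: run the IQ1_M GGUF with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the quantized file
# has been downloaded locally; the path below is a hypothetical example.
from llama_cpp import Llama

llm = Llama(
    model_path="./M3.2-24B-Animus-V7.1-IQ1_M.gguf",  # hypothetical local path
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers if built with GPU support
)

output = llm(
    "Write a short story opening in two sentences.",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```

IQ1_M is an extremely aggressive quantization; expect noticeably lower output quality than the larger quants listed above, in exchange for the smallest file size and memory footprint.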