---
library_name: llama.cpp
license: mit
tags:
- gguf
base_model:
- Darkhn/L3.3-70B-Animus-V1
---
# L3.3-70B-Animus-V1-GGUF

GGUF model files for `L3.3-70B-Animus-V1` (original base model: `Darkhn/L3.3-70B-Animus-V1`).

This repository contains the following quantization: **Q5_K_M**.
## Files

- `L3.3-70B-Animus-V1-Q5_K_M.gguf`

Converted and quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp).
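
## Usage

A minimal usage sketch with the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings, which load GGUF files produced by llama.cpp. This is illustrative only: the local file path, context size, and generation settings below are assumptions, not values shipped with this repository.

```python
# Minimal sketch: load the Q5_K_M GGUF file with llama-cpp-python and generate text.
# Assumes the .gguf file has already been downloaded next to this script; adjust paths as needed.
from llama_cpp import Llama

llm = Llama(
    model_path="./L3.3-70B-Animus-V1-Q5_K_M.gguf",  # local path to the quantized model (assumption)
    n_ctx=4096,        # context window; choose what fits your hardware
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
)

output = llm(
    "Write a short haiku about quantization.",  # example prompt
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```

The same file can also be run directly with the stock `llama-cli` binary built from llama.cpp, e.g. `llama-cli -m L3.3-70B-Animus-V1-Q5_K_M.gguf -p "your prompt"`.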