---
language:
- en
- et
---
# 4-bit Llammas in GGUF
This is a 4-bit quantized version of the [TartuNLP/Llammas](https://huggingface.co/tartuNLP/Llammas) Llama 2-based model, in the GGUF file format.
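GGUF files can be loaded locally with [llama.cpp](https://github.com/ggerganov/llama.cpp) or its Python bindings, `llama-cpp-python`. A minimal sketch (the `.gguf` filename below is hypothetical; use the actual file listed in this repository):

```python
# Sketch: run the 4-bit quantized model with llama-cpp-python.
# NOTE: "llammas-q4_0.gguf" is a placeholder filename; download the real
# .gguf file from this repository and substitute its name here.
from llama_cpp import Llama

llm = Llama(model_path="llammas-q4_0.gguf", n_ctx=2048)

# Llammas is tuned for Estonian (and English), so an Estonian prompt works:
out = llm("Tere! Palun tutvusta ennast lühidalt.", max_tokens=128)
print(out["choices"][0]["text"])
```

Quantization to 4 bits trades a small amount of generation quality for a much smaller file and lower memory use, which is what makes CPU-only inference with llama.cpp practical.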