---
language:
- en
- et
---
# 4-bit Llammas in GGUF
This is a 4-bit quantized version of the [TartuNLP/Llammas](https://huggingface.co/tartuNLP/Llammas) Llama-2 model in GGUF file format.
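A 4-bit GGUF file can be loaded by any llama.cpp-compatible runtime. A minimal sketch using the `llama-cpp-python` bindings (the local filename, context size, and prompt are illustrative assumptions, not taken from this repository):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical local path: download the quantized .gguf file from this repo first.
llm = Llama(model_path="./llammas-q4.gguf", n_ctx=2048)

# Llammas is tuned for Estonian; the exact prompt format may differ.
out = llm("Tere! Kuidas sul läheb?", max_tokens=64)
print(out["choices"][0]["text"])
```

The same file also works with the llama.cpp command-line tools or other GGUF-aware inference servers.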