!!! Archive of LLaMA-1-13B Model !!!

May 27, 2023 - Monero/Manticore-13b-Chat-Pyg-Guanaco

v000000

This model was converted to GGUF format from Monero/Manticore-13b-Chat-Pyg-Guanaco using llama.cpp. Refer to the original model card for more details on the model.

  • Quants in repo: static Q5_K_M, static Q6_K, static Q8_0
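As a rough guide to which quant fits your hardware, on-disk size can be estimated from parameter count and bits per weight. The bits-per-weight figures below are approximate community values for llama.cpp k-quants, not measurements from this repo:

```python
# Rough on-disk size estimate for a 13B-parameter GGUF model.
# Bits-per-weight values are approximate figures for llama.cpp
# quant types (assumption, not taken from this repo's files).
PARAMS = 13e9
BPW = {"Q5_K_M": 5.5, "Q6_K": 6.56, "Q8_0": 8.5}

for quant, bpw in BPW.items():
    gib = PARAMS * bpw / 8 / 1e9  # bits -> bytes -> GB
    print(f"{quant}: ~{gib:.1f} GB")
```

Actual file sizes will differ somewhat, since some tensors (e.g. embeddings and the output layer) are stored at different precisions.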

Manticore-13b-Chat-Pyg with TimDettmers' Guanaco-13b qLoRA applied.

Downloads last month: 26
Format: GGUF
Model size: 13B params
Architecture: llama
Model tree for v000000/Manticore-13b-Chat-Pyg-Guanaco-GGUFs