Some of my own quants:

  • Pygmalion-2-13b-SuperCOT2_Q4_K_M.gguf
  • Pygmalion-2-13b-SuperCOT2_Q5_K_M.gguf
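After downloading, a quick way to sanity-check a quant file is to read the fixed GGUF header. This is a minimal sketch using only the standard library, assuming the documented GGUF layout (4-byte `GGUF` magic, little-endian uint32 version, uint64 tensor count, uint64 metadata KV count); the function name and return shape are illustrative, not part of any library API:

```python
import struct

def read_gguf_header(path):
    """Parse the fixed-size GGUF header and return its fields.

    Assumed layout: b"GGUF" magic, uint32 version, uint64 tensor count,
    uint64 metadata key/value count, all little-endian.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        (version,) = struct.unpack("<I", f.read(4))
        (tensor_count,) = struct.unpack("<Q", f.read(8))
        (kv_count,) = struct.unpack("<Q", f.read(8))
    return {"version": version, "tensor_count": tensor_count, "kv_count": kv_count}
```

A truncated or partially downloaded file will typically fail the magic check or raise a `struct.error`, which makes this a cheap pre-flight test before loading a 13B model into memory.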

Source: royallab

Source model: royallab/Pygmalion-2-13b-SuperCOT2 (a merge)

Downloads last month: 20

Format: GGUF
Model size: 13B params
Architecture: llama
Quantization: 4-bit (Q4_K_M), 5-bit (Q5_K_M)