Some of my own quants:

  • WizardLM-13B-V1.2_Q5_1_4K.gguf
  • WizardLM-13B-V1.2_Q5_1_8K.gguf

Source: WizardLM

Source Model: WizardLM-13B-V1.2
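
These GGUF files should load with any llama.cpp-based runtime. Below is a minimal sketch using the llama-cpp-python bindings; the local file path, the context-size value, and the Vicuna-style prompt wording are assumptions for illustration, not something specified in this repo.

```python
# Minimal sketch: run one of the 4K-context quants with llama-cpp-python.
# Assumes the GGUF file has already been downloaded locally; the path,
# n_ctx value, and prompt format below are assumptions, not repo-provided.
from llama_cpp import Llama

llm = Llama(
    model_path="WizardLM-13B-V1.2_Q5_1_4K.gguf",  # local path to the downloaded quant
    n_ctx=4096,                                   # 4K variant; use 8192 for the _8K file
)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What is GGUF? ASSISTANT:"
)

output = llm(prompt, max_tokens=128, stop=["USER:"])
print(output["choices"][0]["text"])
```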

Format: GGUF
Model size: 13B params
Architecture: llama
Quantization: 5-bit (Q5_1)
