WizardLM-2-8x22B-GGUF Quants

Readme will be updated as additional quants are uploaded.

Q4_K: ~80 GB
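
A minimal sketch of loading the Q4_K quant locally with llama-cpp-python; the filename, context size, and prompt below are assumptions for illustration, not part of this repo's file listing.

```python
# Minimal sketch: running a GGUF quant with llama-cpp-python.
# The model filename is an assumption; substitute the actual Q4_K file
# downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="WizardLM-2-8x22B.Q4_K.gguf",  # assumed filename
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers to GPU if it fits, else lower
)

out = llm("Explain mixture-of-experts models in one paragraph.", max_tokens=256)
print(out["choices"][0]["text"])
```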

Format: GGUF
Model size: 141B params
Architecture: llama