Llama 3.2 3B Instruct GGUF model file

python llama.cpp\convert_hf_to_gguf.py llama-3.2-3b-instruct --outfile llama-3.2-3b-instruct.gguf --outtype q8_0
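
The command above converts the Hugging Face checkpoint in the llama-3.2-3b-instruct directory into a single GGUF file with Q8_0 quantization. A minimal sketch of loading the resulting file with llama-cpp-python follows; the file name comes from the command above, while the prompt, context size, and token limit are placeholder assumptions:

from llama_cpp import Llama

# Load the converted GGUF file (path assumes the conversion was run in the current directory)
llm = Llama(model_path="llama-3.2-3b-instruct.gguf", n_ctx=4096)

# Chat-style generation; the chat template is typically picked up from the GGUF metadata
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF is in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])

The same file can also be run directly with llama.cpp's llama-cli binary by passing it via -m.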
Format: GGUF
Model size: 3.21B params
Architecture: llama