# Llama-3.2-1B-Instruct-f32-GGUF

Llama 3.2 1B Instruct by Meta is a multilingual large language model with 1.23 billion parameters, instruction-tuned for text-only tasks such as dialogue, agentic retrieval, and summarization. It uses an auto-regressive transformer architecture and is aligned with human preferences for helpfulness and safety through supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). The model officially supports English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai, and handles context lengths of up to 128,000 tokens.

It delivers strong performance on common industry benchmarks, making it well suited for commercial and research use in building safe and flexible AI systems. The Llama 3.2 release also emphasizes responsible deployment and safety, and developers are encouraged to implement appropriate safeguards.
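This repository provides GGUF conversions of the model for use with llama.cpp-compatible runtimes. Below is a minimal sketch using the llama-cpp-python bindings, assuming the package (and its optional `huggingface_hub` dependency) is installed; the Q4_K_M file and the generation parameters are illustrative choices, not recommendations.

```python
from llama_cpp import Llama

# Download a quant directly from this repo and load it.
# Q4_K_M is used here only as an example; any file from the table below works.
llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Llama-3.2-1B-Instruct-f32-GGUF",
    filename="Llama-3.2-1B-Instruct.Q4_K_M.gguf",
    n_ctx=8192,  # the model supports up to 128K tokens, but larger contexts need more RAM
)

# Llama 3.2 Instruct is a chat model, so use the chat-completion API,
# which applies the model's chat template automatically.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise, helpful assistant."},
        {"role": "user", "content": "Summarize what GGUF quantization is in two sentences."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```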

## Model Files

| File Name | Size | Quant Type |
|-----------|------|------------|
| Llama-3.2-1B-Instruct.BF16.gguf | 2.48 GB | BF16 |
| Llama-3.2-1B-Instruct.F16.gguf | 2.48 GB | F16 |
| Llama-3.2-1B-Instruct.F32.gguf | 4.95 GB | F32 |
| Llama-3.2-1B-Instruct.Q2_K.gguf | 581 MB | Q2_K |
| Llama-3.2-1B-Instruct.Q3_K_L.gguf | 733 MB | Q3_K_L |
| Llama-3.2-1B-Instruct.Q3_K_M.gguf | 691 MB | Q3_K_M |
| Llama-3.2-1B-Instruct.Q3_K_S.gguf | 642 MB | Q3_K_S |
| Llama-3.2-1B-Instruct.Q4_K_M.gguf | 808 MB | Q4_K_M |
| Llama-3.2-1B-Instruct.Q4_K_S.gguf | 776 MB | Q4_K_S |
| Llama-3.2-1B-Instruct.Q5_K_M.gguf | 912 MB | Q5_K_M |
| Llama-3.2-1B-Instruct.Q5_K_S.gguf | 893 MB | Q5_K_S |
| Llama-3.2-1B-Instruct.Q6_K.gguf | 1.02 GB | Q6_K |
| Llama-3.2-1B-Instruct.Q8_0.gguf | 1.32 GB | Q8_0 |
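
If you only need to fetch a single quant (for example, to use it with another GGUF runtime), here is a minimal sketch with `huggingface_hub`, assuming it is installed; the Q8_0 file is chosen arbitrarily.

```python
from huggingface_hub import hf_hub_download

# Fetch one quant file from this repo into the local Hugging Face cache.
# Any filename from the table above can be substituted.
model_path = hf_hub_download(
    repo_id="prithivMLmods/Llama-3.2-1B-Instruct-f32-GGUF",
    filename="Llama-3.2-1B-Instruct.Q8_0.gguf",
)
print(model_path)  # local path to the downloaded .gguf file
```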

## Quants Usage

(Sorted by size, not necessarily by quality. IQ quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![Quant type comparison by ikawrakow (lower is better)](image.png)
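
As a rough rule of thumb, the memory needed to run a GGUF model is the file size plus some overhead for the KV cache and runtime buffers. The sketch below picks the largest quant from the table above that fits a given RAM budget; the 1.2x headroom factor is a loose assumption, not a measured value.

```python
# File sizes (in MB) taken from the Model Files table above.
QUANT_SIZES_MB = {
    "Q2_K": 581, "Q3_K_S": 642, "Q3_K_M": 691, "Q3_K_L": 733,
    "Q4_K_S": 776, "Q4_K_M": 808, "Q5_K_S": 893, "Q5_K_M": 912,
    "Q6_K": 1020, "Q8_0": 1320, "F16": 2480, "BF16": 2480, "F32": 4950,
}

def pick_quant(ram_budget_mb: float, headroom: float = 1.2) -> str | None:
    """Return the largest quant whose size (with headroom) fits the RAM budget."""
    fitting = {q: s for q, s in QUANT_SIZES_MB.items() if s * headroom <= ram_budget_mb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(2048))  # with a ~2 GB budget this suggests Q8_0
```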
