---
license: apache-2.0
base_model:
- meta-llama/Llama-3.2-1B-Instruct
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
# Llama-3.2-1B-Instruct-f32-GGUF
Llama 3.2 1B Instruct by Meta is a powerful, multilingual large language model with 1.23 billion parameters, optimized for instruction-tuned text-only tasks such as dialogue, agentic retrieval, and summarization. It employs an efficient auto-regressive transformer architecture enhanced with supervised fine-tuning and reinforcement learning from human feedback to align with human preferences for helpfulness and safety. The model supports multiple languages including English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai, and can handle a large context length of up to 128,000 tokens.
It delivers strong performance on a range of industry benchmarks, making it well suited for commercial and research use in building safe, flexible AI systems. Llama 3.2 1B Instruct also emphasizes responsible deployment, and developers are encouraged to implement appropriate safeguards.
## Model Files
| File Name | Size | Quant Type |
|---|---|---|
| Llama-3.2-1B-Instruct.BF16.gguf | 2.48 GB | BF16 |
| Llama-3.2-1B-Instruct.F16.gguf | 2.48 GB | F16 |
| Llama-3.2-1B-Instruct.F32.gguf | 4.95 GB | F32 |
| Llama-3.2-1B-Instruct.Q2_K.gguf | 581 MB | Q2_K |
| Llama-3.2-1B-Instruct.Q3_K_L.gguf | 733 MB | Q3_K_L |
| Llama-3.2-1B-Instruct.Q3_K_M.gguf | 691 MB | Q3_K_M |
| Llama-3.2-1B-Instruct.Q3_K_S.gguf | 642 MB | Q3_K_S |
| Llama-3.2-1B-Instruct.Q4_K_M.gguf | 808 MB | Q4_K_M |
| Llama-3.2-1B-Instruct.Q4_K_S.gguf | 776 MB | Q4_K_S |
| Llama-3.2-1B-Instruct.Q5_K_M.gguf | 912 MB | Q5_K_M |
| Llama-3.2-1B-Instruct.Q5_K_S.gguf | 893 MB | Q5_K_S |
| Llama-3.2-1B-Instruct.Q6_K.gguf | 1.02 GB | Q6_K |
| Llama-3.2-1B-Instruct.Q8_0.gguf | 1.32 GB | Q8_0 |
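The sizes above follow roughly from the parameter count times each format's bits per weight. A back-of-the-envelope sketch (the bits-per-weight figures are approximate, and real GGUF files add metadata and keep some tensors, such as embeddings, at higher precision, so low-bit quants of a 1B model land above the naive estimate):

```python
# Rough GGUF size estimate: parameter count x bits per weight / 8.
# PARAMS is the 1.23B figure quoted for Llama 3.2 1B Instruct.
PARAMS = 1.23e9

def approx_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate file size in GB (1 GB = 1e9 bytes), ignoring metadata."""
    return params * bits_per_weight / 8 / 1e9

# F32 and F16 track the table closely (~4.92 vs 4.95 GB, ~2.46 vs 2.48 GB);
# 8.5 bits/weight for Q8_0 is an approximation that includes scale overhead.
for name, bpw in [("F32", 32.0), ("F16", 16.0), ("Q8_0", 8.5)]:
    print(f"{name}: ~{approx_size_gb(bpw):.2f} GB")
```

This is a useful sanity check when estimating whether a given quant will fit in RAM or VRAM.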
## Quants Usage
(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)
ikawrakow has published a handy graph comparing some lower-quality quant types (lower is better).
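In practice, the mid-size K-quants (Q4_K_M, Q5_K_M) are common defaults. A minimal sketch of running one of the files above locally with llama-cpp-python (an assumption — this repo does not prescribe a runtime; install with `pip install llama-cpp-python` and point `MODEL_PATH` at whichever quant you downloaded):

```python
from pathlib import Path

# Hypothetical local path: any quant from the table above works here.
MODEL_PATH = "Llama-3.2-1B-Instruct.Q4_K_M.gguf"

def chat(prompt: str, model_path: str = MODEL_PATH) -> str:
    """Run a single chat turn against a local GGUF file via llama-cpp-python."""
    from llama_cpp import Llama  # pip install llama-cpp-python
    llm = Llama(model_path=model_path, n_ctx=8192, verbose=False)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=128,
    )
    return out["choices"][0]["message"]["content"]

if Path(MODEL_PATH).exists():
    print(chat("Summarize the GGUF format in one sentence."))
```

`n_ctx` can be raised toward the model's 128K-token limit at the cost of memory; smaller quants trade output quality for a lower footprint.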
