llama-3.2-1b-bf16

  • base model from Meta
  • intended for text generation
  • can be converted to GGUF with convert_hf_to_gguf.py (a conversion sketch follows below)
  • the resulting GGUF file can then be run with llama.cpp, Ollama, etc.
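A minimal sketch of the convert-then-run workflow, assuming llama.cpp has been cloned to ./llama.cpp and built, and this repo has been downloaded locally to ./llama-3.2-1b-bf16; the paths, prompt, and token count are placeholders, not part of this card.

```python
# Sketch: convert the BF16 safetensors checkpoint to GGUF, then run it with llama.cpp.
# Assumes llama.cpp is cloned at ./llama.cpp (with llama-cli built) and this repo is
# downloaded at ./llama-3.2-1b-bf16; adjust paths to your setup.
import subprocess

MODEL_DIR = "./llama-3.2-1b-bf16"        # local copy of this repo
GGUF_OUT = "./llama-3.2-1b-bf16.gguf"    # converted output file

# Step 1: convert the Hugging Face checkpoint to GGUF, keeping BF16 weights.
subprocess.run(
    [
        "python", "./llama.cpp/convert_hf_to_gguf.py",
        MODEL_DIR,
        "--outfile", GGUF_OUT,
        "--outtype", "bf16",
    ],
    check=True,
)

# Step 2: generate text from the GGUF file with the llama.cpp CLI.
subprocess.run(
    [
        "./llama.cpp/build/bin/llama-cli",
        "-m", GGUF_OUT,
        "-p", "Hello, my name is",
        "-n", "64",    # number of tokens to generate
    ],
    check=True,
)
```

For Ollama, the same GGUF file can be referenced from a Modelfile (`FROM ./llama-3.2-1b-bf16.gguf`) and registered with `ollama create`.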
Safetensors · 1.24B params · BF16