## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ziyingchen1106/Llama-3.2-3B-Instruct-fp16-lora-gptqmodel-4bit"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="cuda:0",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
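Once the model and tokenizer are loaded as above, a single chat turn can be run through the standard `transformers` generation API. The sketch below is illustrative, not part of the original card: the helper name, prompt handling, and generation settings (greedy decoding, 128 new tokens) are assumptions you can adjust.

```python
# Hedged sketch: run one chat turn through the model loaded above.
# The helper name and generation settings are illustrative choices.
def generate_reply(model, tokenizer, user_message, max_new_tokens=128):
    """Format a single-turn chat prompt and decode the model's reply."""
    import torch  # deferred so the helper can be defined without a GPU

    messages = [{"role": "user", "content": user_message}]
    # apply_chat_template wraps the turn in the model's chat format
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        output_ids = model.generate(
            input_ids,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # greedy decoding for reproducibility
        )
    # Strip the prompt tokens; keep only the newly generated reply.
    reply_ids = output_ids[0][input_ids.shape[-1]:]
    return tokenizer.decode(reply_ids, skip_special_tokens=True)
```

Example call: `generate_reply(model, tokenizer, "What is the capital of France?")`.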

## Attribution

- Built with Llama
- Llama 3.2 Community License © Meta Platforms, Inc.
Safetensors: 846M parameters; tensor types F32, I32, FP16.
