Fine-tuned Llama-3 1B
This repository contains a fine-tuned version of the meta-llama/Meta-Llama-3-1B model.
Clone the repository (Git LFS must be installed, since the model weights are stored with LFS):

git clone https://huggingface.co/<REPO_ID>

or load it directly with Transformers:
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the fine-tuned weights and the tokenizer saved alongside them
model = AutoModelForCausalLM.from_pretrained("<REPO_ID>")
tokenizer = AutoTokenizer.from_pretrained("<REPO_ID>")
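Once loaded, generation works as with any causal language model. A minimal sketch (the prompt and generation settings below are illustrative, not part of this repository):

prompt = "Explain what fine-tuning is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
# Greedy decoding by default; adjust max_new_tokens or sampling options as needed
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))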