
# Infatoshi/llama3.1-sft-v1

This is a fine-tuned version of NousResearch/Llama-3.2-1B using QLoRA.

## Model description

- Base model: NousResearch/Llama-3.2-1B
- Training technique: QLoRA (see the sketch below)
- Training data: Custom dataset
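
The exact training hyperparameters are not published here. The following is a minimal sketch of how a QLoRA fine-tune of this base model is typically set up with `transformers`, `bitsandbytes`, and `peft`; the quantization settings and LoRA parameters below are illustrative assumptions, not the values used for this checkpoint.

```python
# Hypothetical QLoRA setup sketch; hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model with 4-bit NF4 quantization (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-3.2-1B",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach low-rank adapters; only these small matrices are trained.
lora_config = LoraConfig(
    r=16,                # assumed rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```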

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("Infatoshi/llama3.1-sft-v1")
tokenizer = AutoTokenizer.from_pretrained("Infatoshi/llama3.1-sft-v1")

# Tokenize a prompt and generate a completion.
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
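
On a GPU you may prefer to load the weights in 4-bit, mirroring the QLoRA training setup. A sketch, assuming `bitsandbytes` is installed; the sampling settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Optional: 4-bit loading to reduce memory (requires bitsandbytes).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

model = AutoModelForCausalLM.from_pretrained(
    "Infatoshi/llama3.1-sft-v1",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Infatoshi/llama3.1-sft-v1")

inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```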