This repository contains only LoRA/QLoRA adapter weights. You must load them on top of the base model (meta-llama/Llama-3.2-3B-Instruct) using peft.
🚀 How to Use This Model
This model is fine-tuned for Sinhala abstractive QA using QLoRA. The adapter must be loaded on top of the base model:
```python
# Example inference: load the base model, then apply the adapter weights
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and its tokenizer
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

# Apply the QLoRA adapter weights from this repository
model = PeftModel.from_pretrained(base_model, "Sachin-Hansaka/Sinhala-qa-QLoRA-LLaMA-3.2-3B-Instruct")
```
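Since the adapter was trained with QLoRA, you can also load the base model in 4-bit to cut memory use. This is a minimal sketch, assuming bitsandbytes is installed and a CUDA GPU is available; the quantization settings below follow the common QLoRA recipe (nf4 with double quantization) and were not confirmed against the original training config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 4-bit quantization config in the usual QLoRA style (assumed, not the verified training setup)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the base model quantized to 4-bit, then attach the adapter
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
model = PeftModel.from_pretrained(base_model, "Sachin-Hansaka/Sinhala-qa-QLoRA-LLaMA-3.2-3B-Instruct")
```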
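Once loaded, inference works through the standard generate API. A minimal sketch using the Llama 3.2 Instruct chat template; the Sinhala question below ("What is the capital of Sri Lanka?") is a placeholder for illustration, not a sample from the training data:

```python
# Build a QA-style chat prompt (placeholder Sinhala question)
messages = [
    {"role": "user", "content": "ශ්‍රී ලංකාවේ අගනුවර කුමක්ද?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If you want a single standalone model for deployment, peft's `merge_and_unload()` can fold the adapter weights into the base model (note this requires loading the base model in full precision rather than 4-bit).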
Base model: meta-llama/Llama-3.2-3B-Instruct