# ⚖️ Legal-LLaMA-3B (Fine-tuned on Indian Legal QA)
## Model Description
This is a LLaMA-3B model fine-tuned with LoRA (adapters merged back into the base weights) using Unsloth on ~14.5K Indian legal question-answer pairs.
The model is designed to act as a legal assistant chatbot specialized in Indian law (contracts, consumer protection, family law, etc.).
- Developed by: DeathBlade020
- Model type: Causal LM (decoder-only)
- Language(s): English (with Indian legal terminology)
- Finetuned from: LLaMA-3B base
- License: LLaMA license (Meta AI)
## Uses
### Direct Use
- Educational / research purposes for Indian law Q&A
- Chatbot-style applications in legal learning
### Out-of-Scope Use
- ❌ Not a substitute for professional legal advice
- ❌ Not intended for real-world legal decision making
## Training Details
- Dataset: ~14,543 Indian legal QA pairs
- Training split: ~13,815 train / ~728 validation
- Method: LoRA fine-tuning with Unsloth
- Epochs: 3
- Max seq length: 2048
- Optimizer: AdamW, lr=2e-4
- Hardware: Google Colab T4 GPU (free tier)
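The hyperparameters above can be collected in one place for reproducibility. A minimal sketch; the values are those stated in this card, but the dict name and key names are illustrative (loosely following `transformers` `TrainingArguments` naming), not the exact config used:

```python
# Fine-tuning setup as described in this card. Keys are illustrative,
# not the literal training script configuration.
training_config = {
    "base_model": "LLaMA-3B",
    "method": "LoRA (Unsloth)",
    "num_train_epochs": 3,
    "max_seq_length": 2048,
    "optimizer": "adamw",
    "learning_rate": 2e-4,
    "train_examples": 13815,
    "eval_examples": 728,
}

# Sanity check: train + validation splits add up to the full ~14.5K dataset.
assert training_config["train_examples"] + training_config["eval_examples"] == 14543
```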
## Installation
Make sure you have the required libraries installed:

```bash
pip install unsloth transformers accelerate torch
```
## Example Usage
```python
from unsloth import FastLanguageModel

# Load the merged model and tokenizer
model_id = "DeathBlade020/legal-llama-3b"
model, tokenizer = FastLanguageModel.from_pretrained(model_id, max_seq_length=2048)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

messages = [
    {"role": "system", "content": "You are a legal expert specializing in Indian law."},
    {"role": "user", "content": "What are the essential elements of a valid contract under Indian law?"},
]

# Build the chat prompt and generate a response
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids=inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
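Note that `model.generate` returns the prompt tokens followed by the continuation, so decoding `outputs[0]` reprints the system and user messages as well. A pure-Python sketch of slicing off the prompt before decoding; the token ids below are hypothetical stand-ins, and with real tensors the prompt length is `inputs.shape[-1]`:

```python
# Stand-in id lists to illustrate the indexing; in the example above these
# would be tensors, with the prompt length given by inputs.shape[-1].
prompt_ids = [1, 9906, 2088, 15]             # hypothetical tokenized chat prompt
output_ids = prompt_ids + [791, 7718, 5540]  # hypothetical outputs[0]

# Keep only the newly generated tokens, then decode those instead.
reply_ids = output_ids[len(prompt_ids):]
assert reply_ids == [791, 7718, 5540]
```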