# Insurance LoRA Model
This is a LoRA adapter fine-tuned from `unsloth/Llama-3.2-3B-Instruct-bnb-4bit` on an insurance Q&A dataset of 2000 examples. It uses LoRA with rank 16 and targets the model's transformer layers for parameter-efficient fine-tuning.
## Usage
Load the model with `unsloth` or `transformers`:
```python
import torch
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="your-username/insurance-lora-model",
    max_seq_length=2048,
    dtype=torch.float16,
    load_in_4bit=True,
)
```
## Training Details
- Dataset: 2000 insurance Q&A pairs
- Train/Validation Split: 80/20 (1600 training, 400 validation)
- Training Steps: 250
- Batch Size: 2 (with gradient accumulation)
- Learning Rate: 2e-4
- Optimizer: adamw_8bit
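The hyperparameters above can be sketched as an `unsloth` + TRL training setup. Only the values listed on this card (rank 16, batch size 2, 250 steps, lr 2e-4, `adamw_8bit`) are taken as given; `gradient_accumulation_steps`, `target_modules`, and the `train_dataset`/`val_dataset` placeholders are assumptions:

```python
# Sketch of a training setup consistent with the details above.
# r=16, batch size 2, 250 steps, lr 2e-4, adamw_8bit come from this card;
# gradient_accumulation_steps, target_modules, and the dataset variables
# are assumed/hypothetical placeholders.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank from this card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,  # placeholder: 1600-example split
    eval_dataset=val_dataset,     # placeholder: 400-example split
    args=SFTConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,  # assumed value
        max_steps=250,
        learning_rate=2e-4,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```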
## Model Tree for ZRX1Raziel/insurance-lora-model

- Base model: `meta-llama/Llama-3.2-3B-Instruct`
- Quantized: `unsloth/Llama-3.2-3B-Instruct-bnb-4bit`