# My Fine-Tuned BLIP-2 Model

A custom BLIP-2 model fine-tuned for visual question answering (VQA) with LoRA adapters.

## Usage

```python
from transformers import Blip2ForConditionalGeneration, Blip2Processor
import torch

model = Blip2ForConditionalGeneration.from_pretrained(
    "Magneto76/lora_blip2",
    torch_dtype=torch.float16,
    device_map="auto",
)
processor = Blip2Processor.from_pretrained("Magneto76/lora_blip2")

def infer(image, question):
    # Cast the floating-point inputs (pixel values) to float16 so they
    # match the half-precision model weights.
    inputs = processor(images=image, text=question, return_tensors="pt").to(
        model.device, torch.float16
    )
    # Without max_new_tokens, generate() may stop after very few tokens.
    outputs = model.generate(**inputs, max_new_tokens=50)
    return processor.decode(outputs[0], skip_special_tokens=True)
```
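For example, with an image loaded via Pillow. The file name below is a placeholder, and the `Question: ... Answer:` template is the prompt format commonly used with BLIP-2 VQA checkpoints; adjust it if this fine-tune expects a different format:

```python
from PIL import Image

# Placeholder image path; substitute your own file.
image = Image.open("example.jpg").convert("RGB")

# "Question: ... Answer:" is the prompt template commonly used for
# BLIP-2 VQA; this fine-tune may expect a different format.
print(infer(image, "Question: What is shown in the image? Answer:"))
```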
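Note: if this repository ships raw LoRA adapter weights rather than a merged checkpoint, the snippet above may fail to load directly; in that case the adapters can be attached to the base model with PEFT. A minimal sketch, assuming `Salesforce/blip2-flan-t5-xl` as the base checkpoint (an assumption based on the parameter count, not confirmed by this card):

```python
import torch
from transformers import Blip2ForConditionalGeneration, Blip2Processor
from peft import PeftModel

# Assumption: Salesforce/blip2-flan-t5-xl is the base checkpoint the LoRA
# adapters were trained on; swap in the actual base model if it differs.
base = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xl",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Magneto76/lora_blip2")
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
```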