# 🩺 Mistral-7B-Medical-QA-LoRA

This repository contains a LoRA adapter for [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), fine-tuned for medical question answering using QLoRA (4-bit) and PEFT on a custom medical Q&A dataset.

## ✅ Model Overview

- 🔬 Base model: mistralai/Mistral-7B-Instruct-v0.2
- 🧠 Task: Medical question answering
- ⚙️ Technique: LoRA (Low-Rank Adaptation) + 4-bit QLoRA
- 📦 Format: Adapter-only (`adapter_model.safetensors`)

## 💡 Intended Use

This model is intended for:

- Patient education
- Clinical assistant prototypes
- Biomedical NLP research

⚠️ **Not for real-world clinical use.** This model is for research/educational purposes only.

## 🧪 Evaluation

| Metric | Before LoRA | After LoRA |
|--------|-------------|------------|
| BLEU   | 0.0145      | 0.0721     |
| F1     | 0.2457      | 0.3901     |

Tested on 100 medical QA samples. Fine-tuning improved answer completeness and accuracy.
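
The evaluation script itself is not bundled in this repo; below is a minimal sketch of one plausible way to compute corpus BLEU (via the `evaluate` library) and SQuAD-style token-overlap F1 over (prediction, reference) pairs. The sample strings are illustrative only.

```python
import evaluate
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """SQuAD-style token-overlap F1 between one prediction and one reference."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Hypothetical generated answers and gold references
predictions = ["Diabetes is a chronic condition affecting blood sugar regulation."]
references = ["Diabetes is a chronic disease that affects how the body regulates blood sugar."]

bleu = evaluate.load("bleu")
bleu_score = bleu.compute(predictions=predictions,
                          references=[[r] for r in references])["bleu"]
f1_score = sum(token_f1(p, r) for p, r in zip(predictions, references)) / len(predictions)
print(f"BLEU: {bleu_score:.4f}  F1: {f1_score:.4f}")
```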

πŸ› οΈ Training Details Config Value LoRA Rank 16 LoRA Alpha 32 Target Modules q_proj, k_proj, v_proj, o_proj, etc. Epochs 2 Batch Size 8 (effective via gradient accumulation) Max Length 512 tokens Quantization 4-bit (nf4, double quant) Framework Hugging Face Transformers + PEFT

πŸ“ Files Included adapter_model.safetensors – LoRA weights

adapter_config.json – LoRA structure

tokenizer.json, tokenizer_config.json – Tokenizer files

README.md – This file

model_card_data.yaml – Metadata for HF Hub

eval_results.json – Evaluation scores
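
Since only the adapter is shipped, `adapter_config.json` can be inspected directly through PEFT to confirm the base model and LoRA hyperparameters (repo id taken from this model page):

```python
from peft import PeftConfig

# Reads adapter_config.json from the Hub and exposes its fields
config = PeftConfig.from_pretrained("dsuram/mistral-medical-finetuned")
print(config.base_model_name_or_path)  # mistralai/Mistral-7B-Instruct-v0.2
print(config.r, config.lora_alpha, config.target_modules)
```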

## ✍️ Citation

If you use this model in your research or applications, please credit the original authors of Mistral and cite this fine-tuning work.

📌 Developed as part of Phase 1 of a Multimodal Clinical AI Assistant Project.

πŸ™‹β€β™€οΈ Author

Dileep Reddy Suram


## 🚀 Inference Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the tokenizer from the adapter repo and the fp16 base model,
# then attach the LoRA adapter (repo id per this model page)
tokenizer = AutoTokenizer.from_pretrained("dsuram/mistral-medical-finetuned")
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "dsuram/mistral-medical-finetuned")
model.eval()

def ask_medical_question(question):
    # Mistral-Instruct prompt format
    prompt = f"<s>[INST] {question} [/INST]"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=150,
            do_sample=True,  # needed for temperature to take effect
            temperature=0.7,
        )
    # Keep only the text generated after the instruction
    return tokenizer.decode(output[0], skip_special_tokens=True).split("[/INST]")[-1].strip()

print(ask_medical_question("What is diabetes?"))
```
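
For deployment, the adapter can optionally be folded into the (non-quantized) base weights with PEFT's `merge_and_unload`, which removes the adapter overhead at inference time. The output directory name below is arbitrary.

```python
# Merge the LoRA weights into the fp16 base model and save a standalone checkpoint
merged = model.merge_and_unload()
merged.save_pretrained("mistral-7b-medical-qa-merged")  # hypothetical output path
tokenizer.save_pretrained("mistral-7b-medical-qa-merged")
```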