# Mistral-7B-Medical-QA-LoRA
This repository contains a LoRA adapter for [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), fine-tuned for medical question answering using QLoRA (4-bit) and PEFT on a custom medical Q&A dataset.
## Model Overview

- Base model: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- Task: Medical question answering
- Technique: LoRA (Low-Rank Adaptation) with 4-bit QLoRA quantization
- Format: Adapter-only (`adapter_model.safetensors`); see the merging sketch below if you need a standalone checkpoint
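Because only the adapter weights are published, the base model is loaded separately and the adapter is attached on top of it. If a single standalone checkpoint is preferred, the adapter can be folded into the base weights with PEFT. The snippet below is a minimal sketch; the repository ID `your-username/mistral-7b-medical-qa-lora` is a placeholder, as in the inference example further down.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the full-precision base model, then attach the LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", torch_dtype="auto"
)
model = PeftModel.from_pretrained(base, "your-username/mistral-7b-medical-qa-lora")

# Fold the low-rank updates into the base weights and save a standalone model.
merged = model.merge_and_unload()
merged.save_pretrained("mistral-7b-medical-qa-merged")
```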
## Intended Use
This model is intended for:
- Patient education
- Clinical assistant prototypes
- Biomedical NLP research
**Not for real-world clinical use.** This model is for research and educational purposes only.
## Evaluation

| Metric | Before LoRA | After LoRA |
|---|---|---|
| BLEU | 0.0145 | 0.0721 |
| F1 | 0.2457 | 0.3901 |
Tested on 100 medical QA samples. Fine-tuning improved answer completeness and accuracy.
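For context, the sketch below shows one way to compute BLEU and a token-overlap F1 over (prediction, reference) pairs. It assumes the Hugging Face `evaluate` library and a SQuAD-style F1; the exact scoring script used for the numbers above may differ.

```python
from collections import Counter
import evaluate

def token_f1(prediction: str, reference: str) -> float:
    """SQuAD-style token-overlap F1 between a predicted and a reference answer."""
    pred_tokens, ref_tokens = prediction.lower().split(), reference.lower().split()
    common = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def score(predictions, references):
    # Corpus BLEU via the `evaluate` library; F1 averaged over all samples.
    bleu = evaluate.load("bleu")
    bleu_score = bleu.compute(predictions=predictions,
                              references=[[r] for r in references])["bleu"]
    f1_score = sum(token_f1(p, r) for p, r in zip(predictions, references)) / len(predictions)
    return {"bleu": bleu_score, "f1": f1_score}
```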
## Training Details

| Config | Value |
|---|---|
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Target modules | q_proj, k_proj, v_proj, o_proj, etc. |
| Epochs | 2 |
| Batch size | 8 (effective, via gradient accumulation) |
| Max length | 512 tokens |
| Quantization | 4-bit (nf4, double quantization) |
| Framework | Hugging Face Transformers + PEFT |
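As a reference, the table above roughly corresponds to the following PEFT/bitsandbytes configuration. This is a reconstruction rather than the exact training script: the dropout value, compute dtype, and the target-module list beyond the four attention projections are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit QLoRA quantization: nf4 with double quantization, as listed in the table.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption; not stated in the table
)

# LoRA adapter: rank 16, alpha 32, attention projections as target modules.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,  # assumption; not stated in the table
    bias="none",
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(base)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```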
## Files Included

- `adapter_model.safetensors` – LoRA weights
- `adapter_config.json` – LoRA adapter configuration
- `tokenizer.json`, `tokenizer_config.json` – Tokenizer files
- `README.md` – This file
- `model_card_data.yaml` – Metadata for the Hugging Face Hub
- `eval_results.json` – Evaluation scores
## Citation
Please credit the original authors of Mistral and cite this fine-tuning work if used in your research or applications.
Developed as part of Phase 1 of a Multimodal Clinical AI Assistant Project.
## Author
## Inference Example
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the tokenizer and the base model, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained("your-username/mistral-7b-medical-qa-lora")
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "your-username/mistral-7b-medical-qa-lora")

def ask_medical_question(question):
    # Mistral-Instruct prompt format.
    prompt = f"<s>[INST] {question} [/INST]"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=150, do_sample=True, temperature=0.7)
    # Return only the answer text that follows the [/INST] tag.
    return tokenizer.decode(output[0], skip_special_tokens=True).split("[/INST]")[-1].strip()

print(ask_medical_question("What is diabetes?"))
```
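Since the adapter was trained with 4-bit QLoRA, the base model can also be loaded in 4-bit at inference time to reduce GPU memory use; this requires the bitsandbytes package. A minimal variant of the loading step, under that assumption:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Load the base model in 4-bit (nf4) instead of float16 to save GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "your-username/mistral-7b-medical-qa-lora")
```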