# 🧠 LLaMA 2 Medical LoRA - Symptom Diagnosis Assistant
This repository contains a LoRA adapter fine-tuned on medical symptom data for the meta-llama/Llama-2-7b-chat-hf
base model. It provides English-language medical assistance by suggesting possible diagnoses from user-described symptoms.
## 👨‍🔬 Developed By
- Name: Himanshu Talodhikar
- Hugging Face: himanshu8459324875
- Email: [email protected]
## 🎯 Purpose
The model was fine-tuned for the following objectives:
- Suggest potential diagnoses from natural-language descriptions of symptoms.
- Assist developers and researchers in building medical chatbots.
- Explore the use of LoRA + PEFT fine-tuning to reduce computational cost in domain-specific LLMs.
⚠️ This model is not a substitute for professional medical advice and should be used only in research or educational contexts.
## 🧑‍⚕️ Example Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from peft import PeftModel

# Llama 2 is a gated model: accept the license on its Hugging Face page
# and authenticate (e.g. `huggingface-cli login`) before downloading.

# Load the base model and apply the LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "himanshu8459324875/llama2-medical-lora")

# Load the tokenizer from the base model
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Create the inference pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Run inference
prompt = "Patient has fatigue, unexplained weight loss, and persistent cough. What could be the diagnosis?"
output = pipe(prompt, max_new_tokens=100, temperature=0.7, top_p=0.9, do_sample=True)
print(output[0]["generated_text"])
```
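After loading, the adapter can optionally be merged into the base weights so inference runs without the extra adapter indirection. A minimal sketch using PEFT's `merge_and_unload()` (the output directory name is illustrative):

```python
# Merge the LoRA weights into the base model for adapter-free inference.
# merge_and_unload() returns a plain transformers model.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("llama2-medical-merged")  # illustrative path
tokenizer.save_pretrained("llama2-medical-merged")
```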
## 📊 Dataset
The model was fine-tuned on a medical symptom-diagnosis dataset formatted for causal language modeling. Input examples were natural language sentences like:
“I have fever, chills, and sore throat. What could be the issue?”
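The exact dataset schema is not published here, but the sketch below shows how such symptom/diagnosis pairs could be rendered into the Llama 2 chat prompt format for causal LM training. The field names `symptoms` and `diagnosis` are illustrative assumptions, not the actual schema:

```python
# Hypothetical formatting for causal LM fine-tuning on symptom/diagnosis pairs.
# The [INST] ... [/INST] markup follows the Llama 2 chat convention; the
# field names below are assumptions, not the actual dataset schema.
def format_example(example: dict) -> str:
    return (
        f"<s>[INST] {example['symptoms']} [/INST] "
        f"{example['diagnosis']} </s>"
    )

sample = {
    "symptoms": "I have fever, chills, and sore throat. What could be the issue?",
    "diagnosis": "These symptoms are consistent with influenza or another viral infection.",
}
print(format_example(sample))
```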
---
## 🛠️ Training Details
- Base model: meta-llama/Llama-2-7b-chat-hf
- Fine-tuning method: LoRA via PEFT, with the base model loaded in 8-bit
- Epochs: 3
- Batch size: 2 (with gradient accumulation)
- Frameworks: Transformers, PEFT, Hugging Face Hub
- Training platform: Google Colab / Kaggle
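For reference, a hedged sketch of what this setup could look like in code. Only the base model, epoch count, batch size, and 8-bit loading come from this card; the LoRA rank, alpha, dropout, accumulation steps, and learning rate are assumptions:

```python
from transformers import (AutoModelForCausalLM, BitsAndBytesConfig,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 8-bit to reduce memory (per the "8-bit" note above)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)  # prep quantized model for training

lora_cfg = LoraConfig(
    r=16,                 # assumed rank
    lora_alpha=32,        # assumed scaling factor
    lora_dropout=0.05,    # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)

args = TrainingArguments(
    output_dir="llama2-medical-lora",
    num_train_epochs=3,              # from this card
    per_device_train_batch_size=2,   # from this card
    gradient_accumulation_steps=8,   # assumed
    learning_rate=2e-4,              # assumed
    fp16=True,
    logging_steps=10,
)
# trainer = Trainer(model=model, args=args, train_dataset=tokenized_dataset)
# trainer.train()
```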
---
## ⚙️ Model Architecture
- LLaMA 2 7B with LoRA adapters
- Adapter weights applied to the attention projections (Q, K, V, O) and to the MLP layers
- PEFT used for efficient storage and inference
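To verify which modules actually carry LoRA weights, the adapter configuration can be read directly from the Hub (repo id taken from the usage example above):

```python
from peft import PeftConfig

# Load only the adapter configuration, not the weights
cfg = PeftConfig.from_pretrained("himanshu8459324875/llama2-medical-lora")
print(cfg.target_modules)     # e.g. {'q_proj', 'k_proj', 'v_proj', 'o_proj'}
print(cfg.r, cfg.lora_alpha)  # LoRA rank and scaling factor
```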