
🧠 LLaMA 2 Medical LoRA - Symptom Diagnosis Assistant

This repository contains a LoRA fine-tuned adapter trained on medical symptom data using the meta-llama/Llama-2-7b-chat-hf base model. It enables AI-driven, English-language medical assistance by suggesting possible diagnoses based on user-described symptoms.


πŸ‘¨β€πŸ”¬ Developed By


🎯 Purpose

The model was fine-tuned for the following objectives:

  • Help suggest potential diagnoses based on natural language descriptions of symptoms.
  • Assist developers and researchers in building medical chatbots.
  • Explore the use of LoRA + PEFT fine-tuning to reduce computational cost in domain-specific LLMs.

⚠️ This model is not a substitute for professional medical advice and should be used only in research or educational contexts.


πŸ§‘β€βš•οΈ Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from peft import PeftModel

# Load base model and attach the LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", device_map="auto")
model = PeftModel.from_pretrained(base_model, "himanshu8459324875/llama2-medical-lora")

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Create inference pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Run inference
prompt = "Patient has fatigue, unexplained weight loss, and persistent cough. What could be the diagnosis?"
output = pipe(prompt, max_new_tokens=100, temperature=0.7, top_p=0.9, do_sample=True)
print(output[0]["generated_text"])
```
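
Llama 2 chat checkpoints were tuned on prompts wrapped in `[INST]` tags, optionally with a `<<SYS>>` system block, so instruction-wrapped prompts usually behave better than raw text. The helper below is a minimal sketch of that format; the system prompt text is an illustrative assumption, not part of the released adapter.

```python
from typing import Optional

def build_llama2_prompt(user_message: str, system_prompt: Optional[str] = None) -> str:
    """Wrap a user message in the Llama 2 chat [INST] / <<SYS>> format."""
    if system_prompt:
        return f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"
    return f"[INST] {user_message} [/INST]"

prompt = build_llama2_prompt(
    "Patient has fatigue, unexplained weight loss, and persistent cough. "
    "What could be the diagnosis?",
    # Hypothetical system prompt, shown only to illustrate the format.
    system_prompt="You are a cautious medical assistant. Always advise consulting a doctor.",
)
print(prompt)
```

The resulting string can be passed to the pipeline above in place of the plain prompt.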




πŸ“š Dataset

The model was fine-tuned on a medical symptom-diagnosis dataset formatted for causal language modeling. Input examples were natural language sentences like:

"I have fever, chills, and sore throat. What could be the issue?"
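
The card does not publish the exact preprocessing, but for causal language modeling each symptom description and its diagnosis are typically concatenated into a single training string. The sketch below assumes a hypothetical record schema (`symptoms` / `diagnosis` fields) and an illustrative prompt template.

```python
# Hypothetical record layout; the card does not publish the dataset schema.
records = [
    {
        "symptoms": "I have fever, chills, and sore throat. What could be the issue?",
        "diagnosis": "These symptoms are consistent with a throat infection; please see a doctor.",
    },
]

def to_training_text(record: dict) -> str:
    # Concatenate prompt and target into one string: causal LM fine-tuning
    # trains on next-token prediction over the full sequence.
    return f"### Symptoms:\n{record['symptoms']}\n\n### Diagnosis:\n{record['diagnosis']}"

texts = [to_training_text(r) for r in records]
print(texts[0])
```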

---

🛠️ Training Details

- Base model: meta-llama/Llama-2-7b-chat-hf
- LoRA config: PEFT (8-bit adapters)
- Epochs: 3
- Batch size: 2 (with gradient accumulation)
- Frameworks: Transformers, PEFT, Hugging Face Hub
- Training platform: Google Colab / Kaggle
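
With a per-device batch size of 2, gradient accumulation raises the effective batch size seen by the optimizer. The card does not state the number of accumulation steps, so the value below is an assumed example:

```python
# The card states batch size 2 with gradient accumulation but not the number
# of accumulation steps; 8 is an assumed value for illustration only.
per_device_batch_size = 2
gradient_accumulation_steps = 8   # hypothetical
num_devices = 1                   # single Colab/Kaggle GPU

# Gradients are summed over this many samples before each optimizer step.
effective_batch_size = per_device_batch_size * gradient_accumulation_steps * num_devices
print(effective_batch_size)
```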

---

⚙️ Model Architecture

- LLaMA 2 7B with LoRA adapters
- Adapter layers applied to the Q, K, V, O projections in attention and MLP layers
- PEFT for efficient storage and inference
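
A back-of-envelope count shows why LoRA keeps fine-tuning cheap. The sketch below assumes rank r=8 and adapters on only the four attention projections of each decoder layer (the card states neither the rank nor the exact module list, so both are assumptions); each adapted matrix gains two low-rank factors A (r × d_in) and B (d_out × r):

```python
# Estimate of trainable LoRA parameters for Llama-2-7B.
# Assumptions (not stated in the card): rank r=8, adapters on q/k/v/o only.
hidden_size = 4096               # Llama-2-7B model dimension
num_layers = 32                  # decoder layers in Llama-2-7B
adapted_matrices_per_layer = 4   # q_proj, k_proj, v_proj, o_proj
r = 8                            # assumed LoRA rank

# Each adapted d_out x d_in matrix adds r * (d_in + d_out) parameters.
params_per_matrix = r * (hidden_size + hidden_size)
trainable = num_layers * adapted_matrices_per_layer * params_per_matrix
print(f"{trainable:,} trainable parameters (~{trainable / 7e9:.2%} of 7B)")
```

Under these assumptions only about 8.4M of the 7B parameters are trained, which is why the adapter fits comfortably in Colab/Kaggle memory budgets.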