# Medical Assistant LoRA Adapter

This is a LoRA (Low-Rank Adaptation) adapter for medical conversations, fine-tuned on top of the LLaMA-7B base model.
## Model Details
- Base Model: baffo32/decapoda-research-llama-7B-hf
- Adapter Type: LoRA
- LoRA Rank: 2
- LoRA Alpha: 4
- Target Modules: q_proj, v_proj
- Task: Medical Conversation Generation
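With rank 2 applied only to the query and value projections, the adapter is tiny relative to the 7B base model. A back-of-the-envelope sketch (assuming the standard LLaMA-7B shapes of 32 decoder layers and hidden size 4096, which the card does not state explicitly) puts the trainable parameter count at about 1M:

```python
# Rough LoRA parameter count for this adapter.
# Assumes standard LLaMA-7B dimensions: 32 layers, hidden size 4096.
num_layers = 32
hidden_size = 4096
rank = 2
modules_per_layer = 2  # q_proj and v_proj

# Each adapted module gains two low-rank matrices:
# A (rank x in_features) and B (out_features x rank).
params_per_module = rank * hidden_size + hidden_size * rank

total = params_per_module * modules_per_layer * num_layers
print(total)  # 1048576, i.e. roughly 1M trainable parameters
```

That is well under 0.02% of the base model's ~6.7B parameters, which is why the adapter checkpoint is only a few megabytes.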
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("baffo32/decapoda-research-llama-7B-hf")
tokenizer = AutoTokenizer.from_pretrained("baffo32/decapoda-research-llama-7B-hf")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "morvinp/medical-llama-lora-adapter")

# Generate a response (do_sample=True is required for temperature to take effect)
prompt = "What are the symptoms of flu?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Training Details
This adapter was trained using the following configuration:
- LoRA rank: 2
- LoRA alpha: 4
- LoRA dropout: 0.05
- Target modules: ["q_proj", "v_proj"]
- Training data: Medical dialogue dataset
## Intended Use
- Medical conversation assistance
- Healthcare information queries
- Educational purposes in medical domain
## Limitations

⚠️ Important Disclaimers:
- This model is for informational purposes only
- Should not replace professional medical advice
- Always consult healthcare professionals for medical decisions
- Not validated for clinical use
## License
Apache 2.0
## Citation
If you use this model, please cite:
```bibtex
@misc{medical-lora-adapter,
  title={Medical Assistant LoRA Adapter},
  author={Morvin},
  year={2025},
  url={https://huggingface.co/morvinp/medical-llama-lora-adapter}
}
```