# 🩺 LLaMA-3-8B Medical Assistant

A compassionate medical AI assistant fine-tuned on the PubMedQA dataset using Unsloth and LoRA.
## 🎯 Model Description
This model is designed to provide empathetic, caring medical guidance while encouraging professional medical consultation.
- **Base Model:** Llama-3-8B-Instruct
- **Fine-tuning:** LoRA on the PubMedQA dataset
- **Use Case:** Educational and research purposes
## 🚀 Usage
```python
from unsloth import FastLanguageModel

# Load the 4-bit quantized model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="chimochimo/llama3-8b-pubmedqa-medical",
    max_seq_length=2048,
    dtype=None,          # auto-detect (e.g. bfloat16 on supported GPUs)
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

prompt = '''### Instruction:
You are a compassionate, understanding, and caring doctor speaking with a patient.
### Input:
What should I do if I have persistent headaches?
### Response:
'''

# Move input tensors to the same device as the model before generating
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
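The model expects the Alpaca-style Instruction/Input/Response template shown above. A small helper (hypothetical, not part of this repository) can format arbitrary patient questions into that template so the prompt layout stays consistent:

```python
# Hypothetical helper for building prompts in the template this model
# was fine-tuned on. The system text below mirrors the usage example.
SYSTEM = ("You are a compassionate, understanding, and caring doctor "
          "speaking with a patient.")

def build_prompt(question: str, system: str = SYSTEM) -> str:
    """Format a question into the Instruction/Input/Response template."""
    return (
        "### Instruction:\n"
        f"{system}\n"
        "### Input:\n"
        f"{question}\n"
        "### Response:\n"
    )

print(build_prompt("What should I do if I have persistent headaches?"))
```

Passing the result of `build_prompt(...)` to `tokenizer([...])` reproduces the prompt used in the usage example.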
## ⚠️ Disclaimer
This model is for educational and research purposes only. Always consult qualified healthcare professionals for medical advice.
## 🏷️ Training Details
- **Framework:** Unsloth + LoRA
- **Dataset:** PubMedQA
- **Training Steps:** 60
- **Batch Size:** 2
- **Learning Rate:** 2e-4
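The hyperparameters above could be wired into a training configuration along these lines. This is a minimal sketch, not the actual training script: it assumes the standard Unsloth + TRL `SFTTrainer` workflow, and the base checkpoint name, LoRA rank, target modules, and dataset text field are all assumptions not stated in this card.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Assumed base checkpoint; the card only says "Llama-3-8B-Instruct".
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,        # PubMedQA, pre-formatted into the prompt template
    dataset_text_field="text",    # assumed field name
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # from the card
        max_steps=60,                   # from the card
        learning_rate=2e-4,             # from the card
        output_dir="outputs",
    ),
)
trainer.train()
```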