🩺 LLaMA-3-8B Medical Assistant

A compassionate medical AI assistant fine-tuned on the PubMedQA dataset using Unsloth and LoRA.

🎯 Model Description

This model is designed to provide empathetic, caring medical guidance while encouraging professional medical consultation.

  • Base Model: Llama-3-8B-Instruct
  • Fine-tuning: LoRA on PubMedQA dataset
  • Use Case: Educational and research purposes

🚀 Usage

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="chimochimo/llama3-8b-pubmedqa-medical",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)

FastLanguageModel.for_inference(model)

prompt = '''### Instruction:
You are a compassionate, understanding, and caring doctor speaking with a patient.

### Input:
What should I do if I have persistent headaches?

### Response:
'''

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)  # move tensors to the model's device (GPU when loaded in 4-bit)
outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
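The Alpaca-style prompt above can be assembled with a small helper so the template stays consistent across questions. This is a sketch for convenience only; the function name and the default instruction argument are illustrative, not part of the model's API:

```python
def build_prompt(
    question: str,
    instruction: str = (
        "You are a compassionate, understanding, and caring doctor "
        "speaking with a patient."
    ),
) -> str:
    """Format a question into the Alpaca-style template used above."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Input:\n"
        f"{question}\n\n"
        "### Response:\n"
    )

prompt = build_prompt("What should I do if I have persistent headaches?")
```

The model expects generation to start right after the `### Response:` line, so the helper leaves the response section empty for the model to complete.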

⚠️ Disclaimer

This model is for educational and research purposes only. Always consult qualified healthcare professionals for medical advice.

🏷️ Training Details

  • Framework: Unsloth + LoRA
  • Dataset: PubMedQA
  • Training Steps: 60
  • Batch Size: 2
  • Learning Rate: 2e-4
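The hyperparameters above fit the standard Unsloth LoRA recipe. The sketch below shows how such a run could be configured; the base checkpoint name, LoRA rank/alpha, target modules, and dataset split are assumptions for illustration, since the card only specifies the framework, dataset, step count, batch size, and learning rate. Training requires a GPU, so this is a configuration sketch rather than a runnable demo:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Assumed 4-bit base checkpoint; the card only says "Llama-3-8B-Instruct".
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters. r, lora_alpha, and target_modules are assumed values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Map PubMedQA examples into the prompt template (pqa_labeled split assumed).
def to_text(example):
    return {
        "text": (
            "### Instruction:\n"
            "You are a compassionate, understanding, and caring doctor "
            "speaking with a patient.\n\n"
            f"### Input:\n{example['question']}\n\n"
            f"### Response:\n{example['long_answer']}"
        )
    }

dataset = load_dataset("pubmed_qa", "pqa_labeled", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # from the card
        max_steps=60,                   # from the card
        learning_rate=2e-4,             # from the card
        output_dir="outputs",
    ),
)
trainer.train()
```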
