---
license: llama3
base_model: unsloth/llama-3-8b-bnb-4bit-Instruct
tags:
- llama-3
- instruct
- medical
- pubmed
- fine-tuned
- unsloth
- lora
- text-generation
datasets:
- qiaojin/PubMedQA
language:
- en
pipeline_tag: text-generation
---

# 🩺 LLaMA-3-8B Medical Assistant

A compassionate medical AI assistant fine-tuned on the PubMedQA dataset using Unsloth and LoRA.

## 🎯 Model Description

This model is designed to provide empathetic, caring medical guidance while encouraging consultation with medical professionals.

- **Base Model**: Llama-3-8B-Instruct
- **Fine-tuning**: LoRA on the PubMedQA dataset
- **Use Case**: Educational and research purposes

## 🚀 Usage

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="chimochimo/llama3-8b-pubmedqa-medical",
    max_seq_length=2048,
    dtype=None,           # auto-detect (bfloat16 on supported GPUs)
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable fast inference mode

prompt = '''### Instruction:
You are a compassionate, understanding, and caring doctor speaking with a patient.

### Input:
What should I do if I have persistent headaches?

### Response:
'''

# Move inputs to the same device as the model before generating
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## ⚠️ Disclaimer

This model is for educational and research purposes only. Always consult qualified healthcare professionals for medical advice.

## 🏷️ Training Details

- **Framework**: Unsloth + LoRA
- **Dataset**: PubMedQA
- **Training Steps**: 60
- **Batch Size**: 2
- **Learning Rate**: 2e-4
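The usage example hard-codes an Alpaca-style prompt. If you want to reuse the same template for different questions, a small helper can assemble it; note that `build_prompt` is an illustrative name of ours, not a function shipped with this repository or with Unsloth:

```python
def build_prompt(instruction: str, user_input: str) -> str:
    """Assemble the Alpaca-style prompt format shown in the usage example."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Input:\n"
        f"{user_input}\n\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "You are a compassionate, understanding, and caring doctor speaking with a patient.",
    "What should I do if I have persistent headaches?",
)
```

The resulting string can be passed to `tokenizer([prompt], return_tensors="pt")` exactly as in the example above.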