🦷 doctor-dental-implant-LoRA-llama3.2-3B

This is a LoRA adapter trained on top of meta-llama/Llama-3.2-3B using Unsloth, aligning the model to doctor–patient conversations and dental implant-related Q&A.

The adapter improves the model's instruction-following and medical-dialogue performance within the dental implant domain (e.g. Straumann® surgical workflows).


🔧 Model Details

  • Base model: meta-llama/Llama-3.2-3B
  • Adapter type: LoRA via PEFT
  • Framework: Unsloth
  • Quantization for training: QLoRA (bitsandbytes 4-bit)
  • Training objective: Instruction-tuning on domain-specific dialogue
  • Dataset: BirdieByte1024/doctor-dental-llama-qa
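
For reference, a minimal sketch of how a QLoRA setup like this typically looks in Unsloth. The hyperparameters below (max sequence length, rank, alpha, target modules) are illustrative assumptions, not the exact training configuration:

from unsloth import FastLanguageModel

# Load the base model with bitsandbytes 4-bit quantization (QLoRA-style)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-3B",
    max_seq_length=2048,   # assumed context length
    load_in_4bit=True,
)

# Attach trainable LoRA adapters; rank/alpha/target modules are illustrative
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)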

🧠 Dataset

The adapter was fine-tuned on BirdieByte1024/doctor-dental-llama-qa, a synthetic, non-clinical collection of doctor–patient Q&A dialogues covering dental implant care and surgical workflows.

💬 Expected Prompt Format

{
  "conversation": [
    { "from": "patient", "value": "What is the purpose of a healing abutment?" },
    { "from": "doctor", "value": "It helps shape the gum tissue and protect the implant site during healing." }
  ]
}
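
Since the dataset stores turns as patient/doctor pairs, inference code needs to flatten a conversation into a text prompt. The helper below is a hypothetical sketch; the exact prompt template used during training is not documented here:

# Hypothetical helper: flatten a conversation into a plain-text prompt
def build_prompt(conversation):
    role_map = {"patient": "Patient", "doctor": "Doctor"}
    lines = [f"{role_map[turn['from']]}: {turn['value']}" for turn in conversation]
    lines.append("Doctor:")  # prompt the model to answer as the doctor
    return "\n".join(lines)

conversation = [
    {"from": "patient", "value": "What is the purpose of a healing abutment?"},
]
prompt = build_prompt(conversation)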

💻 How to Use the Adapter

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load base model
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")

# Load LoRA adapter
model = PeftModel.from_pretrained(base, "BirdieByte1024/doctor-dental-implant-LoRA-llama3.2-3B")
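
A minimal generation example continuing from the snippet above (the prompt string and sampling settings are illustrative, not tuned values):

# Ask a patient-style question and decode only the newly generated tokens
prompt = "Patient: What is the purpose of a healing abutment?\nDoctor:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))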

✅ Intended Use

  • Domain adaptation for dental and clinical chatbots
  • Offline inference for healthcare-specific assistants
  • Instruction-following aligned with safe, patient-friendly communication

โš ๏ธ Limitations

  • Not a diagnostic tool
  • May hallucinate or oversimplify
  • Based on non-clinical and synthetic data

🛠 Authors

Developed by BirdieByte1024
Fine-tuned using Unsloth and PEFT


📜 License

MIT
