# 🦷 doctor-dental-implant-llama3.2-3B-full-model
This model is a fine-tuned version of `meta-llama/Llama-3.2-3B`, trained with the Unsloth framework on a domain-specific instruction dataset focused on medical and dental implant conversations. It is optimized for chat-style reasoning in doctor-patient scenarios, particularly within the domain of Straumann® dental implant systems, as well as general medical question answering.
## 📋 Model Details
- Base model: `meta-llama/Llama-3.2-3B`
- Training framework: Unsloth with LoRA + QLoRA support
- Training format: conversational JSON with `{"from": "patient"/"doctor", "value": ...}` messages (see the sketch after this list)
- Checkpoint format: full merged model, usable as standard HF weights or GGUF (Ollama / llama.cpp)
- Tokenizer: inherited from the base model
- Model size: 3B parameters (efficient for consumer-grade inference)
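The conversational records above are not in the `role`/`content` layout that Transformers chat utilities expect. As a minimal sketch, assuming a `"patient"` → `"user"` and `"doctor"` → `"assistant"` mapping (an assumption inferred from the format above, not a documented convention of this repo), a record can be converted like this:

```python
# Sketch: map the training JSON onto standard chat roles.
# The role mapping below is an assumption, not part of the released artifacts.
ROLE_MAP = {"patient": "user", "doctor": "assistant"}

def to_chat_messages(record: dict) -> list[dict]:
    """Convert {"conversation": [{"from": ..., "value": ...}]} into the
    [{"role": ..., "content": ...}] list expected by chat templates."""
    return [
        {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
        for turn in record["conversation"]
    ]
```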
## 📚 Dataset
This model was trained on a dataset of synthetic and handbook-derived doctor-patient conversations focused on:
- Dental implant systems (e.g. surgical kits, guided procedures)
- General medical Q&A relevant to clinics and telemedicine
- Clinical assistant-style instruction-following
## 💬 Prompt Format
The model expects a chat-style format:
```json
{
  "conversation": [
    { "from": "patient", "value": "What are the advantages of guided implant surgery?" },
    { "from": "doctor", "value": "Guided surgery improves accuracy, safety, and esthetic outcomes." }
  ]
}
```
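If the fine-tuned tokenizer ships a chat template (base Llama 3.2 tokenizers may not define one, in which case the prompt string must be built manually), the conversation can be rendered into the exact string the model sees. A minimal sketch, reusing the `to_chat_messages` helper from the Model Details section:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "BirdieByte1024/doctor-dental-implant-llama3.2-3B-full-model"
)

record = {"conversation": [
    {"from": "patient", "value": "What are the advantages of guided implant surgery?"},
]}

# Render to a plain string to inspect the final prompt layout.
prompt = tokenizer.apply_chat_template(
    to_chat_messages(record), tokenize=False, add_generation_prompt=True
)
print(prompt)
```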
## ✅ Intended Use
- Virtual assistants in dental or medical Q&A
- Instruction-tuned experimentation on health topics
- Local chatbot agents (Ollama / llama.cpp compatible)
## ⚠️ Limitations
- The model is not a medical device or diagnostic tool
- Hallucinations and factual errors may occur
- Fine-tuning used synthetic and handbook-based sources, not real EMR data
## 🧪 Example Prompt
```json
{
  "conversation": [
    { "from": "patient", "value": "What should I expect after a Straumann implant surgery?" },
    { "from": "doctor", "value": "[MODEL RESPONSE HERE]" }
  ]
}
```
## 🚀 Deployment
### Local Use with Hugging Face Transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the merged full-precision checkpoint directly from the Hub.
tokenizer = AutoTokenizer.from_pretrained("BirdieByte1024/doctor-dental-implant-llama3.2-3B-full-model")
model = AutoModelForCausalLM.from_pretrained("BirdieByte1024/doctor-dental-implant-llama3.2-3B-full-model")
```
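A minimal generation sketch building on the load above; it assumes the tokenizer defines a chat template, and the sampling parameters are illustrative rather than tuned values:

```python
messages = [{"role": "user", "content": "What should I expect after implant surgery?"}]

# Tokenize via the chat template and generate on whatever device the model is on.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```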
### GGUF / Ollama / llama.cpp

```bash
ollama run doctor-dental-llama3.2
```

If using a local `Modelfile`, ensure the prompt template matches the Llama 3 chat format (not Alpaca-style); a sketch follows below.
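As a hypothetical sketch (the GGUF filename is a placeholder, and the template is the standard Llama 3 chat layout rather than anything confirmed for this checkpoint), a `Modelfile` might look like:

```
# Placeholder filename -- point this at your exported GGUF file.
FROM ./doctor-dental-implant-llama3.2-3B.gguf

# Llama 3 chat template (not Alpaca-style).
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ .Response }}"""
PARAMETER stop "<|eot_id|>"
```

Register it with `ollama create doctor-dental-llama3.2 -f Modelfile`, then use the `ollama run` command above.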
## ✍️ Author
Created by BirdieByte1024 as part of a medical AI research project using Unsloth and LLaMA 3.2.
## 📄 License
MIT