---
license: mit
base_model: meta-llama/Llama-3.2-3B
library_name: peft
tags:
- llama-3.2
- unsloth
- lora
- peft
- fine-tuned
- doctor
- dental
- medical
- instruction-tuning
- adapter
datasets:
- BirdieByte1024/doctor-dental-llama-qa
---

# 🦷 doctor-dental-implant-LoRA-llama3.2-3B

This is a **LoRA adapter** trained on top of [`meta-llama/Llama-3.2-3B`](https://huggingface.co/meta-llama/Llama-3.2-3B) using [Unsloth](https://github.com/unslothai/unsloth) to align the model with **doctor–patient conversations and dental implant-related Q&A**.

The adapter improves the base model's instruction-following and medical dialogue quality within the dental implant domain (e.g., Straumann® surgical workflows).

---

## 🔧 Model Details

- **Base model:** `meta-llama/Llama-3.2-3B`
- **Adapter type:** LoRA via PEFT
- **Framework:** [Unsloth](https://github.com/unslothai/unsloth)
- **Quantization for training:** QLoRA (bitsandbytes 4-bit)
- **Training objective:** Instruction tuning on domain-specific dialogue
- **Dataset:** `BirdieByte1024/doctor-dental-llama-qa`

---
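
The QLoRA setup mentioned above loads the base model in 4-bit before attaching trainable LoRA weights. As a minimal configuration sketch (the exact settings used for this adapter are not published; NF4 with double quantization is simply a common default, not a confirmed detail of this training run):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit quantization config in the QLoRA style
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # NF4 data type, standard for QLoRA
    bnb_4bit_use_double_quant=True,      # quantize the quantization constants too
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",
    quantization_config=bnb_config,
    device_map="auto",
)
```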

## 🧠 Dataset

- [`BirdieByte1024/doctor-dental-llama-qa`](https://huggingface.co/datasets/BirdieByte1024/doctor-dental-llama-qa)
- Includes synthetic doctor–patient chat covering:
  - Straumann® dental implant systems
  - Guided surgery workflows
  - General clinical Q&A

---

## 💬 Expected Prompt Format

```json
{
  "conversation": [
    { "from": "patient", "value": "What is the purpose of a healing abutment?" },
    { "from": "doctor", "value": "It helps shape the gum tissue and protect the implant site during healing." }
  ]
}
```
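
At inference time, a record in this format has to be flattened into a single text prompt. A minimal sketch of one way to do that (the `format_conversation` helper and the `Patient:`/`Doctor:` role labels are illustrative assumptions, not part of the released tooling):

```python
def format_conversation(record: dict) -> str:
    """Flatten a conversation record into a plain-text prompt,
    mapping the dataset's "from" field to a speaker label."""
    labels = {"patient": "Patient", "doctor": "Doctor"}
    lines = [
        f'{labels[turn["from"]]}: {turn["value"]}'
        for turn in record["conversation"]
    ]
    # End with the doctor label so the model continues in the doctor role.
    return "\n".join(lines) + "\nDoctor:"

record = {
    "conversation": [
        {"from": "patient", "value": "What is the purpose of a healing abutment?"}
    ]
}
prompt = format_conversation(record)
```

The resulting string can be passed directly to the tokenizer before generation.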

---

## 💻 How to Use the Adapter

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and tokenizer
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")

# Attach the LoRA adapter
model = PeftModel.from_pretrained(base, "BirdieByte1024/doctor-dental-implant-LoRA-llama3.2-3B")
```

---

## ✅ Intended Use

- Domain adaptation for dental and clinical chatbots
- Offline inference for healthcare-specific assistants
- Safe instruction following aligned with patient communication

---

## ⚠️ Limitations

- Not a diagnostic tool
- May hallucinate or oversimplify clinical details
- Trained on non-clinical, synthetic data

---

## 🛠 Authors

Developed by [BirdieByte1024](https://huggingface.co/BirdieByte1024)
Fine-tuned using Unsloth and PEFT

---

## 📜 License

MIT