---
library_name: transformers
tags:
- unsloth
- trl
- sft
license: mit
datasets:
- syubraj/medical-chat-phi-3.5-instruct-1k
language:
- en
base_model:
- microsoft/Phi-3.5-mini-instruct
pipeline_tag: text-generation
---
# Phi-3.5 Mini Instruct Medical Chat (Unsloth)
The **MedicalChat-Phi-3.5-mini-instruct** fine-tuned model simulates doctor-patient conversations, offering medical consultations and suggestions in response to patient queries. Its accuracy may be limited in real-world scenarios, as the training dataset was relatively small (about 1,000 examples).
## Model Overview
- **Developed by:** [syubraj](https://huggingface.co/syubraj)
- **Model Type:** Causal Language Model (CausalLM)
- **Language:** English
- **License:** MIT License
- **Fine-tuned From:** [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)
---
## Uses
### Direct Use
```bash
pip install unsloth
```
```python
from unsloth import FastLanguageModel

# Load the fine-tuned model in 4-bit to reduce GPU memory usage
model, tokenizer = FastLanguageModel.from_pretrained(
    "syubraj/Phi3.5-medicalchat-unsloth",
    max_seq_length = 1024,
    load_in_4bit = True,
    dtype = None,  # auto-detects float16/bfloat16 from the GPU
)

user_query = "<Your medical query here>"
system_prompt = """You are a trusted AI-powered medical assistant. Analyze patient queries carefully and provide accurate, professional, and empathetic responses. Prioritize patient safety, adhere to medical best practices, and recommend consulting a healthcare provider when necessary."""

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_query},
]

# Build the prompt from the tokenizer's chat template
prompt = tokenizer.apply_chat_template(messages, tokenize = False, add_generation_prompt = True)

# Switch the model into optimized inference mode
FastLanguageModel.for_inference(model)

# Tokenize the prompt and move it to the GPU
inputs = tokenizer(prompt, return_tensors = "pt").to("cuda")

# Generate a response; adjust `max_new_tokens` to the required output length
outputs = model.generate(**inputs, max_new_tokens = 256, use_cache = True)
print(tokenizer.batch_decode(outputs)[0])
```
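Note that `batch_decode` returns the prompt together with the generated reply. A minimal sketch (not part of the original card) for decoding only the newly generated tokens:

```python
# Drop the prompt tokens so only the model's reply is decoded
generated_tokens = outputs[0][inputs["input_ids"].shape[1]:]
reply = tokenizer.decode(generated_tokens, skip_special_tokens = True)
print(reply)
```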
### Training Results
Training loss, logged every 10 steps during fine-tuning:
| Step | Training Loss |
|------|--------------|
| 10 | 2.53 |
| 20 | 2.20 |
| 30 | 1.95 |
| 40 | 2.01 |
| 50 | 1.97 |
| 60 | 2.02 |
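The `unsloth`, `trl`, and `sft` tags indicate supervised fine-tuning on the listed dataset. The sketch below is a hypothetical reconstruction of such a setup using TRL's `SFTTrainer` (API as in earlier TRL releases) with LoRA adapters; the hyperparameters and the `text` column name are assumptions, not the actual training config. Only `max_steps = 60` and `logging_steps = 10` are inferred from the loss table above.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit
model, tokenizer = FastLanguageModel.from_pretrained(
    "microsoft/Phi-3.5-mini-instruct",
    max_seq_length = 1024,
    load_in_4bit = True,
)

# Attach LoRA adapters (illustrative rank/alpha, Unsloth's usual target modules)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("syubraj/medical-chat-phi-3.5-instruct-1k", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",  # assumed column name; check the dataset schema
    max_seq_length = 1024,
    args = TrainingArguments(
        per_device_train_batch_size = 2,   # illustrative
        gradient_accumulation_steps = 4,   # illustrative
        learning_rate = 2e-4,              # illustrative
        max_steps = 60,                    # matches the 60 logged steps above
        logging_steps = 10,                # matches the logging interval above
        output_dir = "outputs",
    ),
)
trainer.train()
```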