Update README.md

The previous README was the auto-generated 🤗 transformers model-card template (`library_name: transformers`, `tags: unsloth`), with every section left as "[More Information Needed]"; it is replaced wholesale by the model-specific card below.
# Qwen2.5-1.5B-Instruct Function Calling Model

## Model Description

This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) optimized for function calling. It was trained using GRPO (Group Relative Policy Optimization) on the [NousResearch/hermes-function-calling-v1](https://huggingface.co/datasets/NousResearch/hermes-function-calling-v1) dataset, specifically the `func_calling_singleturn` subset.
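Per the output-format instructions in the system prompt shipped in the Usage section, a well-formed generation reasons first, then calls a tool. The example below is illustrative (hand-written values, not an actual model output):

```xml
<chain_of_thought>
The user wants to book an appointment for patient p123 with dentist d456 on
May 15, 2023 at 2:00 PM, so book_appointment is the right function.
</chain_of_thought>
<tool_call>
{'arguments': {'patient_id': 'p123', 'dentist_id': 'd456', 'preferred_date': '2023-05-15', 'time_slot': '14:00'}, 'name': 'book_appointment'}
</tool_call>
```

When no listed tool fits the query, the model still emits its reasoning, followed by `<tool_call>NO_CALL_AVAILABLE</tool_call>`.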
## Intended Uses

[YET TO FILL]
## Training Details

- **Base Model:** Qwen/Qwen2.5-1.5B-Instruct
- **Training Method:** GRPO (Group Relative Policy Optimization)
- **Training Framework:** Unsloth
- **Dataset:** NousResearch/hermes-function-calling-v1 (`func_calling_singleturn` subset)
- **Quantization:** 4-bit quantization using bitsandbytes (bnb)
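This card does not list the reward functions used during GRPO training. As a hypothetical sketch of the kind of format reward commonly paired with GRPO (scoring completions on whether they follow the `<chain_of_thought>`/`<tool_call>` layout this model targets), `format_reward` and its regex below are illustrative, not the actual training code:

```python
import re

# Hypothetical format reward: 1.0 if a completion is exactly one
# <chain_of_thought> block followed by exactly one <tool_call> block.
FORMAT_RE = re.compile(
    r"^\s*<chain_of_thought>.*?</chain_of_thought>"
    r"\s*<tool_call>.*?</tool_call>\s*$",
    re.DOTALL,
)

def format_reward(completions):
    """Score each completion: 1.0 for the expected XML layout, else 0.0."""
    return [1.0 if FORMAT_RE.match(c) else 0.0 for c in completions]

good = (
    "<chain_of_thought>reasoning</chain_of_thought>\n"
    "<tool_call>{'name': 'book_appointment'}</tool_call>"
)
bad = "<tool_call>{'name': 'book_appointment'}</tool_call>"
print(format_reward([good, bad]))  # [1.0, 0.0]
```

In a GRPO setup such a function would be one of several rewards (format, tag order, argument validity) aggregated per sampled completion.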
## Performance and Limitations

### Strengths

[YET TO FILL]

### Limitations

[YET TO FILL]
## Usage

```python
from unsloth import FastLanguageModel
import torch
from vllm import SamplingParams

max_seq_length = 4096  # Can increase for longer reasoning traces
lora_rank = 32  # Larger rank = smarter, but slower

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Bharatdeep-H/xml_cot_fm_1",
    # model_name = "unsloth/Qwen2.5-1.5B-Instruct",
    max_seq_length = max_seq_length,
    load_in_4bit = True,  # False for LoRA 16bit
    fast_inference = True,  # Enable vLLM fast inference
    max_lora_rank = lora_rank,
    gpu_memory_utilization = 0.6,  # Reduce if out of memory
)

model = FastLanguageModel.get_peft_model(
    model,
    r = lora_rank,  # Choose any number > 0! Suggested 8, 16, 32, 64, 128
    target_modules = [
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],  # Remove QKVO if out of memory
    lora_alpha = lora_rank,
    use_gradient_checkpointing = "unsloth",  # Enable long context finetuning
    random_state = 3407,
)

# Format definitions
FORMAT_PROMPT = """
Respond in the following format:
<chain_of_thought>
...
</chain_of_thought>
<tool_call>
...
</tool_call>
"""

# Tool signatures plus output-format instructions, prepended to the user turn
SYSTEM_MIX_USER_PROMPT = "You are a function calling AI model. You are provided with function signatures within <tools> </tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions.\n\n<tools>[[{'type': 'function', 'function': {'name': 'book_appointment', 'description': 'Books an appointment for a patient with a specific dentist at a given date and time.', 'parameters': {'type': 'object', 'properties': {'patient_id': {'type': 'string', 'description': 'The unique identifier for the patient.'}, 'dentist_id': {'type': 'string', 'description': 'The unique identifier for the dentist.'}, 'preferred_date': {'type': 'string', 'description': 'The preferred date for the appointment.'}, 'time_slot': {'type': 'string', 'description': 'The preferred time slot for the appointment.'}}, 'required': ['patient_id', 'dentist_id', 'preferred_date', 'time_slot']}}}, {'type': 'function', 'function': {'name': 'reschedule_appointment', 'description': 'Reschedules an existing appointment to a new date and time.', 'parameters': {'type': 'object', 'properties': {'appointment_id': {'type': 'string', 'description': 'The unique identifier for the existing appointment.'}, 'new_date': {'type': 'string', 'description': 'The new date for the rescheduled appointment.'}, 'new_time_slot': {'type': 'string', 'description': 'The new time slot for the rescheduled appointment.'}}, 'required': ['appointment_id', 'new_date', 'new_time_slot']}}}, {'type': 'function', 'function': {'name': 'cancel_appointment', 'description': 'Cancels an existing appointment.', 'parameters': {'type': 'object', 'properties': {'appointment_id': {'type': 'string', 'description': 'The unique identifier for the appointment to be canceled.'}}, 'required': ['appointment_id']}}}, {'type': 'function', 'function': {'name': 'find_available_time_slots', 'description': 'Finds available time slots for a dentist on a given date.', 'parameters': {'type': 'object', 'properties': {'dentist_id': {'type': 'string', 'description': 'The unique identifier for the dentist.'}, 'date': {'type': 'string', 'description': 'The date to check for available time slots.'}}, 'required': ['dentist_id', 'date']}}}, {'type': 'function', 'function': {'name': 'send_appointment_reminder', 'description': 'Sends an automated reminder to the patient for an upcoming appointment.', 'parameters': {'type': 'object', 'properties': {'appointment_id': {'type': 'string', 'description': 'The unique identifier for the appointment.'}, 'reminder_time': {'type': 'string', 'description': 'The time before the appointment when the reminder should be sent.'}}, 'required': ['appointment_id', 'reminder_time']}}}]]</tools>\n\nFor each user query, you must:\n\n1. First, generate your reasoning within <chain_of_thought> </chain_of_thought> tags. This should explain your analysis of the user's request and how you determined which function(s) to call, or why no appropriate function is available.\n\n2. Then, call the appropriate function(s) by returning a JSON object within <tool_call> </tool_call> tags using the following schema:\n<tool_call>\n{'arguments': <args-dict>, 'name': <function-name>}\n</tool_call>\n\n3. If you determine that none of the provided tools can appropriately resolve the user's query based on the tools' descriptions, you must still provide your reasoning in <chain_of_thought> tags, followed by:\n<tool_call>NO_CALL_AVAILABLE</tool_call>\n\nRemember that your <chain_of_thought> analysis must ALWAYS precede any <tool_call> tags, regardless of whether a suitable function is available."
# Example end-user query
COMPLETE_SYSTEM_PROMPT = "As the manager of a dental practice, I'm looking to streamline our booking process. I need to schedule an appointment for our patient, John Doe with ID 'p123', with Dr. Sarah Smith, whose dentist ID is 'd456'. Please book this appointment for May 15, 2023, at 2:00 PM. Additionally, I would like to set up an automated reminder for John Doe to ensure he remembers his appointment. Can you book this appointment and arrange for the reminder to be sent out in advance?"

text = tokenizer.apply_chat_template([
    {'role': 'system', 'content': FORMAT_PROMPT},
    {'role': 'user', 'content': SYSTEM_MIX_USER_PROMPT + "\n\nUSER QUERY: " + COMPLETE_SYSTEM_PROMPT}
], tokenize = False, add_generation_prompt = True)

sampling_params = SamplingParams(
    temperature = 0.8,
    top_p = 0.95,
    max_tokens = 1024,
)

output = model.fast_generate(
    text,
    sampling_params = sampling_params,
    # lora_request = model.load_lora("grpo_saved_lora"),
)[0].outputs[0].text

print(output)
```
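The generation can then be turned into an executable call. A minimal sketch of parsing the `<tool_call>` block (the prompt's schema uses Python-literal dicts such as `{'arguments': ..., 'name': ...}`, so `ast.literal_eval` is used instead of `json.loads`; `parse_tool_call` is an illustrative helper, not part of the model's API):

```python
import ast
import re

def parse_tool_call(generation: str):
    """Extract the dict inside <tool_call>...</tool_call>, or None.

    Returns None when no tool_call block is present or the model emitted
    the NO_CALL_AVAILABLE sentinel defined in the system prompt.
    """
    m = re.search(r"<tool_call>\s*(.*?)\s*</tool_call>", generation, re.DOTALL)
    if m is None or m.group(1) == "NO_CALL_AVAILABLE":
        return None
    return ast.literal_eval(m.group(1))

sample = (
    "<chain_of_thought>Book for p123 with d456.</chain_of_thought>\n"
    "<tool_call>{'arguments': {'patient_id': 'p123', 'dentist_id': 'd456'}, "
    "'name': 'book_appointment'}</tool_call>"
)
call = parse_tool_call(sample)
print(call["name"])  # book_appointment
```

The returned dict's `name` and `arguments` can then be dispatched to the matching function, with the result fed back to the model if a multi-turn loop is needed.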