---
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
language:
- en
library_name: transformers
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE
pipeline_tag: robotics
tags:
- code
- chat
- qwen
- qwen-coder
- agent
- robotics
---

***Tiny-Agent-α*** is an extension of Dria-Agent-a, trained on top of the [Qwen2.5-Coder](https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f) series for use on edge devices. These models are carefully fine-tuned with quantization-aware training to minimize performance degradation after quantization.

Tiny-Agent-α employs ***Pythonic function calling***: the LLM interacts with the provided tools and outputs its actions as blocks of Python code. This method was inspired by prior work including, but not limited to, [DynaSaur](https://arxiv.org/pdf/2411.01747), [RLEF](https://arxiv.org/pdf/2410.02089), [ADAS](https://arxiv.org/pdf/2408.08435) and [CAMEL](https://arxiv.org/pdf/2303.17760). This style of function calling has a few advantages over traditional JSON-based function calling methods:

1. **One-shot Parallel Multiple Function Calls:** The model can utilize many synchronous processes in one chat turn to arrive at a solution, something that would require other function calling models multiple turns of conversation.
2. **Free-form Reasoning and Actions:** The model provides reasoning traces freely in natural language and the actions in between \`\`\`python \`\`\` blocks, as it already tends to do without special prompting or tuning. This mitigates the possible performance loss caused by imposing specific formats on LLM outputs, discussed in [Let Me Speak Freely?](https://arxiv.org/pdf/2408.02442)
3. **On-the-fly Complex Solution Generation:** The solution provided by the model is essentially a Python program, with the exclusion of some "risky" builtins like `exec`, `eval` and `compile` (see the full list in **Quickstart** below). This enables the model to implement custom complex logic with conditionals and synchronous pipelines (using the output of one function in the next function's arguments), which would not be possible with current JSON-based function calling methods (as far as we know).
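As an illustration, a single model turn can branch on a condition and pipe one tool's output into another's arguments, which JSON-based calling would need multiple turns for. A minimal sketch with hypothetical mock tools (the function names and signatures below are illustrative, not part of any package):

```python
# Hypothetical tools; in practice these would be user-provided functions.
def get_free_slots(day: str) -> list[str]:
    """Return free hour-long slots for a day (mock implementation)."""
    return ["09:00", "14:00"]

def book_meeting(day: str, start_time: str, title: str) -> str:
    """Book a meeting and return a confirmation id (mock implementation)."""
    return f"{day}-{start_time}-{title}"

# What a single model turn might emit: conditional logic plus a pipeline,
# where the first tool's output feeds the second tool's arguments.
slots = get_free_slots("2025-01-15")
if slots:
    confirmation = book_meeting("2025-01-15", slots[0], "standup")
else:
    confirmation = None
```

The whole plan executes in one turn; an equivalent JSON-based flow would need one turn to fetch the slots and a second turn to book one.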

## Quickstart

You can use tiny-agents easily with the dria_agent package:

````bash
pip install dria_agent
````

This package handles models, tools, code execution, and backends (mlx, ollama, and transformers are supported).

### Usage 

Decorate functions with `@tool` to expose them to the agent.

````python
from dria_agent import tool

@tool
def check_availability(day: str, start_time: str, end_time: str) -> bool:
    """
    Checks if a given time slot is available.

    :param day: The date in "YYYY-MM-DD" format.
    :param start_time: The start time of the desired slot (HH:MM format, 24-hour).
    :param end_time: The end time of the desired slot (HH:MM format, 24-hour).
    :return: True if the slot is available, otherwise False.
    """
    # Mock implementation
    if start_time == "12:00" and end_time == "13:00":
        return False
    return True
````

Create an agent with custom tools:

```python
from dria_agent import ToolCallingAgent

agent = ToolCallingAgent(
    tools=[check_availability]
)
```

Use `agent.run(query)` to execute tasks with tools.

```python
execution = agent.run("Check my calendar for tomorrow noon", print_results=True)
```

An example output:
````
let me help you check your availability for a 1-hour meditation session       
starting at noon tomorrow.                                                    
                                                                                
Step-by-step reasoning:                                                       
 1. We need to check availability for a specific time slot (noon)              
 2. The duration is 1 hour, so we'll use the same start and end times          
 3. Since it's tomorrow, we should format the date as "YYYY-MM-DD"             
 4. Use the check_availability() function with these parameters                
                                                                                
Here's the code to check your availability:                                   
                                                                                
```python                                                                     
tomorrow = (datetime.now() + timedelta(days=1)).strftime("%Y-%m-%d")          
start_time = "12:00"  # Noon in 24-hour format                                
end_time = "13:00"   # One hour after noon                                    
                                                                                
availability = check_availability(tomorrow, start_time, end_time)             
```                                                                           
                                                                                
The code will:                                                                
- Calculate tomorrow's date using datetime and timedelta                      
- Set the time slot to noon (12:00) for 1 hour duration                       
- Check if this time slot is available using the check_availability function  
                                                                                
The availability variable will contain True if you're available, or False if  
not. 
````

If using dria_agent, system prompts and tooling work out of the box—no extra setup needed. 

You can use Tiny-Agent-a-3B directly with transformers:

````python
import json
from typing import Any, Dict, List
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_name = "driaforall/Tiny-Agent-a-3B"
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# System prompt for optimal performance
SYSTEM_PROMPT = """
You are an expert AI assistant that specializes in providing Python code to solve the task/problem at hand provided by the user.

You can use Python code freely, including the following available functions:

<|functions_schema|>
{{functions_schema}}
<|end_functions_schema|>

The following dangerous builtins are restricted for security:
- exec
- eval
- execfile
- compile
- importlib
- input
- exit

Think step by step and provide your reasoning, outside of function calls.
You can write Python code and use the available functions. Provide all your Python code in a SINGLE markdown code block.

DO NOT use print() statements AT ALL. Avoid mutating variables whenever possible.
""".strip()

# Example function schema (define the functions available to the model)
FUNCTIONS_SCHEMA = '''
def check_availability(day: str, start_time: str, end_time: str) -> bool:
    """
    Checks if a given time slot is available.

    :param day: The date in "YYYY-MM-DD" format.
    :param start_time: The start time of the desired slot (HH:MM format, 24-hour).
    :param end_time: The end time of the desired slot (HH:MM format, 24-hour).
    :return: True if the slot is available, otherwise False.
    """
    pass
'''

# Format system prompt
system_prompt = SYSTEM_PROMPT.replace("{{functions_schema}}", FUNCTIONS_SCHEMA)

# Example user query
USER_QUERY = "Check if I'm available for an hour long meditation at tomorrow noon."

# Format messages for the model
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": USER_QUERY},
]

# Prepare input for the model
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate response
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Extract new generated tokens
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

# Decode response
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
````

## Evaluation & Performance

We evaluate the model on the **Dria-Pythonic-Agent-Benchmark ([DPAB](https://github.com/firstbatchxyz/function-calling-eval))**, a benchmark we curated through synthetic data generation, model-based validation, filtering, and manual selection to evaluate LLMs on their Pythonic function calling ability across multiple scenarios and tasks. See the [blog post](https://huggingface.co/blog/andthattoo/dpab-a) for more information.

Below are the current DPAB results for various models **(strict)**:

| Model Name                      | Pythonic | JSON |
|---------------------------------|----------|------|
| **Closed Models**               |          |      |
| Claude 3.5 Sonnet              | 87       | 45   |
| gpt-4o-2024-11-20              | 60       | 30   |
| **Open Models**                 |          |      |
| **> 100B Parameters**           |          |      |
| DeepSeek V3 (685B)             | 63       | 33   |
| MiniMax-01                     | 62       | 40   |
| Llama-3.1-405B-Instruct        | 60       | 38   |
| **> 30B Parameters**            |          |      |
| Qwen-2.5-Coder-32b-Instruct    | 68       | 32   |
| Qwen-2.5-72b-instruct          | 65       | 39   |
| Llama-3.3-70b-Instruct         | 59       | 40   |
| QwQ-32b-Preview                | 47       | 21   |
| **< 20B Parameters**           |          |      |
| Phi-4 (14B)                    | 55       | 35   |
| Qwen2.5-Coder-7B-Instruct      | 44       | 39   |
| Qwen-2.5-7B-Instruct           | 47       | 34   |
| **Tiny-Agent-a-3B**               | **72**       | 34   |
| Qwen2.5-Coder-3B-Instruct      | 26       | 37   |
| **Tiny-Agent-a-1.5B**               | **73**       | 30   |

## Citation

```
@misc{Dria-Agent-a,
      url={https://huggingface.co/blog/andthattoo/dria-agent-a},
      title={Dria-Agent-a},
      author={andthattoo and Atakan Tekparmak}
}
```

Code: https://github.com/firstbatchxyz/dria-agent