Let model decide to respond with "assistant" or "function_call"
Hi community,
Is it possible to let the model decide whether to respond with the "assistant" or the "function_call" role? Currently, with add_generation_prompt set to False, it only ever responds with the "assistant" role.
Howdy.
Yes, I actually tried doing this, and it would be possible if you wanted to do a major fine-tuning.
As it is, the instruct model (which has lots of useful fine-tuning built in) always expects the assistant prompt to be there at the start of the response - so that's how I've left things, so as not to majorly change the model's tuning.
As you point out, the model only responds if you have added that generation prompt.
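As a minimal sketch of what that looks like in practice (assuming a transformers tokenizer with a chat template; the model id below is a placeholder, not the actual repo):

```python
from transformers import AutoTokenizer

# Placeholder checkpoint - substitute the actual function-calling model id.
tokenizer = AutoTokenizer.from_pretrained("your-org/your-function-calling-model")

messages = [{"role": "user", "content": "What's the weather in Dublin?"}]

# add_generation_prompt=True appends the assistant header to the prompt,
# which is what cues the model to respond (possibly with a function-call JSON).
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```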
Hi, thanks for your reply. Would I need to acquire a large dataset to do the fine-tuning?
Also, I only have a single RTX 4090.
Thanks,
Toby.
What specifically is the problem you are trying to solve?
You should be able to detect whether there's a function call just by checking for JSON in the response, so re-tuning the model to respond with a "function_call" role isn't needed.
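A rough sketch of that check - assuming the function-call JSON carries a "name" field, which is an assumption about this model's output format, not a guarantee:

```python
import json

def try_parse_function_call(response_text):
    """Return the parsed call dict if the response looks like a
    function-call JSON, otherwise None."""
    try:
        call = json.loads(response_text.strip())
    except json.JSONDecodeError:
        return None
    # Treating "name" as the marker field is an assumption about the schema.
    if isinstance(call, dict) and "name" in call:
        return call
    return None

# Route on the parse result instead of on a special role:
print(try_parse_function_call('{"name": "get_weather", "arguments": {"city": "Dublin"}}'))
print(try_parse_function_call("It's sunny in Dublin."))  # -> None, plain assistant text
```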
But according to the example, only responding as "function_call" is supposed to trigger a JSON output; responding as "assistant" isn't reliable.
Special roles should have dedicated usage, like "function_metadata" does.
Thanks so much for the help.
Actually, the model will respond (via the assistant role) with a JSON when it is appropriate (at least, that is what it is supposed to do).
I know it's a bit confusing, because I'm then saying to take that response and send it back in with a "function_call" role (which really just indicates to the language model that it's a function call).
To say that once more:
- When you send in a user message, there will be an assistant role that follows, because of add_generation_prompt. This is correct and will elicit a function call (JSON) if needed.
- Once you have this JSON AND you have the function response (which you get programmatically), you should feed them back in using the function_call and function_response roles. These aren't "real" roles per se, but they indicate to the model that these are function calls and responses. This helps the model properly make use of the info when giving the next response (for which, btw, add_generation_prompt will again add an assistant role, according to the chat template). There's a concrete sketch of this round trip below.
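Here's a hedged sketch of the message list on that second pass. The function name, arguments, and the function_metadata content are made up for illustration; the roles are the ones discussed above, and the model id is again a placeholder:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-org/your-function-calling-model")  # placeholder

messages = [
    # Tool schema shown to the model up front, via the dedicated role
    # mentioned earlier (content here is illustrative only).
    {"role": "function_metadata",
     "content": '{"name": "get_weather", "parameters": {"city": "string"}}'},
    {"role": "user", "content": "What's the weather in Dublin?"},
    # The JSON the model produced as its assistant response, re-labelled
    # as function_call when feeding it back in.
    {"role": "function_call",
     "content": '{"name": "get_weather", "arguments": {"city": "Dublin"}}'},
    # The result you obtained by actually running the function yourself.
    {"role": "function_response",
     "content": '{"temperature_c": 14, "conditions": "light rain"}'},
]

# add_generation_prompt=True again appends the assistant header, so the
# model now answers the user using the function result.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
```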
@RonanMcGovern I kinda got it now! So it's basically just generating under the normal role system, but when feeding the response back I should change the role manually to function_call! :O Now it makes sense. Gonna try, thx!! <3