Chat template
Hi,
The chat/multi-turn template in the tokenizer suggests this:
"<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s>[INST] I'd like to show off how chat templating works! [/INST]"
Is this correct? Which template is this? The Mistral one is different: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2#instruction-format
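For reference, here is a quick way to see exactly what the bundled template renders, so it can be compared against Mistral's documented format (a minimal sketch, assuming a transformers version with chat templating):

```python
from transformers import AutoTokenizer

# Load the tokenizer shipped in this repo and render its chat template.
tok = AutoTokenizer.from_pretrained("152334H/miqu-1-70b-sf")

messages = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
]

# tokenize=False returns the raw prompt string instead of token ids.
print(tok.apply_chat_template(messages, tokenize=False))
```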
That may be a real problem. I simply copied the Llama tokenizer without being aware of the chat template feature. Do you know if copying the Mistral 7B Instruct v0.2 tokenizer into the repo would fix this?
I am personally using this via TGI, so I have full control over the prompt template I use for system/user/assistant. For those using the HF pipeline, this file could be modified to carry the correct chat_template: https://huggingface.co/152334H/miqu-1-70b-sf/blob/main/tokenizer_config.json
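Something like this might do it — a sketch that assumes the Mistral-7B-Instruct-v0.2 Jinja template (pasted from that repo's tokenizer_config.json) is in fact the right one, and a transformers version recent enough to save chat_template:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("152334H/miqu-1-70b-sf")

# Assumption: the Mistral-7B-Instruct-v0.2 template is the correct format.
tok.chat_template = (
    "{{ bos_token }}{% for message in messages %}"
    "{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}"
    "{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}"
    "{% endif %}"
    "{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}"
    "{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token }}"
    "{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}"
    "{% endif %}{% endfor %}"
)

# save_pretrained writes the template back into tokenizer_config.json.
tok.save_pretrained("miqu-1-70b-sf-fixed")
```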
But if we are not even sure whether it should be the Mistral or the Llama-2 tokenizer, I guess we need to try both and run an eval.
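Before a full eval, a cheap first pass is to render the same history through both candidate templates and compare the strings. A sketch, assuming both repos below are accessible and ship their own chat templates:

```python
from transformers import AutoTokenizer

history = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello! How can I help?"},
    {"role": "user", "content": "What was the first thing I said?"},
]

# Candidate templates: an ungated Llama-2 chat mirror vs. Mistral's instruct model.
for repo in ("NousResearch/Llama-2-7b-chat-hf", "mistralai/Mistral-7B-Instruct-v0.2"):
    tok = AutoTokenizer.from_pretrained(repo)
    # repr() makes the whitespace around [INST]/[/INST] visible.
    print(repo)
    print(repr(tok.apply_chat_template(history, tokenize=False)))
```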
I think it's reasonable to assume it has to be using Mistral's format, as this model was trained with their SFT/DPO pipeline, and their documentation for mistral-medium formatting shows no changes versus their other models.
I will put a notice on the README.
Isn't the Mistral format the same as this repo's?
Yes, Mistral 7B has the exact same chat template. But the CEO said they made this model a long time ago, in the early days, so it got me wondering whether it should be Mistral or Llama-2. In any case, the instruct template works perfectly. I will run some multi-turn chat tests to see which one makes fewer errors in recalling the chat history.
What I mean is that the chat template in this repo is copied from NousResearch/Llama-2-7b-hf. If you are saying the template is Mistral's, then it must be the same as Llama-2's.
I have tested the current template (same as Mistral 7B) and it seems to be working. I guess this is based on Mistral and not on the template from Mixtral (which is slightly different, with spaces in different places).
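If anyone wants to see where the spaces differ, the raw Jinja templates can be printed directly (a sketch; assumes both mistralai repos are accessible):

```python
from transformers import AutoTokenizer

# Print the raw Jinja chat templates to spot the spacing differences.
for repo in ("mistralai/Mistral-7B-Instruct-v0.2", "mistralai/Mixtral-8x7B-Instruct-v0.1"):
    print(repo)
    print(AutoTokenizer.from_pretrained(repo).chat_template)
```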
Thanks @152334H
I have been fighting with whether it should be " [/INST] \n", just " [/INST]\n", or now "[/INST]REPLY</s>". The model replies about the same to all of them, but using the more "correct" one leads to less stray commentary or "NOTE:" additions.
@jackboot which template worked best in your tests for multi-turn conversations?
I used both the Mixtral and now the Mistral spacing, plus ChatML. I'm leaving it on Mistral for now, until I see a reason to change.