Add chat template to processor_config.json

#36
opened by Rocketknight1
No description provided.

awesome thank you @Rocketknight1 !

VictorSanh changed pull request status to merged

Why is this added to processor_config.json when normally it goes into the tokenizer_config.json file?

https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/blob/main/tokenizer_config.json#L42

When I use this model, it falls back to the default Llama chat template, which produces incorrect prompts.
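For context, a chat template is a Jinja template that turns a list of `{"role": ..., "content": ...}` messages into a prompt string; if none is found in the config, transformers falls back to a default. A minimal sketch of how such a template renders, using `jinja2` directly (the template string below is illustrative and Mistral-style, not the model's actual template):

```python
from jinja2 import Template

# Illustrative chat template (NOT the model's real one): wraps user
# messages in [INST] ... [/INST] tags, Mistral-style.
CHAT_TEMPLATE = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}"
    "[INST] {{ message['content'] }} [/INST]"
    "{% else %}"
    "{{ message['content'] }}"
    "{% endif %}"
    "{% endfor %}"
)

messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there."},
]

# Render the conversation into a single prompt string.
rendered = Template(CHAT_TEMPLATE).render(messages=messages)
print(rendered)  # [INST] Hello! [/INST]Hi there.
```

In practice you would not render the template yourself; `tokenizer.apply_chat_template(messages, tokenize=False)` does this with the template stored in the tokenizer config, which is why its location (tokenizer_config.json vs. processor_config.json) matters.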

Hi! Have there been any updates on this issue?

Hi @pseudotensor, that's not the expected behaviour - can you make sure you're on the latest version of transformers and paste the code you're using?
