Add chat template to processor_config.json
#36 by Rocketknight1 (HF staff) - opened
No description provided.
Awesome, thank you @Rocketknight1!
VictorSanh changed pull request status to merged
Why was this added to processor_config.json, when the chat template normally goes in tokenizer_config.json? When I use this model, it falls back to the default Llama tokenizer chat template, and the output is wrong.
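For readers wondering how the two config files interact, here is a minimal sketch of a plausible resolution order: prefer a chat template stored in processor_config.json (as this PR adds one there), then fall back to tokenizer_config.json. This is an illustration only, not the actual transformers implementation, and the `resolve_chat_template` helper and toy template strings are hypothetical.

```python
from typing import Optional

def resolve_chat_template(processor_config: dict,
                          tokenizer_config: dict) -> Optional[str]:
    """Hypothetical lookup order for illustration: check the
    processor config first, then fall back to the tokenizer config."""
    for config in (processor_config, tokenizer_config):
        template = config.get("chat_template")
        if template:
            return template
    return None

# Toy dicts standing in for the parsed JSON config files.
processor_config = {"chat_template": "{% for m in messages %}{{ m['content'] }}{% endfor %}"}
tokenizer_config = {"chat_template": "<fallback llama template>"}

# The processor-level template wins when both are present.
print(resolve_chat_template(processor_config, tokenizer_config))
```

If the processor config lacked a `chat_template` key, the sketch would fall back to the tokenizer's template, which is the behaviour the comment above describes as going wrong.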
Hi! Have there been any updates on this issue?
Hi @pseudotensor, that's not the expected behaviour - can you make sure you're updated to the latest version of transformers and paste me the code you're using?