Update tokenizer_config.json
Hi @andrewwa-nvidia, thanks for creating this change.
I gave this a try, and noticed that the template doesn't terminate the string with `eos_token`.
I tried fine-tuning with your template, and the model does generate the expected output, but then keeps generating past it until it hits the maximum number of generation tokens.
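For context, here's roughly how I observed this (a minimal sketch; the model id is a placeholder, not this repo's exact name):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id -- substitute the actual fine-tuned checkpoint.
model_id = "your-org/your-finetuned-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Say hello."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Since the fine-tuning data rendered by the current template never ends a
# turn with eos_token, the model never learns to emit it, and generation
# only stops once max_new_tokens is exhausted.
output = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:]))
```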
In the Llama 3.1 template, for example, they always add `<|eot_id|>` at the end of each turn. So I made a minor modification to your template to try this out, and it seems to fix the issue:
```jinja
{%- if add_generation_prompt -%}{{ '<extra_id_1>Assistant\n' }}{%- else -%}{{- eos_token }}{%- endif -%}
```
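As a quick sanity check (reusing the tokenizer from the sketch above), a fully completed conversation rendered with the modified template now ends with `eos_token`, so the fine-tuning labels include a stopping point:

```python
messages = [
    {"role": "user", "content": "Say hello."},
    {"role": "assistant", "content": "Hello!"},
]
# add_generation_prompt=False takes the else branch above,
# so the rendered string is terminated with eos_token.
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=False
)
assert text.endswith(tokenizer.eos_token)
```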
Let me know what you think about this.
My concern is possible downstream effects on model accuracy, since I don't know whether the model was trained with `eos_token` ending each turn, like Llama 3.1, or on some different format. Let's see what the training team says about this.