Can *FIM* and *instruct* datasets be mixed for LoRA training?
#23 opened by JacobHsu
I want to build a custom coding-style LoRA on top of the Qwen2.5-Coder-7B-Instruct model.
If I train on the FIM or the instruct dataset separately with LoRA, each achieves decent results on its own task (but not on the other).
However, I want a model like yours that has both FIM and chat ability.
But if I mix both datasets and do SFT LoRA on them at the same time, the results are terrible.
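For reference, this is roughly how I build the mixed dataset (a minimal sketch with the Hugging Face `datasets` library; the file names and column names are placeholders for my own data, while the FIM tokens `<|fim_prefix|>`/`<|fim_suffix|>`/`<|fim_middle|>` and the ChatML tokens are the ones Qwen2.5-Coder uses):

```python
from datasets import load_dataset, interleave_datasets

# Placeholder file names; mine are local JSONL files with the columns used below.
fim_ds = load_dataset("json", data_files="fim_data.jsonl", split="train")
chat_ds = load_dataset("json", data_files="chat_data.jsonl", split="train")

def format_fim(example):
    # Qwen2.5-Coder FIM format: prefix and suffix are given, middle is the target.
    text = (
        f"<|fim_prefix|>{example['prefix']}"
        f"<|fim_suffix|>{example['suffix']}"
        f"<|fim_middle|>{example['middle']}"
    )
    return {"text": text}

def format_chat(example):
    # ChatML-style turns, as used by the Qwen instruct models.
    text = (
        f"<|im_start|>user\n{example['prompt']}<|im_end|>\n"
        f"<|im_start|>assistant\n{example['response']}<|im_end|>\n"
    )
    return {"text": text}

# Drop the original columns so both datasets end up with a single "text"
# column, then interleave them 50/50 into one training set.
mixed = interleave_datasets(
    [
        fim_ds.map(format_fim, remove_columns=fim_ds.column_names),
        chat_ds.map(format_chat, remove_columns=chat_ds.column_names),
    ],
    probabilities=[0.5, 0.5],
    seed=42,
)
```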
Is this result expected?
Or would it be better to continue pre-training with the FIM dataset first and then do SFT with the chat dataset?