Error Occurred During Batch Inference
#4 opened by BigPan98
When performing batch inference using the native Transformers implementation, I encountered the following error:
Asking to pad but the tokenizer does not have a padding token. Please select a token to use as pad_token (tokenizer.pad_token = tokenizer.eos_token e.g.) or add a new pad token via tokenizer.add_special_tokens({'pad_token': '[PAD]'}).
It seems that the pad token is not specified in the tokenizer configuration. Which pad token should I use? Should I keep it consistent with Mistral?
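Roughly what I'm running (simplified; the model path is just a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path/to/model"  # placeholder for the actual checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompts = ["Hello, how are you?", "Summarize the following paragraph: ..."]

# padding=True requires tokenizer.pad_token, which is unset -> raises the error above
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```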
@BigPan98 Thank you for your interest in our models. We recommend using vLLM for decoding, so there is no need to specify a pad token. If necessary, you can use "</s>" as the padding token.
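A minimal sketch of both options (the model path is a placeholder; left padding and "</s>" being in the vocabulary, as in Mistral-style tokenizers, are assumptions):

```python
# Option 1: vLLM handles batching internally, so no pad token is needed
from vllm import LLM, SamplingParams

llm = LLM(model="path/to/model")  # placeholder path
params = SamplingParams(max_tokens=64)
for out in llm.generate(["Hello, how are you?", "Tell me a joke."], params):
    print(out.outputs[0].text)

# Option 2: Transformers batch inference with "</s>" as the padding token
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/model")
tokenizer.pad_token = "</s>"       # assumes "</s>" exists in the vocabulary (EOS)
tokenizer.padding_side = "left"    # left padding is typical for decoder-only generation
model = AutoModelForCausalLM.from_pretrained("path/to/model", device_map="auto")

inputs = tokenizer(["Hello, how are you?", "Tell me a joke."],
                   return_tensors="pt", padding=True).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64,
                         pad_token_id=tokenizer.pad_token_id)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

Since the padded positions are masked out via the attention mask, the exact choice of pad token does not affect the generated text; "</s>" simply keeps it consistent with the existing special tokens.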