Could anyone tell me how to set the prompt template when I use the model in PyCharm with transformers?

#46 by LAKSERS

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Llama has no pad token by default, so reuse the EOS token for padding
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    # torch_dtype=torch.bfloat16,
    device_map="auto",
)

I saw that a contributor to the official Llama 3 repo posted in one of the GitHub issues that the pad token should be set to -1 (the integer ID, not a string, so as to disable it), so the config you have so far is also wrong according to that.
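If you want to follow that suggestion, a minimal sketch, assuming the model object from the snippet above (this is my reading of that issue comment, not official Llama 3 guidance):

# Set the pad token ID to the integer -1 to disable padding,
# instead of reusing the EOS token string (assumption based on
# the GitHub issue mentioned above).
model.config.pad_token_id = -1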

But you can use Hugging Face's chat template support in the tokenizer to format your prompt for you:

https://huggingface.co/docs/transformers/chat_templating
