Output Token Repetition – Produces Gibberish

#7
by Ranjit - opened

[screenshot: model output with repeated tokens / gibberish]

Did something go wrong here? It seems to work fine on the Playground, though. Aren't both supposed to be the same?

[screenshot: the same prompt on the Playground]

Sarvam AI org

Thanks for reporting the issue. We found a minor issue in the README, which we have now corrected.

When running HF inference with the model.generate method, we need to pass add_generation_prompt=True to the tokenizer.apply_chat_template method.
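
For reference, here is a minimal sketch of the corrected usage (the model id below is a placeholder; substitute the actual checkpoint from this repo):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sarvamai/<model-id>"  # placeholder, use the checkpoint from this repo
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Who are you?"}]

# add_generation_prompt=True appends the assistant-turn header, so the model
# starts generating a reply instead of continuing the user turn, which can
# otherwise lead to repeated tokens / gibberish.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```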

Please try it out and let us know if you still face issues.

Sarvam AI org
edited 1 day ago

The output from the model now seems to be the same as what you see on the playground.

rahular changed discussion status to closed

Yes, it is. Thanks!
