Hallucination

#2 opened by Farouk09

Hi,

Sometimes when using the model, it outputs some Chinese words. Do you have any ideas on how to solve this? (The prompt is in French.)
Thank you

[Screenshot 2025-05-26 at 20.39.23.png showing the Chinese tokens in the model's output]

Hi @Farouk09, thanks for your message! That's indeed strange, I've never observed such behavior from the model before. Did you by any chance use a quantized version?

Hi @jpacifico , thanks for your reply! I served the model using vLLM with dtype="auto" (which uses FP16 precision for FP32/FP16 models and BF16 for BF16 models), and a temperature setting of 0.3.

The system prompt instructed the LLM to act as "un tuteur pour orienter les étudiants à choisir une formation adéquate" ("a tutor to guide students toward choosing a suitable program").
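For context, here is a minimal sketch of roughly the same setup using vLLM's offline API (I actually serve the model rather than run it offline, and the model id, user question, max_tokens, and exact system prompt wording below are placeholders; only dtype="auto" and temperature=0.3 match my real configuration):

```python
from vllm import LLM, SamplingParams

# dtype="auto" -> FP16 for FP32/FP16 checkpoints, BF16 for BF16 checkpoints
llm = LLM(model="<model-id>", dtype="auto")  # placeholder model id
sampling = SamplingParams(temperature=0.3, max_tokens=512)  # max_tokens is a placeholder

messages = [
    {
        "role": "system",
        # Paraphrase of the actual French system prompt
        "content": "Tu es un tuteur pour orienter les étudiants à choisir une formation adéquate.",
    },
    {"role": "user", "content": "Quelle formation me conseilles-tu ?"},  # placeholder question
]

outputs = llm.chat(messages, sampling)
print(outputs[0].outputs[0].text)
```

The Chinese tokens appear intermittently in otherwise French responses generated with this kind of configuration.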
