Chat template for llama.cpp? #2
by JankyMudFart - opened
Hi there, thanks a lot for generating this GGUF!
I'm trying to run this with llama-server, but it just generates an endless loop of gibberish tags. The same thing happens with bartowski/nvidia_NVIDIA-Nemotron-Nano-9B-v2-GGUF and bartowski/nvidia_NVIDIA-Nemotron-Nano-12B-v2-GGUF:
```
im_end_start<_enduserim>_start|imassim_start<ass userass_end>
ass|<>
hi
_starthiuseristant_end_start>
<
istanthihiass_startass>istant|<im<
im
im
_endistant_start>
<<ass_end>>
userhiassuseristant<hi_start>userhiim
imhi
_start_endistant_end
_endistant|istant_end_start>im<<
assistant_end
_endistant<
_start_starthi<>|istant<userim_endhi
istant>
ass_end_enduserasshiassassim_end
_start>
|
istant_startassim|_end_startass|hi<>
ass>
|_endistant<
```
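For what it's worth, the mangled `im_start`/`im_end`/`user`/`assistant` fragments above look like a ChatML template whose special tokens aren't being emitted whole, so overriding the GGUF's embedded template might be a usable workaround. A sketch of what I mean (the flag names are llama-server's `--chat-template`, `--jinja`, and `--chat-template-file`; the model filename and the ChatML guess are assumptions on my part, not something confirmed for these GGUFs):

```shell
# Sketch of a workaround, assuming the GGUF's embedded chat template is
# the problem. The model filename below is illustrative.
#
# Option 1: force one of llama.cpp's built-in templates instead of the
# embedded one (the garbled output above suggests ChatML):
#   llama-server -m nemotron-nano-9b-v2.gguf --chat-template chatml
#
# Option 2: supply an explicit ChatML Jinja template file:
cat > chatml.jinja <<'EOF'
{%- for message in messages -%}
<|im_start|>{{ message['role'] }}
{{ message['content'] }}<|im_end|>
{% endfor -%}
{%- if add_generation_prompt -%}
<|im_start|>assistant
{% endif -%}
EOF
# then run:
#   llama-server -m nemotron-nano-9b-v2.gguf --jinja --chat-template-file chatml.jinja
```

If the output is still garbled with an explicit template, the issue is more likely the tokenizer metadata in the GGUF itself than the template.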