Final GLM-4-32B-0414-GGUF fixes!

#8
by shimmyshimmer - opened

Hey guys, we reuploaded the quants with more fixes. Hopefully these are the final fixes! Please use --jinja

If you don't use --jinja, which applies the chat template, then you will get gibberish!

Results should be much better, so let us know how it goes:

./llama.cpp/llama-cli -hf unsloth/GLM-4-32B-0414-GGUF:Q4_K_XL -ngl 99 --jinja
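
If you serve over HTTP instead, the same flag should apply to llama-server (a sketch, assuming a recent llama.cpp build):

./llama.cpp/llama-server -hf unsloth/GLM-4-32B-0414-GGUF:Q4_K_XL -ngl 99 --jinja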

Thank you!

What does --jinja do?

Unsloth AI org

--jinja applies the chat template - if you don't, you will get gibberish

Is it different for other models, like Qwen? I've never used --jinja.

Unsloth AI org

Yes, always use --jinja! LM Studio turns it on by default. I think llama.cpp also enables it by default now, but I'm not sure.
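
You can check whether your build recognizes the flag (a quick check, assuming a POSIX shell and a reasonably recent build):

./llama.cpp/llama-cli --help | grep -i jinja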

Do you know if KoboldCpp uses it by default? You can't use the argument in Kobold ("unrecognized arguments: --jinja"), even though it derives from llama.cpp.

KoboldCpp has its own system, but it is compatible out of the box with this model. When loading, it should mention that it's detected as GLM4.
Do note that we had to make a couple of changes to the way EOS was handled, as well as fix tokenization, several versions ago. If this model works poorly for you, make sure to update to the latest KoboldCpp.
Likewise, I never got confirmation that llama.cpp is 100% stable with this model on Vulkan, so Vulkan may behave oddly.

There are some complexities with this model's way of doing BOS in general; that part is handled internally by KoboldCpp when GLM is detected.

Update: I remembered the why of it all: this model has that odd [gMASK] token in its Jinja template. KoboldCpp has its own GLM4 code that handles this automatically in the background. Even if you don't use it for instruct mode, it should just work automatically.
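
For reference, the rendered prompt for a single user turn looks roughly like this (a sketch from memory; the authoritative template is the one embedded in the GGUF metadata):

[gMASK]<sop><|user|>
Hello<|assistant|>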

Unsloth AI org

Do you know if KoboldCpp uses it by default? You can't use the argument in Kobold ("unrecognized arguments: --jinja"), even though it derives from llama.cpp.

@doc-acula see above ^
