llama.cpp help

#1
by urtuuuu - opened

Any ideas how to run it in llama.cpp? The standard way of launching doesn't work: the model won't think properly and emits raw <thought> and </thought> tags.

This doesn't work:

```
llama-server -m LGAI-EXAONE_EXAONE-Deep-7.8B-Q4_K_M.gguf -ngl 99 --temp 0.6
```

Build: llama-b4916-bin-win-vulkan-x64
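One thing worth trying (an assumption, not a confirmed fix for this model): llama-server only applies a Jinja chat template when started with `--jinja`, and EXAONE Deep's template is what wraps the <thought> reasoning turns. A sketch of both variants, assuming the GGUF embeds a chat template and that llama.cpp's built-in EXAONE template matches this model:

```shell
# Variant 1: use the chat template embedded in the GGUF (requires --jinja)
llama-server -m LGAI-EXAONE_EXAONE-Deep-7.8B-Q4_K_M.gguf -ngl 99 --temp 0.6 --jinja

# Variant 2: force llama.cpp's built-in EXAONE template explicitly
# ("exaone3" is an assumption here; check `llama-server --help` for the
# list of built-in template names in your build)
llama-server -m LGAI-EXAONE_EXAONE-Deep-7.8B-Q4_K_M.gguf -ngl 99 --temp 0.6 --chat-template exaone3
```

If the template is the problem, variant 1 with `--jinja` is the less invasive test, since it uses whatever template the GGUF was converted with.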

