provide int4 version pls

#2 opened by Josh1026

Could you provide an int4 quantized version, please?

Here is an AWQ quantized version👋: https://modelscope.cn/models/swift/QwenLong-L1-32B-AWQ
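In case it helps, here is a minimal sketch of loading that AWQ checkpoint with Transformers after fetching it from ModelScope (this assumes the `modelscope`, `transformers`, and `autoawq` packages are installed; the prompt and generation settings below are just illustrative):

```python
# Sketch: download the AWQ checkpoint from ModelScope and load it with
# Transformers (AWQ checkpoints need the `autoawq` package for the kernels).
from modelscope import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = snapshot_download("swift/QwenLong-L1-32B-AWQ")

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map="auto",   # spread layers across available GPUs
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
)

# Illustrative prompt; replace with your own long-context input.
messages = [{"role": "user", "content": "Summarize the attached report."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```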

Thanks. Does anyone have this packaged as a single .gguf file that can be run with llama.cpp?

Tongyi-Zhiwen org

> Thanks. Does anyone have this packaged as a single .gguf file that can be run with llama.cpp?

Here is a Q4_K_M version (https://huggingface.co/mradermacher/QwenLong-L1-32B-GGUF/blob/main/QwenLong-L1-32B.Q4_K_M.gguf) from the community.
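If it helps, here is a rough sketch of running that file via llama-cpp-python (the Python bindings for llama.cpp). The repo and file names come from the link above; the context size and GPU offload settings are only examples and depend on your hardware:

```python
# Sketch: pull the Q4_K_M GGUF from the Hugging Face repo and run it with
# llama-cpp-python. Adjust n_ctx / n_gpu_layers to fit your RAM and VRAM.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/QwenLong-L1-32B-GGUF",
    filename="QwenLong-L1-32B.Q4_K_M.gguf",
    n_ctx=32768,      # example context window for long-context use
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

# Illustrative prompt; replace with your own document.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize this long document: ..."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```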

Thank you! :-)
