Classic Q4_K_M?

#7
by EricTri - opened

Hi,

These GGUF quants use the newer UD 2.0 quantization method and are incompatible with KTransformers.

https://github.com/kvcache-ai/ktransformers/issues/1195

Would it be possible to upload a Q4_K_M quant using the old quantization method?
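
In case it helps to check which mix a given file actually uses, the per-tensor quant types can be read straight from the GGUF header. A minimal sketch, assuming the `gguf` Python package that ships with llama.cpp; the filename is a placeholder, not one of the actual shards here:

```python
from collections import Counter
from gguf import GGUFReader

# Placeholder path: point this at one of the downloaded GGUF shards.
reader = GGUFReader("DeepSeek-R1-Q4_K_M-00001-of-00011.gguf")

# Count how many tensors use each quantization type (e.g. Q4_K, Q6_K, IQ types).
counts = Counter(t.tensor_type.name for t in reader.tensors)
for qtype, n in counts.most_common():
    print(f"{qtype:>8s}  {n} tensors")
```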

Unsloth AI org

A lot of them are still uploading! Please wait for our official announcement! <3

Amazing! Thanks for all your hard work

After downloading it for 93 minutes, I got this with llama.cpp v5124 and Q4_K_M:

93.57.161.372 E llama_model_load: error loading model: check_tensor_dims: tensor 'blk.0.attn_q_b.weight' has wrong shape; expected  1536, 73728, got  1536, 24576,     1,     1
93.57.161.382 E llama_model_load_from_file_impl: failed to load model

When I attempted to run it again, it found a different version on HF and started downloading it again:

0.00.870.633 W common_download_file_single: ETag header is different ("8ad0d2516c8115fdd0a83085b66211307e4fab8f3fbdf48e373d5f6527266957" != "e10f1d54ba46812c08260ec3b560ff14c806f8ffb3cd811a227cea0d77cc0f4f"): triggering a new download

I'm guessing there was some problem with the initial version and it has been re-uploaded?

OK, from the other threads it looks like I need to update my llama.cpp to a more recent version to get rid of that error. I'll do that after the second download completes.
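
For the re-download itself, one option is to let huggingface_hub fetch the shards: it caches files and re-checks them against the repo's current revision, so an ETag change like the one above should only pull the files that actually changed. A rough sketch; the repo id and filename pattern are placeholders, not the exact names from this thread:

```python
from huggingface_hub import snapshot_download

# Placeholder repo id / pattern: substitute the actual repo and quant you want.
local_dir = snapshot_download(
    repo_id="unsloth/SOME-MODEL-GGUF",
    allow_patterns=["*Q4_K_M*"],
)
print("Shards are under:", local_dir)
```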

With the latest llama.cpp version (5537) it loaded, but it took 50 minutes on my system because it loaded the model with a single thread, one layer at a time, and repacked tensors for some reason:

45.09.074.148 D repack: repack tensor blk.58.ffn_up_exps.weight with q4_K_8x8
45.28.689.530 D repack: repack tensor blk.58.ffn_gate_shexp.weight with q4_K_8x8
45.28.770.822 D repack: repack tensor blk.58.ffn_up_shexp.weight with q4_K_8x8
45.28.852.826 D repack: repack tensor blk.59.attn_q_a.weight with q4_K_8x8
45.28.920.638 D repack: repack tensor blk.59.attn_q_b.weight with q4_K_8x8
45.29.154.179 D repack: repack tensor blk.59.attn_kv_a_mqa.weight with q4_K_8x8
45.29.178.712 D repack: repack tensor blk.59.attn_output.weight with q4_K_8x8
45.30.517.695 D repack: repack tensor blk.59.ffn_gate_exps.weight with q4_K_8x8
45.49.762.411 D repack: repack tensor blk.59.ffn_up_exps.weight with q4_K_8x8
46.08.744.034 D repack: repack tensor blk.59.ffn_gate_shexp.weight with q4_K_8x8
46.08.813.393 D repack: repack tensor blk.59.ffn_up_shexp.weight with q4_K_8x8
46.08.881.724 D repack: repack tensor blk.60.attn_q_a.weight with q4_K_8x8
46.08.934.546 D repack: repack tensor blk.60.attn_q_b.weight with q4_K_8x8
46.09.107.476 D repack: repack tensor blk.60.attn_kv_a_mqa.weight with q4_K_8x8
46.09.131.141 D repack: repack tensor blk.60.attn_output.weight with q4_K_8x8
46.09.762.706 D repack: repack tensor blk.60.ffn_gate_exps.weight with q4_K_8x8
46.29.210.163 D repack: repack tensor blk.60.ffn_up_exps.weight with q4_K_8x8
46.48.630.578 D repack: repack tensor blk.60.ffn_gate_shexp.weight with q4_K_8x8
46.48.712.425 D repack: repack tensor blk.60.ffn_up_shexp.weight with q4_K_8x8

In the past I used 6-bit quantizations for DeepSeek models, lmstudio-community_DeepSeek-R1-GGUF_DeepSeek-R1-Q6_K to be precise, and I don't remember llama.cpp doing any repacking while loading it. Any chance you'll make a 6-bit quantization for this model available too?
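
For scale, a rough back-of-the-envelope estimate of what a pure 6-bit quant would weigh; the 671B parameter count is an assumption (a full DeepSeek-R1-class model), since the exact model isn't spelled out in this thread:

```python
# Assumed parameter count for a full DeepSeek-R1-class model (not stated in this thread).
params = 671e9

# Q6_K stores 256 weights in a 210-byte block, i.e. ~6.56 bits per weight.
bits_per_weight = 210 * 8 / 256

size_gib = params * bits_per_weight / 8 / 2**30
print(f"Roughly {size_gib:.0f} GiB for a pure Q6_K quant")
```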

Unsloth AI org

> Any chance you'll make a 6-bit quantization for this model available too?

Yes ofc it's just still uploading

Unsloth AI org

@EricTri @VladNC they're all up now!
