Fails to load

#1
by snombler - opened

Hey! Thanks for the rapid Instruct finetune.

This gguf, sadly, doesn't load for me. Both the latest koboldcpp and a llama.cpp build from yesterday silently crash on load.

I didn't try this one, but yesterday I successfully loaded the non-instruct version of Mixtral as a GGUF in koboldcpp without issues.

Yeah, the base model loads fine.

Yup, I tried it as well. It crashes instantly when loaded.

Do we need to re-combine these somehow?

Yeah, not sure why it was split into six quintillion pieces; I think that might be the issue. Pretty sure recent versions of llama.cpp can handle this as long as you load the first file, but I've never seen it split this much, so maybe that's an issue. There are instructions on how to recombine them, though I haven't tried it myself.
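For reference, loading the first shard directly with llama.cpp's CLI looks roughly like this (the filename below is illustrative; recent builds that understand gguf-split output should pick up the remaining shards automatically as long as they sit in the same directory):

./main -m ./Karasu-Mixtral-8x22B-v0.1-Q3_K_M-00001-of-00005.gguf -p "Hello"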

You can combine them like this, but I haven't tried it, because there are too many pieces and I am lazy :) On Windows:

COPY /b Mixtral-8x22B-v0.1.Q2_K-00001-of-00005.gguf + Mixtral-8x22B-v0.1.Q2_K-00002-of-00005.gguf + Mixtral-8x22B-v0.1.Q2_K-00003-of-00005.gguf + Mixtral-8x22B-v0.1.Q2_K-00004-of-00005.gguf + Mixtral-8x22B-v0.1.Q2_K-00005-of-00005.gguf Mixtral-8x22B-v0.1.Q2_K.gguf
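One caveat (assuming these shards were made with llama.cpp's gguf-split, which the -00001-of-00005 naming suggests): each piece is a standalone GGUF file with its own header, so a raw byte concatenation like COPY /b generally won't produce a loadable file. The same gguf-split tool can merge them back instead, something like:

./gguf-split --merge Mixtral-8x22B-v0.1.Q2_K-00001-of-00005.gguf Mixtral-8x22B-v0.1.Q2_K.gguf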

Lightblue KK (org)

Hey, sorry, it was like 1am when I did the conversion > quantization > splitting. Will look at this again today!

No problem :3

Lightblue KK (org)

Can you be my free QA tester here? It might be fixed now.

Previously I ran

./convert.py --outfile Karasu-Mixtral-8x22B-v0.1-q3_k_m --outtype f16 /workspace/llm_training/axolotl/mixtral_8x22B_training/merged_model_multiling

./quantize /workspace/Karasu-Mixtral-8x22B-v0.1.gguf /workspace/Karasu-Mixtral-8x22B-v0.1_q3_k_m.gguf Q3_K_M

./gguf-split --split --split-max-size 5G /workspace/Karasu-Mixtral-8x22B-v0.1_q3_k_m.gguf /workspace/somewhere-sensible

This time, I ran:

./convert-hf-to-gguf.py --outfile  /workspace/Karasu-Mixtral-8x22B-v0.1.gguf --outtype f16 /workspace/llm_training/axolotl/mixtral_8x22B_training/merged_model_multiling

./quantize /workspace/Karasu-Mixtral-8x22B-v0.1.gguf /workspace/Karasu-Mixtral-8x22B-v0.1-Q3_K_M.gguf Q3_K_M

./gguf-split  --split --split-max-tensors 128  /workspace/Karasu-Mixtral-8x22B-v0.1-Q3_K_M.gguf /workspace/split_gguf_q3km/Karasu-Mixtral-8x22B-v0.1-Q3_K_M

I think the crucial difference was using convert-hf-to-gguf.py rather than convert.py. convert-hf-to-gguf.py took a lot longer to load, and in my monkey brain, longer loading means it's doing something more meaningful. Everything else is pretty much the same as before, but the splitting turned out nicely this time (5GB per file). I'll maybe investigate the difference between convert.py and convert-hf-to-gguf.py, as I presumed that convert.py would basically be a catch-all for all different types of files.
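If it helps with that investigation, a low-effort way to see what actually differs is to dump each converter's output metadata with the gguf-dump script that ships in llama.cpp's gguf-py (script name and location may vary by version; the path below is just the f16 file from the commands above):

python ./gguf-py/scripts/gguf-dump.py /workspace/Karasu-Mixtral-8x22B-v0.1.gguf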

Seems to work with koboldcpp, but I don't know for sure if it's loading all 5 parts.

Lightblue KK (org)

You might also want to take a look at an experimental repo I made that splits into smaller pieces:
https://huggingface.co/lightblue/Karasu-Mixtral-8x22B-v0.1-gguf-test
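If anyone wants to pull all the shards in one go, something along these lines should work with a reasonably recent huggingface_hub (the --local-dir is just an example):

huggingface-cli download lightblue/Karasu-Mixtral-8x22B-v0.1-gguf-test --include "*.gguf" --local-dir ./karasu-gguf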

I tested the Q3_K_M 5-part split in KoboldCpp. Works without any issues, loads everything :)

Yep, the five-part split is working for me. Gonna close this up. Thanks again!

snombler changed discussion status to closed
Lightblue KK (org)

Woohoo! Enjoy!

Works for me too
