new Mistral!!!
It's queued! :D
Thanks a lot for the recommendation. I'm so excited to try it out. The new instruction-tuned Mistral is huge.
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#Mistral-Small-3.2-24B-Instruct-2506-GGUF for quants to appear.
LOL no way they messed this up:
INFO:hf-to-gguf:Loading model: Mistral-Small-3.2-24B-Instruct-2506
INFO:hf-to-gguf:Model architecture: Mistral3ForConditionalGeneration
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
Traceback (most recent call last):
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 6536, in <module>
    main()
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 6515, in main
    model_instance = model_class(dir_model, output_type, fname_out,
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 2031, in __init__
    super().__init__(*args, **kwargs)
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 1179, in __init__
    with open(self.dir_model / "preprocessor_config.json", "r", encoding="utf-8") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'Mistral-Small-3.2-24B-Instruct-2506/preprocessor_config.json'
job finished, status 1
job-done<0 Mistral-Small-3.2-24B-Instruct-2506 noquant 1>
Could you comment on the model site? Maybe they were thinking about the weekend ;)
No, they also didn't upload it for the non-instruction-tuned model. I will try to just use https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503/blob/main/preprocessor_config.json from that older model.
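A minimal sketch of that workaround, assuming the local directory name from the log above and that the 3.1 vision preprocessor settings are still compatible with 3.2:

```python
# Sketch: fetch preprocessor_config.json from the older 3.1 release and
# drop it into the local 3.2 model directory the converter complained about.
# Assumption: the 3.1 preprocessor settings still match the 3.2 vision tower.
import shutil
from huggingface_hub import hf_hub_download

src = hf_hub_download(
    repo_id="mistralai/Mistral-Small-3.1-24B-Instruct-2503",
    filename="preprocessor_config.json",
)
shutil.copy(src, "Mistral-Small-3.2-24B-Instruct-2506/preprocessor_config.json")
```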
I see Devstral is also missing this file.
It gets even worse:
INFO:hf-to-gguf:Loading model: Mistral-Small-3.2-24B-Instruct-2506
INFO:hf-to-gguf:Model architecture: Mistral3ForConditionalGeneration
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
Traceback (most recent call last):
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 6536, in <module>
    main()
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 6515, in main
    model_instance = model_class(dir_model, output_type, fname_out,
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 2035, in __init__
    self.img_break_tok_id = self.get_token_id("[IMG_BREAK]")
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 2042, in get_token_id
    with open(tokenizer_config_file, "r", encoding="utf-8") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'Mistral-Small-3.2-24B-Instruct-2506/tokenizer_config.json'
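Before retrying, a quick check along these lines shows what is still missing locally. The file list is an assumption: just the two files the converter has failed on so far, plus generation_config.json which comes up below.

```python
# Sketch: report which converter-required config files exist in the local
# model directory. File list is assumed from the errors in this thread.
from pathlib import Path

model_dir = Path("Mistral-Small-3.2-24B-Instruct-2506")
for name in ("preprocessor_config.json", "tokenizer_config.json", "generation_config.json"):
    status = "present" if (model_dir / name).exists() else "MISSING"
    print(f"{name}: {status}")
```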
Interesting, so Magistral-Small-2506-abliterated
has both of those files: https://huggingface.co/huihui-ai/Magistral-Small-2506-abliterated/blob/main/tokenizer_config.json and https://huggingface.co/huihui-ai/Magistral-Small-2506-abliterated/blob/main/generation_config.json
I see what they did. Here is a fixed repository we can quant: https://huggingface.co/anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML
Unfortunately they also converted it to MistralForCausalLM, losing all the vision capability, which we want to keep.
Just copying them all from https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503 seems to be the way to go, according to https://huggingface.co/gabriellarson/Mistral-Small-3.2-24B-Instruct-2506-GGUF
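A sketch of that "copy them all" approach, extending the snippet above. The file list is an assumption based on the errors so far and may need extending if the converter trips on more files:

```python
# Sketch: copy all the metadata files 3.2 is missing from the 3.1 repo.
# The list below is an assumption; extend it if conversion still fails.
import shutil
from huggingface_hub import hf_hub_download

MISSING = ["preprocessor_config.json", "tokenizer_config.json", "generation_config.json"]
for name in MISSING:
    src = hf_hub_download(
        repo_id="mistralai/Mistral-Small-3.1-24B-Instruct-2503",
        filename=name,
    )
    shutil.copy(src, f"Mistral-Small-3.2-24B-Instruct-2506/{name}")
```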
The cleanest way would likely be for me to create a llamacppfixed fork of both the instruct and the base model.
I created https://huggingface.co/nicoboss/Mistral-Small-3.2-24B-Instruct-2506-llamacppfixed - let's see if that works.
In a Reddit discussion someone tested https://huggingface.co/gabriellarson/Mistral-Small-3.2-24B-Instruct-2506-GGUF/tree/main and vision works.