Speculative Decoding for Mistral Large

#3
by ernestr - opened

Hey folks.

I recognize this isn't the official Mistral repo but figured fellow enthusiasts of bartowski's quants might have some ideas.

I'm searching for a suitable small GGUF-quantized model to use as the draft for speculative decoding with Mistral Large 2411 in llama.cpp. I've tried Mistral 7B Instruct v0.2 and v0.3 as well as Ministral, but their tokenizers differ from Mistral Large's, so the server refuses to pair them:

```
common_speculative_are_compatible: draft vocab vocab must match target vocab to use speculation but token 10 content differs - target '[IMG]', draft '[control_8]'
srv    load_model: the draft model '/home/x0xxin/GGUF/Mistral-7B-Instruct-v0.3.Q4_K_M.gguf' is not compatible with the target model '/home/x0xxin/GGUF/Mistral-Large-Instruct-2407-Q4_K_M.gguf'
```
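For anyone who wants to check a candidate draft model before firing up the server, something like this should show where the vocabularies diverge. Untested sketch: it uses the `gguf` Python package that ships with llama.cpp, the filenames are placeholders, and the `parts`/`data` access pattern is my best reading of the `GGUFReader` API, so double-check against your gguf version.

```python
# Compare the token lists of a target and draft GGUF (pip install gguf).
from gguf import GGUFReader

def vocab(path: str) -> list[str]:
    field = GGUFReader(path).fields["tokenizer.ggml.tokens"]
    # For string arrays, field.data holds the indices of the parts that
    # contain the actual token bytes (lengths are interleaved in parts).
    return [bytes(field.parts[i]).decode("utf-8", errors="replace")
            for i in field.data]

target = vocab("Mistral-Large-Instruct-2407-Q4_K_M.gguf")  # placeholder path
draft = vocab("Mistral-7B-Instruct-v0.3.Q4_K_M.gguf")      # placeholder path

print(f"vocab sizes: target {len(target)}, draft {len(draft)}")
for i, (t, d) in enumerate(zip(target, draft)):
    if t != d:
        print(f"first mismatch at token {i}: target {t!r}, draft {d!r}")
        break
else:
    print("all overlapping token ids match")
```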

I really like the Mistral 123B models, and when I ran them with ExLlamaV2 I used Mistral 7B as the draft; that worked well. With llama.cpp I can't get speculative decoding working because it (correctly) rejects the draft when the vocabularies don't match.
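For reference, the setup I'm attempting looks roughly like this. Flag names are from recent llama-server builds (`-md`/`--model-draft` selects the draft model; the draft-length flags may vary by version), and the paths are the ones from my log above:

```
./llama-server \
    -m  /home/x0xxin/GGUF/Mistral-Large-Instruct-2407-Q4_K_M.gguf \
    -md /home/x0xxin/GGUF/Mistral-7B-Instruct-v0.3.Q4_K_M.gguf \
    -ngl 99 -ngld 99 \
    --draft-max 16 --draft-min 1
```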
