Trained context size and rope scaling inconsistency

#4
by lukasstraub2 - opened

Hello,
Looking at the GGUF quants, I see that for both base models, DeepSeek-R1-0528-Qwen3-8B and Qwen3, the trained context length qwen3.context_length is 131072 and qwen3.rope.scaling.original_context_length is 32768.

However, for this model qwen3.context_length is 40960 and the rope scaling parameters look off too. Is this intended? Would rope scaling work with this model?
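
For reference, this is roughly how I pulled those keys out (just a sketch using the gguf-py package that ships with llama.cpp; the file name is made up and the field-access details may differ between gguf versions):

```python
# Sketch: print the context-length / rope-scaling metadata of a GGUF file.
# Requires the gguf package from llama.cpp (pip install gguf).
from gguf import GGUFReader, GGUFValueType

reader = GGUFReader("DeepSeek-R1-0528-Qwen3-8B-Q4_K_M.gguf")  # hypothetical file name

keys = (
    "qwen3.context_length",
    "qwen3.rope.scaling.type",
    "qwen3.rope.scaling.factor",
    "qwen3.rope.scaling.original_context_length",
)

for key in keys:
    field = reader.fields.get(key)
    if field is None:
        print(f"{key}: <not present>")
    elif field.types and field.types[0] == GGUFValueType.STRING:
        # String values are stored as raw bytes in the last part of the field.
        print(f"{key}: {bytes(field.parts[-1]).decode('utf-8')}")
    else:
        # Scalar values end up as a one-element array in the last part.
        print(f"{key}: {field.parts[-1][0]}")
```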

Best regards

This doesn't seem to be intended behaviour; it should be the same as in the parent models. Maybe it's an issue with mergekit.

I found out more. First, the official Qwen3 GGUFs do in fact have a qwen3.context_length of 40960, and it is explained in the Model Card:

The default max_position_embeddings in config.json is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.

Moreover, the official GGUFs also don't have the rope scaling parameters; you are supposed to supply them manually.
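
Supplying them manually would look roughly like this (only a sketch via llama-cpp-python with a hypothetical file name; the values follow the factor-4 / 32768 YaRN recipe from the Qwen3 model card, and the exact keyword arguments may differ between llama-cpp-python versions):

```python
from llama_cpp import Llama

# Sketch: enable YaRN manually for a GGUF that carries no rope scaling metadata.
llm = Llama(
    model_path="DeepSeek-R1-0528-Qwen3-8B-Q4_K_M.gguf",  # hypothetical file name
    n_ctx=131072,          # the extended context you actually want
    rope_scaling_type=2,   # 2 = YaRN (LLAMA_ROPE_SCALING_TYPE_YARN in recent versions)
    rope_freq_scale=0.25,  # 1 / scaling factor, i.e. a factor of 4
    yarn_orig_ctx=32768,   # trained context length before scaling
)

print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```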

So I think this is actually as intended.
