Will 120B be also treated with this?
#14 opened by Canam
I saw that you did large models like Qwen3 235B and DeepSeek R1. Wouldn't it be logical to apply this to 120B as well?
Given how consistently larger models are being released, there is already a long list of pending models this could be applied to. I'm not sure 120B is needed at this point: most users have limited GPU memory, so the 120B model isn't currently a priority. That said, I'm tracking community feedback, and if there's significant demand, we may support 120B.
Best,
Jinx Team
I would love the 120B to have this as well, pleaseee :D