Feasibility of 32B language model base

#2 by brandonbeiler

This model is super promising, given the vision encoder used and the strength of the recent Qwen 3 models. Very curious how difficult and time-consuming it would be to use Qwen3-32B as the language model for this architecture. Given the performance of this 8B model, I have to imagine a larger-parameter model built with the same architecture and training method would be open-source SOTA.
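For reference, if the architecture follows the common vision encoder + projector + LLM pattern, swapping the language backbone is mostly a plumbing change plus retraining the projector (and ideally the full stack). Below is a minimal sketch assuming that LLaVA-style layout; the SigLIP checkpoint and two-layer MLP projector are illustrative placeholders, not this model's actual components:

```python
# Hedged sketch: wiring Qwen3-32B in as the language backbone of a generic
# vision-encoder + MLP-projector + LLM stack. The SigLIP checkpoint and the
# projector shape are assumptions for illustration only.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, SiglipVisionModel

# Example vision encoder (placeholder, not necessarily the one this model uses).
vision = SiglipVisionModel.from_pretrained("google/siglip-so400m-patch14-384")
llm = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-32B", torch_dtype=torch.bfloat16
)

# Projector: map the encoder's hidden size (1152 for this SigLIP variant)
# into the LLM embedding width so image patch features can be spliced into
# the token sequence.
projector = nn.Sequential(
    nn.Linear(vision.config.hidden_size, llm.config.hidden_size),
    nn.GELU(),
    nn.Linear(llm.config.hidden_size, llm.config.hidden_size),
).to(torch.bfloat16)

def embed_multimodal(pixel_values: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    """Prepend projected image tokens to the text embeddings."""
    patch_feats = vision(pixel_values=pixel_values).last_hidden_state  # [B, P, 1152]
    image_embeds = projector(patch_feats.to(torch.bfloat16))           # [B, P, d_model]
    text_embeds = llm.get_input_embeddings()(input_ids)                # [B, T, d_model]
    return torch.cat([image_embeds, text_embeds], dim=1)
```

The resulting `inputs_embeds` can be passed straight to `llm(inputs_embeds=...)`, so the main cost is the projector/alignment retraining and the compute for the 32B backbone rather than any architectural rework.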
