confusion with model import

#64
by Charushila - opened

Please review whether this model is an image-text-to-text model or not, because it is posted under that category.

Charushila changed discussion status to closed
Charushila changed discussion title from Not an Image-text-to-text model to confusion with model import

It doesn't support the AutoModelForCausalLM import from transformers; we need to specify the Gemma3ForConditionalGeneration import explicitly.

Google org

Hi,

Yes, the Gemma 3 model, specifically the 4B, 12B, and 27B sizes, is a multimodal model. It is designed to take both image and text as input and generate text as output. This is a key feature of the Gemma 3 series, which sets it apart from earlier, text-only versions of Gemma.

The reason you're getting an error with AutoModelForCausalLM is that this class is designed for standard, text-only, autoregressive models. The Gemma 3 model, with its multimodal architecture, requires a more specific class to load all of its components correctly.

The dedicated class for this model is Gemma3ForConditionalGeneration. Using this explicit class ensures that the model's architecture, including its vision encoder, is loaded properly and functions as intended. The "Auto" classes are designed for convenience, but they can't always handle models with novel, non-standard architectures.
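
For reference, here is a minimal sketch of loading and running the model with the dedicated class. It assumes a transformers version with Gemma 3 support; the google/gemma-3-4b-it checkpoint and the image URL are placeholders for illustration.

```python
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "google/gemma-3-4b-it"  # example checkpoint; use the size you need

# Gemma3ForConditionalGeneration loads the full multimodal architecture,
# including the vision encoder, which AutoModelForCausalLM cannot do.
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Image + text input via the chat template, text output.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/image.jpg"},  # placeholder URL
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device, dtype=torch.bfloat16)

outputs = model.generate(**inputs, max_new_tokens=100)
# Decode only the newly generated tokens.
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

The same class also handles text-only prompts; only the chat-template content changes.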

Kindly refer to this link for more information. Thank you.
