The following generation flags are not valid and may be ignored
I could run the script under "Running the model on a single GPU" on my Mac locally using the CPU with some modifications, but got the following message:
The following generation flags are not valid and may be ignored: ['top_p', 'top_k']. Set TRANSFORMERS_VERBOSITY=info for more details.
Why is there such a message?
Hi @myyim ,
Welcome to the Gemma family of open source models. The message is an informative one, not an error that stops your code. It is letting you know that certain common generation parameters are not applicable in this specific context. This happens because the `generate` function from transformers is used by the Gemma models through a model-specific implementation, and specific model implementations can override or ignore certain parameters.
The following script enables verbose logging so you can see where the message comes from while executing the code:
```python
import transformers
transformers.logging.set_verbosity_info()

from transformers import AutoProcessor, Gemma3nForConditionalGeneration
from PIL import Image
import requests
import torch

model_id = "google/gemma-3n-e2b"  # or any other model id
model = Gemma3nForConditionalGeneration.from_pretrained(
    model_id, device_map="cuda", torch_dtype=torch.bfloat16
).eval()
processor = AutoProcessor.from_pretrained(model_id)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = " in this image, there is"

model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=10)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
Output: `one picture of flowers which shows that the flower is`
In short, the message above is an info/warning-level log entry, not an error; it will not cause any failure during execution.
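Conversely, if you would rather hide these informational messages, you can raise the threshold of the transformers logger. Here is a minimal sketch using Python's standard `logging` module (the library also exposes `transformers.logging.set_verbosity_error()` for the same effect):

```python
import logging

# Raise the threshold of the "transformers" logger so that
# info/warning-level generation messages are no longer printed.
logging.getLogger("transformers").setLevel(logging.ERROR)
```

Place this before the model/generation calls so the logger level is already set when the warning would otherwise be emitted.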
Thanks.