Bug: Llama4 Multimodal (Llama4ForConditionalGeneration) Fails with Optimized Attention (sdpa, eager) and KV Cache for Effective Sequence Lengths > attention_chunk_size (8192)
Model: meta-llama/Llama-4-Scout-17B-16E-Instruct
Transformers Version: 4.53.2
PyTorch Version: 2.7
Accelerate: 1.7.0
Description:
When performing multimodal inference with Llama4ForConditionalGeneration using use_cache=True and either attn_implementation="sdpa" or attn_implementation="eager", a runtime error occurs when the total effective sequence length (text tokens + image feature tokens + past_key_values_length) exceeds the model's config.text_config.attention_chunk_size (8192, which also matches config.text_config.rope_scaling.original_max_position_embeddings).
The issue is reproducible when using AutoProcessor.apply_chat_template with multiple image inputs as demonstrated in documentation examples, provided the resulting effective sequence length crosses this 8192 threshold.
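Minimal reproduction sketch (a hedged illustration, not the exact failing script: the image URLs, prompt text, and generation parameters below are placeholders; any combination of images and text whose effective sequence length exceeds 8192 should hit the same code path):

import torch
from transformers import AutoProcessor, Llama4ForConditionalGeneration

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"

processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa",  # same failure class observed with "eager"
)

# 16 images plus the text prompt push the effective sequence length
# (text tokens + expanded image feature tokens) past attention_chunk_size = 8192.
image_urls = [f"https://example.com/image_{i}.jpg" for i in range(16)]  # placeholder URLs
messages = [
    {
        "role": "user",
        "content": [{"type": "image", "url": url} for url in image_urls]
        + [{"type": "text", "text": "Describe these images in detail."}],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# RuntimeError is raised inside Llama4Model._update_causal_mask during generation.
outputs = model.generate(**inputs, max_new_tokens=256, use_cache=True)
print(processor.batch_decode(outputs, skip_special_tokens=True))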
Key Configuration Parameters (from model's config.json):
"text_config": {
"attention_chunk_size": 8192,
"max_position_embeddings": 10485760,
"rope_scaling": {
"original_max_position_embeddings": 8192,
"factor": 16.0,
"rope_type": "llama3"
}
}
Errors Encountered (with use_cache=True and effective length > 8192):
With attn_implementation="sdpa":
Input causing error (example from apply_chat_template with 16 images):
input_ids shape: [1, text_len_with_placeholders]
attention_mask shape: [1, text_len_with_placeholders]
pixel_values shape: [num_images, 3, H, W] (e.g., [16, 3, 336, 336])
(Note: processor.apply_chat_template loads images and passes pixel_values correctly to model.generate)
Error:
RuntimeError: The size of tensor a (e.g., 39497) must match the size of tensor b (e.g., 40008) at non-singleton dimension 5
Traceback points to: transformers/models/llama4/modeling_llama4.py, line ~739, in _update_causal_mask: chunked_attention_mask = chunked_attention_mask * local_attention_mask[:, None, None, :]
attn_implementation="sdpa" with use_cache=True works correctly and performantly if the total effective input sequence length (text + image features from pixel_values) is kept at or below 8192 tokens. This was observed when max_length was not passed to model.generate(), and the input itself was short enough.
Hypothesis:
The sliding-window (chunked attention) logic used to compute the chunked causal attention mask appears to be defective once the effective sequence length exceeds attention_chunk_size: the two masks combined at line ~739 disagree on the key-dimension length.
Suggested Investigation for Maintainers:
Examine Llama4Model._update_causal_mask for multimodal inputs with use_cache=True.
Focus on the logic paths triggered when past_key_values_length + current_input_length > config.text_config.attention_chunk_size (8192).
For the sdpa path: Analyze how chunked_attention_mask (representing current effective sequence length) and local_attention_mask (derived from initial input padding and attention_chunk_size) are generated and why their key dimensions (e.g., 39497 vs. 40008) become incompatible for the multiplication at line ~739. The derivation of local_attention_mask's effective length seems particularly relevant.
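A standalone illustration of the failing broadcast (toy sizes stand in for 39497 and 40008; the masks in the actual failure evidently carry more dimensions, since the trace reports dimension 5, but the mismatch is on the same trailing key dimension):

import torch

seq_len_a = 7  # stands in for 39497, the key dimension of chunked_attention_mask
seq_len_b = 8  # stands in for 40008, the length of local_attention_mask

chunked_attention_mask = torch.ones(1, 1, seq_len_a, seq_len_a, dtype=torch.bool)
local_attention_mask = torch.ones(1, seq_len_b, dtype=torch.bool)

try:
    # Mirrors modeling_llama4.py line ~739:
    chunked_attention_mask * local_attention_mask[:, None, None, :]
except RuntimeError as err:
    print(err)  # mismatch on the trailing (key) dimension, mirroring 39497 vs. 40008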
Logs:
Exception has occurred: RuntimeError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
Demo run failed: The size of tensor a (39497) must match the size of tensor b (40008) at non-singleton dimension 5
File "/home/electric/Llama4Engine/src/model/server/inference/engine.py", line 268, in outputs = lm.model.generate( File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/transformers/generation/utils.py", line 2597, in generate result = self._sample( File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/transformers/generation/utils.py", line 3557, in _sample outputs = self(**model_inputs, return_dict=True) File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/accelerate/hooks.py", line 175, in new_forward output = module._old_forward(*args, **kwargs) File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/transformers/models/llama4/modeling_llama4.py", line 1652, in forward outputs = self.language_model( File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/transformers/utils/generic.py", line 969, in wrapper output = func(self, *args, **kwargs) File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/transformers/models/llama4/modeling_llama4.py", line 936, in forward outputs = self.model( File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/transformers/utils/generic.py", line 969, in wrapper output = func(self, *args, **kwargs) File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/transformers/models/llama4/modeling_llama4.py", line 578, in forward causal_mask, chunk_causal_mask = self._update_causal_mask( File "/home/electric/.pyenv/versions/Llama4Engine/lib/python3.10/site-packages/transformers/models/llama4/modeling_llama4.py", line 739, in _update_causal_mask chunked_attention_mask = chunked_attention_mask * local_attention_mask[:, None, None, :]RuntimeError: The size of tensor a (39497) must match the size of tensor b (40008) at non-singleton dimension 5The above exception was the direct cause of the following exception: File "/home/electric/Llama4Engine/src/model/server/inference/engine.py", line 279, in raise RuntimeError(f"Demo run failed: {exc}") from exc File "/home/electric/.pyenv/versions/3.10.17/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/electric/.pyenv/versions/3.10.17/lib/python3.10/runpy.py", line 196, in _run_module_as_main (Current frame) 
return _run_code(code, main_globals, None,
RuntimeError: Demo run failed: The size of tensor a (39497) must match the size of tensor b (40008) at non-singleton dimension 5