---
license: mit
task_categories:
- token-classification
language:
- en
tags:
- TAM
- CAM
- MLLM
- VLLM
- Explainability
pretty_name: TAM
size_categories:
- 1B<n<10B
---
# Token Activation Map to Visually Explain Multimodal LLMs
We introduce the Token Activation Map (TAM), a method that reduces contextual interference in Multimodal LLMs. TAM produces clear and reliable visualizations that reveal the visual evidence behind each token the model generates.
## Evaluation Datasets
This is a dataset repository for evaluating TAM. The included datasets are formatted for easy usage.
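
As a rough sketch of how a formatted evaluation split could be loaded with the Hugging Face `datasets` library; the repository id and split name below are placeholders, not paths confirmed by this card:

```python
from datasets import load_dataset

# Hypothetical repository id and split name -- replace with the actual
# Hugging Face path of this dataset repo before running.
ds = load_dataset("your-org/TAM-eval-datasets", split="test")

# Inspect one formatted sample (field names depend on the source dataset).
print(ds[0])
```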
## Paper and Code

Paper: [Token Activation Map to Visually Explain Multimodal LLMs](https://arxiv.org/abs/2506.23270)
## Citation
```bibtex
@misc{li2025tokenactivationmapvisually,
      title={Token Activation Map to Visually Explain Multimodal LLMs},
      author={Yi Li and Hualiang Wang and Xinpeng Ding and Haonan Wang and Xiaomeng Li},
      year={2025},
      eprint={2506.23270},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.23270},
}
```