yili7eli committed on
Commit ace65d8 · verified · 1 Parent(s): 4a32c95

Update README.md

Files changed (1)
  1. README.md +40 -3
README.md CHANGED
@@ -1,3 +1,40 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ task_categories:
+ - token-classification
+ language:
+ - en
+ tags:
+ - TAM
+ - CAM
+ - MLLM
+ - VLLM
+ - Explainability
+ pretty_name: TAM
+ size_categories:
+ - 1B<n<10B
+ ---
+
+ # Token Activation Map to Visually Explain Multimodal LLMs
+ We introduce the Token Activation Map (TAM), a method that suppresses contextual interference among tokens in Multimodal LLMs. TAM produces clear, reliable visualizations that reveal the visual evidence behind each token the model generates.
+
+ # Evaluation Datasets
+ This repository provides the datasets used to evaluate TAM, formatted for easy use.
+
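+ As a minimal sketch, the data can be fetched with the `huggingface_hub` client. Note that the repo id `yili7eli/TAM` below is an assumption inferred from this page; verify it on the Hub before running.
+
+ ```python
+ # Minimal sketch: download the TAM evaluation data locally.
+ # ASSUMPTION: the repo id "yili7eli/TAM" is inferred from this page, not confirmed.
+ from huggingface_hub import snapshot_download
+
+ local_dir = snapshot_download(
+     repo_id="yili7eli/TAM",  # assumed dataset repo id
+     repo_type="dataset",
+ )
+ print(f"Dataset files downloaded to: {local_dir}")
+ ```
+
+ The call returns the local path of the downloaded snapshot, from which the formatted evaluation files can be read directly.
+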
+ # Paper and Code
+ [![arXiv](https://img.shields.io/badge/arXiv-2506.23270-brown?logo=arxiv&style=flat-square)](https://arxiv.org/abs/2506.23270)
+
+ [🐙 GitHub Page](https://github.com/xmed-lab/TAM)
+
+ ## Citation
+ ```bibtex
+ @misc{li2025tokenactivationmapvisually,
+       title={Token Activation Map to Visually Explain Multimodal LLMs},
+       author={Yi Li and Hualiang Wang and Xinpeng Ding and Haonan Wang and Xiaomeng Li},
+       year={2025},
+       eprint={2506.23270},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2506.23270},
+ }
+ ```