---
license: cc
configs:
- config_name: default
  data_files:
  - split: default
    path: data.csv
task_categories:
- text-generation
language:
- en
size_categories:
- 1K<n<10K
tags:
- multimodal
pretty_name: MCiteBench
---
## MCiteBench Dataset
MCiteBench is a benchmark for evaluating the ability of Multimodal Large Language Models (MLLMs) to generate text with citations in multimodal contexts.
- Website: https://caiyuhu.github.io/MCiteBench
- Paper: https://arxiv.org/abs/2503.02589
- Code: https://github.com/caiyuhu/MCiteBench
## Data Download
Please download `MCiteBench_full_dataset.zip`, which contains the `data.jsonl` file and the `visual_resources` folder.
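For programmatic access, the archive can be fetched and unpacked with `huggingface_hub` (a minimal sketch; the repo id `caiyuhu/MCiteBench` and the archive's presence at the repo root are assumptions, adjust to match this repository):

```python
import zipfile

from huggingface_hub import hf_hub_download

# Download the archive from the dataset repo (repo id is an assumption).
zip_path = hf_hub_download(
    repo_id="caiyuhu/MCiteBench",
    filename="MCiteBench_full_dataset.zip",
    repo_type="dataset",
)

# Unpack data.jsonl and the visual_resources folder.
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("MCiteBench_full_dataset")
```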
## Data Statistics
<img src="https://raw.githubusercontent.com/caiyuhu/MCiteBench/master/asset/data_statistics.png" width="50%" />
## Data Format
The data format for `data_example.jsonl` and `data.jsonl` is as follows:
```yaml
question_type: [str] # The type of question, with possible values: "explanation" or "locating"
question: [str] # The text of the question
answer: [str | list | float | int] # The answer to the question; can be a string, list, float, or integer, depending on the question
evidence_keys: [list] # A list of abstract references or identifiers for evidence, such as "section x", "line y", "figure z", or "table k".
# These are not the actual content but pointers or descriptions indicating where the evidence can be found.
# Example: ["section 2.1", "line 45", "Figure 3"]
evidence_contents: [list] # A list of resolved or actual evidence content corresponding to the `evidence_keys`.
# These can include text excerpts, image file paths, or table file paths that provide the actual evidence for the answer.
# Each item in this list corresponds directly to the same-index item in `evidence_keys`.
# Example: ["This is the content of section 2.1.", "/path/to/figure_3.jpg"]
evidence_modal: [str] # The modality type of the evidence, with possible values: ['figure', 'table', 'text', 'mixed'] indicating the source type of the evidence
evidence_count: [int] # The total count of all evidence related to the question
distractor_count: [int] # The total number of distractor items, meaning information blocks that are irrelevant or misleading for the answer
info_count: [int] # The total number of information blocks in the document, including text, tables, images, etc.
text_2_idx: [dict[str, str]] # A dictionary mapping text information to corresponding indices
idx_2_text: [dict[str, str]] # A reverse dictionary mapping indices back to the corresponding text content
image_2_idx: [dict[str, str]] # A dictionary mapping image paths to corresponding indices
idx_2_image: [dict[str, str]] # A reverse dictionary mapping indices back to image paths
table_2_idx: [dict[str, str]] # A dictionary mapping table paths to corresponding indices
idx_2_table: [dict[str, str]] # A reverse dictionary mapping indices back to table paths
meta_data: [dict] # Additional metadata used during the construction of the data
distractor_contents: [list] # Similar to `evidence_contents`, but contains distractors, which are irrelevant or misleading information
question_id: [str] # The ID of the question
pdf_id: [str] # The ID of the associated PDF document
```
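Each line of `data.jsonl` is a standalone JSON object, so the file can be read line by line. A minimal loading sketch (the path assumes the extracted archive layout; field names follow the schema above):

```python
import json

# Load the benchmark: one JSON object per line.
with open("MCiteBench_full_dataset/data.jsonl", encoding="utf-8") as f:
    examples = [json.loads(line) for line in f]

sample = examples[0]
print(sample["question_id"], sample["question_type"], sample["evidence_modal"])

# evidence_keys[i] is a pointer (e.g. "Figure 3"); evidence_contents[i]
# holds the corresponding resolved content (text excerpt or file path).
for key, content in zip(sample["evidence_keys"], sample["evidence_contents"]):
    print(f"{key} -> {content}")
```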
## Citation
If you find **MCiteBench** useful for your research and applications, please cite it using the following BibTeX entry:
```bib
@article{hu2025mcitebench,
title={MCiteBench: A Benchmark for Multimodal Citation Text Generation in MLLMs},
author={Hu, Caiyu and Zhang, Yikai and Zhu, Tinghui and Ye, Yiwei and Xiao, Yanghua},
journal={arXiv preprint arXiv:2503.02589},
year={2025}
}
```