Terms of Use: The dataset contains images that are protected by copyright law. It must therefore not be redistributed to the general public. Access is granted only to researchers, educators, and students in the field of automated fact-checking, and only for non-commercial use.
ClaimReview2024+ Benchmark
This is the ClaimReview2024+ (CR+) benchmark, a dataset for evaluating multimodal automated fact-checking systems. The task is to classify each claim as supported, refuted, misleading, or not enough information. CR+ consists of 300 real-world claims sourced from professional fact-checking articles via the ClaimReview markup. It was specifically constructed to avoid data leakage: claims published before GPT-4o's knowledge cutoff in October 2023 may already be known to GPT-4o, so CR+ only contains claims from fact-checking articles released on or after November 1, 2023. Of the 300 instances, 140 include an image; the rest are text-only.
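A minimal loading sketch with the Hugging Face datasets library is shown below. The repository ID, split name, and field names (claim, image, label) are assumptions for illustration only; check this dataset's files for the actual schema.

from datasets import load_dataset

# Placeholder repository ID -- replace with this dataset's actual repo ID.
REPO_ID = "ORG/ClaimReview2024plus"

# Split name is assumed; the card does not specify the split layout.
cr_plus = load_dataset(REPO_ID, split="test")

VERDICTS = {"supported", "refuted", "misleading", "not enough information"}

for example in cr_plus:
    claim_text = example["claim"]        # assumed field: the textual claim
    claim_image = example.get("image")   # assumed field: None for the 160 text-only claims
    verdict = example["label"]           # assumed field: one of the four verdict labels
    assert verdict in VERDICTS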
CR+ was constructed alongside DEFAME, the current state-of-the-art multimodal fact-checking system and the first to handle both multimodal claims and multimodal evidence. DEFAME achieves an accuracy of 69.7% on CR+.
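Accuracy here is simply the fraction of claims whose predicted verdict matches the gold label. The sketch below reuses the assumed field names from the loading example; predict_verdict is a hypothetical stand-in for a system such as DEFAME.

# `predict_verdict` is a hypothetical callable mapping (claim_text, claim_image) to a verdict string.
def accuracy(dataset, predict_verdict):
    correct = sum(
        predict_verdict(ex["claim"], ex.get("image")) == ex["label"]  # assumed fields
        for ex in dataset
    )
    return correct / len(dataset)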
For more details on CR+, check out the ICML paper.
Examples

Cite this Work
Please use the following BibTeX entry to cite this work:
@inproceedings{braun2024defame,
  title     = {{DEFAME: Dynamic Evidence-based FAct-checking with Multimodal Experts}},
  author    = {Tobias Braun and Mark Rothermel and Marcus Rohrbach and Anna Rohrbach},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  url       = {https://arxiv.org/abs/2412.10510},
}