
BannerBench: Benchmarking Vision Language Models for Multi-Ad Selection with Human Preferences

Dataset Summary

BannerBench is designed to evaluate the ability of vision-language models (VLMs) to identify the banner that best matches human preferences from a set of candidates.

Dataset Structure

The structure of the raw dataset is as follows:

{
    "train": Dataset({
        "features": [
          'LPimage', 'image1', 'image2', 'image3', 'image4', 'image5', 
          'annotator1_ranking', 'annotator1_best', 'annotator1_worst', 
          'annotator2_ranking', 'annotator2_best', 'annotator2_worst', 
          'annotator3_ranking', 'annotator3_best', 'annotator3_worst', 
          'annotator4_ranking', 'annotator4_best', 'annotator4_worst', 
          'annotator5_ranking', 'annotator5_best', 'annotator5_worst', 
          'best_annotator', 'average_rank_correlation'
        ],
    })
}

Example

from datasets import load_dataset

dataset = load_dataset("cyberagent/BannerBench")

print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['LPimage', 'image1', 'image2', 'image3', 'image4', 'image5', 'annotator1_ranking', 'annotator1_best', 'annotator1_worst', 'annotator2_ranking', 'annotator2_best', 'annotator2_worst', 'annotator3_ranking', 'annotator3_best', 'annotator3_worst', 'annotator4_ranking', 'annotator4_best', 'annotator4_worst', 'annotator5_ranking', 'annotator5_best', 'annotator5_worst', 'best_annotator', 'average_rank_correlation'],
#         num_rows: 900
#     })
# })

An example of the dataset is as follows:

{
  "LPimage": <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1280x5352 at 0x7F09A24675D0>,
  "image1": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1C9B250>,
  "image2": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1CB52D0>,
  "image3": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1CB5810>,
  "image4": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1CB5E50>, 
  "image5": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1CB6490>, 
  "annotator1_ranking": [2, 4, 1, 3, 5], 
  "annotator1_best": 3, 
  "annotator1_worst": 5, 
  "annotator2_ranking": [4, 5, 1, 2, 3], 
  "annotator2_best": 3, 
  "annotator2_worst": 2, 
  "annotator3_ranking": [3, 2, 1, 4, 5], 
  "annotator3_best": 3, 
  "annotator3_worst": 5, 
  "annotator4_ranking": [3, 4, 5, 2, 1], 
  "annotator4_best": 5, 
  "annotator4_worst": 3, 
  "annotator5_ranking": [1, 4, 2, 3, 5], 
  "annotator5_best": 1, 
  "annotator5_worst": 5, 
  "best_annotator": "annotator1", 
  "average_rank_correlation": 0.6534000039100647
}
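Each ranking field is a tie-free permutation of the five banner positions, so agreement between two annotators can be measured with Spearman's rank correlation. A minimal pure-Python sketch (not the dataset's official evaluation code), applied to annotator1 and annotator2 from the example record above:

```python
def spearman_rho(r1, r2):
    """Spearman rank correlation for two tie-free rankings of the same items."""
    n = len(r1)
    d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Agreement between annotator1 and annotator2 in the example record above
print(spearman_rho([2, 4, 1, 3, 5], [4, 5, 1, 2, 3]))  # 0.5
```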

Data Fields

  • LPimage: The landing-page (LP) image from which image[1-5] are derived.
  • image[1-5]: The five banner images derived from the LPimage.
  • annotator[1-5]_ranking: The banner images ranked from most to least preferred by annotators 1 to 5.
  • annotator[1-5]_best: The banner image most preferred by annotators 1 to 5 in the Best-Choice task.
  • annotator[1-5]_worst: The banner image least preferred by annotators 1 to 5 in the Best-Choice task.
  • best_annotator: The annotator whose average rank correlation with the other four annotators is the highest.
  • average_rank_correlation: The average rank correlation over the top half of all annotator pairs, where pairs are selected by their rank correlation.
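When a single gold reference per banner set is needed, one natural choice is to use the labels of the `best_annotator`, since that annotator agrees most with the others. A minimal sketch (the helper name `gold_labels` and the trimmed sample record are illustrative, not part of the dataset):

```python
def gold_labels(example: dict) -> dict:
    """Use the most reliable annotator's labels as gold references."""
    a = example["best_annotator"]  # e.g. "annotator1"
    return {
        "ranking": example[f"{a}_ranking"],
        "best": example[f"{a}_best"],
        "worst": example[f"{a}_worst"],
    }

# Minimal record mirroring the example above (image fields omitted)
record = {
    "annotator1_ranking": [2, 4, 1, 3, 5],
    "annotator1_best": 3,
    "annotator1_worst": 5,
    "best_annotator": "annotator1",
}
print(gold_labels(record))  # {'ranking': [2, 4, 1, 3, 5], 'best': 3, 'worst': 5}
```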

Dataset Creation

The BannerBench construction process consists of the following three steps:

  1. we collected sets of five banners derived from a single landing page (Banner Sets; BSs),
  2. we annotated human preferences over the BSs,
  3. we defined two subtasks: Ranking and Best-Choice.
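The Best-Choice subtask can be scored with plain accuracy: the fraction of banner sets where the model picks the gold best banner. A minimal sketch (the function name and the sample index lists are illustrative):

```python
def best_choice_accuracy(predictions, references):
    """Fraction of banner sets where the predicted best banner matches the gold one."""
    assert len(predictions) == len(references)
    return sum(p == g for p, g in zip(predictions, references)) / len(references)

# Predicted vs. gold best-banner indices for four hypothetical banner sets
print(best_choice_accuracy([3, 1, 5, 2], [3, 2, 5, 2]))  # 0.75
```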

Considerations for Using the Data

BannerBench is intended solely for evaluation and is not designed for training; the benchmark focuses on assessing the inductive capabilities of VLMs.

License

The BannerBench dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.

Citation Information

To cite this work, please use the following format:

@misc{otake2025banner,
  author = {Hiroto Otake and Peinan Zhang and Yusuke Sakai and Masato Mita and Hiroki Ouchi and Taro Watanabe},
  title = {BannerBench: Benchmarking Vision Language Models for Multi-Ad Selection with Human Preferences},
  year = {2025}
}