---
dataset_info:
  features:
  - name: LPimage
    dtype: image
  - name: image1
    dtype: image
  - name: image2
    dtype: image
  - name: image3
    dtype: image
  - name: image4
    dtype: image
  - name: image5
    dtype: image
  - name: annotator1_ranking
    sequence: int32
    length: 5
  - name: annotator1_best
    dtype: int32
  - name: annotator1_worst
    dtype: int32
  - name: annotator2_ranking
    sequence: int32
    length: 5
  - name: annotator2_best
    dtype: int32
  - name: annotator2_worst
    dtype: int32
  - name: annotator3_ranking
    sequence: int32
    length: 5
  - name: annotator3_best
    dtype: int32
  - name: annotator3_worst
    dtype: int32
  - name: annotator4_ranking
    sequence: int32
    length: 5
  - name: annotator4_best
    dtype: int32
  - name: annotator4_worst
    dtype: int32
  - name: annotator5_ranking
    sequence: int32
    length: 5
  - name: annotator5_best
    dtype: int32
  - name: annotator5_worst
    dtype: int32
  - name: best_annotator
    dtype: string
  - name: average_rank_correlation
    dtype: float32
  splits:
  - name: train
    num_bytes: 4531824679
    num_examples: 900
  download_size: 4429349535
  dataset_size: 4531824679
license: cc-by-nc-sa-4.0
task_categories:
- visual-question-answering
language:
- ja
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# BannerBench: Benchmarking Vision Language Models for Multi-Ad Selection with Human Preferences
## Dataset Summary

BannerBench is designed to evaluate the ability of vision-language models (VLMs) to identify the banner that best matches human preferences from a set of candidates.
## Dataset Structure
The structure of the raw dataset is as follows:
```
{
    "train": Dataset({
        "features": [
            'LPimage', 'image1', 'image2', 'image3', 'image4', 'image5',
            'annotator1_ranking', 'annotator1_best', 'annotator1_worst',
            'annotator2_ranking', 'annotator2_best', 'annotator2_worst',
            'annotator3_ranking', 'annotator3_best', 'annotator3_worst',
            'annotator4_ranking', 'annotator4_best', 'annotator4_worst',
            'annotator5_ranking', 'annotator5_best', 'annotator5_worst',
            'best_annotator', 'average_rank_correlation'
        ],
    })
}
```
### Example

```python
from datasets import load_dataset

dataset = load_dataset("cyberagent/BannerBench")
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['LPimage', 'image1', 'image2', 'image3', 'image4', 'image5', 'annotator1_ranking', 'annotator1_best', 'annotator1_worst', 'annotator2_ranking', 'annotator2_best', 'annotator2_worst', 'annotator3_ranking', 'annotator3_best', 'annotator3_worst', 'annotator4_ranking', 'annotator4_best', 'annotator4_worst', 'annotator5_ranking', 'annotator5_best', 'annotator5_worst', 'best_annotator', 'average_rank_correlation'],
#         num_rows: 900
#     })
# })
```
An example of the dataset is as follows:
```
{
    "LPimage": <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1280x5352 at 0x7F09A24675D0>,
    "image1": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1C9B250>,
    "image2": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1CB52D0>,
    "image3": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1CB5810>,
    "image4": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1CB5E50>,
    "image5": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1CB6490>,
    "annotator1_ranking": [2, 4, 1, 3, 5],
    "annotator1_best": 3,
    "annotator1_worst": 5,
    "annotator2_ranking": [4, 5, 1, 2, 3],
    "annotator2_best": 3,
    "annotator2_worst": 2,
    "annotator3_ranking": [3, 2, 1, 4, 5],
    "annotator3_best": 3,
    "annotator3_worst": 5,
    "annotator4_ranking": [3, 4, 5, 2, 1],
    "annotator4_best": 5,
    "annotator4_worst": 3,
    "annotator5_ranking": [1, 4, 2, 3, 5],
    "annotator5_best": 1,
    "annotator5_worst": 5,
    "best_annotator": "annotator1",
    "average_rank_correlation": 0.6534000039100647
}
```
## Data Fields
- `LPimage`: The landing-page (LP) image associated with `image[1-5]`.
- `image[1-5]`: The five banner images derived from the `LPimage`.
- `annotator[1-5]_ranking`: The ranking of the banner images, from most to least preferred, given by annotators 1 to 5.
- `annotator[1-5]_best`: The banner image most preferred by annotators 1 to 5 in the Best-Choice task.
- `annotator[1-5]_worst`: The banner image least preferred by annotators 1 to 5 in the Best-Choice task.
- `best_annotator`: The annotator whose average rank correlation with the other four annotators is the highest.
- `average_rank_correlation`: The average rank correlation over the top half of annotator pairs, where pairs are selected by their rank correlation.
## Dataset Creation

The BannerBench construction process consists of the following three steps:
- We collected sets of five banners, each derived from a single LP (Banner Sets; BSs).
- We annotated the BSs with human preferences.
- We defined two subtasks over the annotations: Ranking and Best-Choice.
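As a rough illustration (not the paper's official protocol), the Best-Choice subtask could be scored as exact match against a target banner; taking a majority vote over the annotators' best picks as that target is an assumption made here for the sketch.

```python
from collections import Counter

def majority_best(annotator_bests):
    """Banner picked as best by the most annotators (ties broken arbitrarily)."""
    return Counter(annotator_bests).most_common(1)[0][0]

# Annotator best picks from the example record above.
bests = [3, 3, 3, 5, 1]
target = majority_best(bests)     # banner 3
model_pick = 3                    # hypothetical model output
print(int(model_pick == target))  # 1 (hit)
```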
## Considerations for Using the Data

BannerBench is intended solely for evaluation and is not designed for training; the benchmark focuses on assessing the inductive capabilities of VLMs.
## License

The BannerBench dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.
## Citation Information
To cite this work, please use the following format:
```bibtex
@misc{otake2025banner,
  author = {Hiroto Otake and Peinan Zhang and Yusuke Sakai and Masato Mita and Hiroki Ouchi and Taro Watanabe},
  title  = {BannerBench: Benchmarking Vision Language Models for Multi-Ad Selection with Human Preferences},
  year   = {2025}
}
```