---
dataset_info:
  features:
  - name: LPimage
    dtype: image
  - name: image1
    dtype: image
  - name: image2
    dtype: image
  - name: image3
    dtype: image
  - name: image4
    dtype: image
  - name: image5
    dtype: image
  - name: annotator1_ranking
    sequence: int32
    length: 5
  - name: annotator1_best
    dtype: int32
  - name: annotator1_worst
    dtype: int32
  - name: annotator2_ranking
    sequence: int32
    length: 5
  - name: annotator2_best
    dtype: int32
  - name: annotator2_worst
    dtype: int32
  - name: annotator3_ranking
    sequence: int32
    length: 5
  - name: annotator3_best
    dtype: int32
  - name: annotator3_worst
    dtype: int32
  - name: annotator4_ranking
    sequence: int32
    length: 5
  - name: annotator4_best
    dtype: int32
  - name: annotator4_worst
    dtype: int32
  - name: annotator5_ranking
    sequence: int32
    length: 5
  - name: annotator5_best
    dtype: int32
  - name: annotator5_worst
    dtype: int32
  - name: best_annotator
    dtype: string
  - name: average_rank_correlation
    dtype: float32
  splits:
  - name: train
    num_bytes: 4531824679.0
    num_examples: 900
  download_size: 4429349535
  dataset_size: 4531824679.0
license: cc-by-nc-sa-4.0
task_categories:
- visual-question-answering
language:
- ja
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# BannerBench: Benchmarking Vision Language Models for Multi-Ad Selection with Human Preferences
### Dataset Summary
BannerBench is designed to evaluate the ability of vision language models (VLMs) to identify the banner that best matches human preferences from a set of candidates.
## Dataset Structure
The structure of the raw dataset is as follows:
```python
{
    "train": Dataset({
        "features": [
            'LPimage', 'image1', 'image2', 'image3', 'image4', 'image5',
            'annotator1_ranking', 'annotator1_best', 'annotator1_worst',
            'annotator2_ranking', 'annotator2_best', 'annotator2_worst',
            'annotator3_ranking', 'annotator3_best', 'annotator3_worst',
            'annotator4_ranking', 'annotator4_best', 'annotator4_worst',
            'annotator5_ranking', 'annotator5_best', 'annotator5_worst',
            'best_annotator', 'average_rank_correlation'
        ],
    })
}
```
### Example
```python
from datasets import load_dataset
dataset = load_dataset("cyberagent/BannerBench")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['LPimage', 'image1', 'image2', 'image3', 'image4', 'image5', 'annotator1_ranking', 'annotator1_best', 'annotator1_worst', 'annotator2_ranking', 'annotator2_best', 'annotator2_worst', 'annotator3_ranking', 'annotator3_best', 'annotator3_worst', 'annotator4_ranking', 'annotator4_best', 'annotator4_worst', 'annotator5_ranking', 'annotator5_best', 'annotator5_worst', 'best_annotator', 'average_rank_correlation'],
# num_rows: 900
# })
# })
```
An example of the dataset is as follows:
```python
{
"LPimage": <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1280x5352 at 0x7F09A24675D0>,
"image1": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1C9B250>,
"image2": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1CB52D0>,
"image3": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1CB5810>,
"image4": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1CB5E50>,
"image5": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1080x1080 at 0x7F09A1CB6490>,
"annotator1_ranking": [2, 4, 1, 3, 5],
"annotator1_best": 3,
"annotator1_worst": 5,
"annotator2_ranking": [4, 5, 1, 2, 3],
"annotator2_best": 3,
"annotator2_worst": 2,
"annotator3_ranking": [3, 2, 1, 4, 5],
"annotator3_best": 3,
"annotator3_worst": 5,
"annotator4_ranking": [3, 4, 5, 2, 1],
"annotator4_best": 5,
"annotator4_worst": 3,
"annotator5_ranking": [1, 4, 2, 3, 5],
"annotator5_best": 1,
"annotator5_worst": 5,
"best_annotator": "annotator1",
"average_rank_correlation": 0.6534000039100647
}
```
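In the example record above, `annotatorN_ranking[i]` holds the rank that annotator N assigned to `image{i+1}` (1 = most preferred), so the `_best` and `_worst` fields can be recovered from the ranking. A minimal sketch (the helper functions are illustrative, not part of the dataset):

```python
def best_image(ranking):
    """Return the 1-based index of the image ranked 1 (most preferred)."""
    return ranking.index(1) + 1

def worst_image(ranking):
    """Return the 1-based index of the image with the largest rank."""
    return ranking.index(max(ranking)) + 1

# annotator1 from the example record above:
ranking = [2, 4, 1, 3, 5]
print(best_image(ranking))   # 3, matching annotator1_best
print(worst_image(ranking))  # 5, matching annotator1_worst
```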
### Data Fields
- LPimage: The landing-page (LP) image from which image[1-5] are derived.
- image[1-5]: The banner images derived from the LPimage.
- annotator[1-5]_ranking: The ranks assigned to the five banner images by annotators 1 to 5, in order of preference.
- annotator[1-5]_best: The banner image most preferred by annotators 1 to 5 in the Best-Choice task.
- annotator[1-5]_worst: The banner image least preferred by annotators 1 to 5 in the Best-Choice task.
- best_annotator: The annotator whose average rank correlation with the other four annotators is the highest.
- average_rank_correlation: The average rank correlation over the top half of all possible annotator pairs, where pairs are selected by their rank correlation.
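To illustrate how a pairwise rank correlation and the top-half average could be computed, here is a sketch using Spearman's formula for rankings without ties. The exact correlation measure and tie handling used to produce the stored `average_rank_correlation` values are assumptions here, so the result need not reproduce the field exactly:

```python
from itertools import combinations

def spearman(r1, r2):
    """Spearman rank correlation for two same-length rankings without ties."""
    n = len(r1)
    d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# The five rankings from the example record above:
rankings = [
    [2, 4, 1, 3, 5],
    [4, 5, 1, 2, 3],
    [3, 2, 1, 4, 5],
    [3, 4, 5, 2, 1],
    [1, 4, 2, 3, 5],
]

# Correlation for every annotator pair (10 pairs for 5 annotators):
pair_corrs = [spearman(a, b) for a, b in combinations(rankings, 2)]

# Average over the top half of pairs, per the field description:
top_half = sorted(pair_corrs, reverse=True)[: len(pair_corrs) // 2]
avg = sum(top_half) / len(top_half)
```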
## Dataset Creation
The BannerBench construction process consists of the following three steps:
1. We collected sets of five banners derived from a single LP (Banner Sets; BSs).
2. We annotated the BSs with human preferences.
3. We proposed two subtasks: Ranking and Best-Choice.
## Considerations for Using the Data
Since BannerBench is intended solely for evaluation purposes, it is not designed for training use; the benchmark focuses on assessing the inductive capabilities of VLMs.
## License
The BannerBench dataset is released under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license](./LICENSE).
### Citation Information
To cite this work, please use the following format:
```
@misc{otake2025banner,
author = {Hiroto Otake and Peinan Zhang and Yusuke Sakai and Masato Mita and Hiroki Ouchi and Taro Watanabe},
title = {BannerBench: Benchmarking Vision Language Models for Multi-Ad Selection with Human Preferences},
year = {2025}
}
``` |