---
license: mit
language:
- en
pretty_name: KontextBench
size_categories:
- 1K<n<10K
---
# Kontext Bench
Kontext Bench is a benchmark for image editing models consisting of source images paired with image editing instructions and category tags.
The benchmark comprises 1026 unique image-prompt pairs derived from 108 base images drawn from diverse sources. It spans five core tasks: local instruction editing, global instruction editing, text editing, style reference, and character reference. We found that this scale provides a good balance between reliable human evaluation and comprehensive coverage of real-world applications.
## Benchmark Structure

```
kontext-bench/
└── test/
    ├── images/
    │   ├── 0000.jpg
    │   ├── 0001.jpg
    │   └── ...
    └── metadata.jsonl
```
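If the benchmark is hosted on the Hugging Face Hub, it can be loaded directly with the `datasets` library. A minimal sketch; the repository ID below is an assumption and should be adjusted to the actual location:

```python
from datasets import load_dataset

# Repository ID is an assumption; replace with the actual Hub location if different.
ds = load_dataset("black-forest-labs/kontext-bench", split="test")

print(ds)                     # columns: image, instruction, category, key, ...
sample = ds[0]
print(sample["instruction"])  # editing instruction for the first entry
```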
### Fields

Each line in `metadata.jsonl` contains the following fields:

- `file_name`: Path to the image file, relative to the split directory
- `instruction`: The editing instruction to apply to the image
- `category`: Category of the editing instruction
- `key`: Unique identifier for this image-instruction pair
- `img_idx`: Index of the source image
- `prompt_idx`: Index of the prompt for this image
An example entry is shown below:

```json
{
  "file_name": "images/0000.jpg",
  "instruction": "give the cat a tophat",
  "category": "Instruction Editing - Local",
  "key": "0000_01",
  "img_idx": "0000",
  "prompt_idx": "01"
}
```
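When working from a local copy, `metadata.jsonl` can also be parsed directly with the standard library. A minimal sketch, assuming the directory layout shown above:

```python
import json
from pathlib import Path

# Path assumes the local layout from the "Benchmark Structure" section.
metadata_path = Path("kontext-bench/test/metadata.jsonl")

with metadata_path.open(encoding="utf-8") as f:
    entries = [json.loads(line) for line in f]  # one JSON object per line

print(len(entries))             # expected: 1026
print(entries[0]["file_name"])  # e.g. "images/0000.jpg"
```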
## Benchmark Statistics
- Total entries: 1026
- Unique images: 108
### Category Statistics
- Character Reference: 193 entries
- Instruction Editing - Global: 262 entries
- Instruction Editing - Local: 416 entries
- Style Reference: 63 entries
- Text Editing: 92 entries
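These counts can be reproduced from `metadata.jsonl` with a simple tally; a sketch under the same local-layout assumption as above:

```python
import json
from collections import Counter
from pathlib import Path

metadata_path = Path("kontext-bench/test/metadata.jsonl")
with metadata_path.open(encoding="utf-8") as f:
    counts = Counter(json.loads(line)["category"] for line in f)

for category, n in sorted(counts.items()):
    print(f"- {category}: {n} entries")
```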
## License

The benchmark is released under the MIT License. The benchmark and the included images are made available for scientific and research purposes only. We gratefully acknowledge all contributing photographers, as well as Unsplash and Pexels, for making their visuals available to the research community.
## Citation
```bibtex
@misc{labs2025flux1kontextflowmatching,
  title={FLUX.1 Kontext: Flow Matching for In-Context Image Generation and Editing in Latent Space},
  author={Black Forest Labs and Stephen Batifol and Andreas Blattmann and Frederic Boesel and Saksham Consul and Cyril Diagne and Tim Dockhorn and Jack English and Zion English and Patrick Esser and Sumith Kulal and Kyle Lacey and Yam Levi and Cheng Li and Dominik Lorenz and Jonas Müller and Dustin Podell and Robin Rombach and Harry Saini and Axel Sauer and Luke Smith},
  year={2025},
  eprint={2506.15742},
  archivePrefix={arXiv},
  primaryClass={cs.GR},
  url={https://arxiv.org/abs/2506.15742},
}
```