---
dataset_info:
  features:
    - name: input_test
      dtype: image
    - name: input_gt
      dtype: image
    - name: exemplar_input
      dtype: image
    - name: exemplar_edit
      dtype: image
    - name: instruction
      dtype: string
    - name: og_description
      dtype: string
    - name: edit_description
      dtype: string
    - name: input_test_path
      dtype: string
    - name: input_gt_path
      dtype: string
    - name: exemplar_input_path
      dtype: string
    - name: exemplar_edit_path
      dtype: string
    - name: edit
      dtype: string
    - name: invert
      dtype: string
    - name: local
      dtype: bool
    - name: id
      dtype: int32
  splits:
    - name: test
      num_bytes: 4106538055.5
      num_examples: 1277
  download_size: 703956134
  dataset_size: 4106538055.5
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/train-*
task_categories:
  - image-to-image
language:
  - en
tags:
  - Exemplar
  - Editing
  - Image2Image
  - Diffusion
pretty_name: Top-Bench-X
size_categories:
  - 1K<n<10K
---

# EditCLIP: Representation Learning for Image Editing

Paper · Project Page · GitHub · ICCV 2025

## 📚 Introduction

The TOP-Bench-X dataset provides query and exemplar image pairs tailored for exemplar-based image editing. It was built by adapting the TOP-Bench dataset from InstructBrush (a Hugging Face version is also available at Aleksandar/InstructBrush-Bench). Specifically, the original training split is used to generate exemplar images, and the test split supplies the corresponding query images. In total, TOP-Bench-X comprises 1,277 samples, covering 257 distinct exemplars and 124 unique queries.
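
A minimal sketch of loading the dataset and double-checking the headline counts with 🤗 `datasets`. The repo id `Aleksandar/Top-Bench-X` is assumed from this card, and it is assumed that `exemplar_input_path` and `input_test_path` uniquely identify exemplars and queries:

```python
# Sketch: load TOP-Bench-X and verify the counts quoted above.
# Repo id and the meaning of the *_path columns are assumptions, not confirmed by the card.
from datasets import load_dataset

ds = load_dataset("Aleksandar/Top-Bench-X", split="test")

print(len(ds))                              # expected: 1277 samples
print(len(set(ds["exemplar_input_path"])))  # expected: 257 distinct exemplars
print(len(set(ds["input_test_path"])))      # expected: 124 unique queries
```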

Teaser figure of EditCLIP

## 💡 Abstract

We introduce EditCLIP, a novel representation-learning approach for image editing. Our method learns a unified representation of edits by jointly encoding an input image and its edited counterpart, effectively capturing their transformation. To evaluate its effectiveness, we employ EditCLIP to solve two tasks: exemplar-based image editing and automated edit evaluation. In exemplar-based image editing, we replace text-based instructions in InstructPix2Pix with EditCLIP embeddings computed from a reference exemplar image pair. Experiments demonstrate that our approach outperforms state-of-the-art methods while being more efficient and versatile. For automated evaluation, EditCLIP assesses image edits by measuring the similarity between the EditCLIP embedding of a given image pair and either a textual editing instruction or the EditCLIP embedding of another reference image pair. Experiments show that EditCLIP aligns more closely with human judgments than existing CLIP-based metrics, providing a reliable measure of edit quality and structural preservation.
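
To make the evaluation idea concrete, the sketch below shows how a similarity score between two edit (or edit-vs-text) embeddings could be computed. `encode_edit` and `encode_text` are hypothetical stand-ins, not the actual EditCLIP API; only the cosine-similarity comparison is taken from the abstract:

```python
# Illustrative only: EditCLIP-style scoring as cosine similarity between embeddings.
# encode_edit / encode_text are hypothetical placeholders for the real EditCLIP encoders.
import torch
import torch.nn.functional as F

def edit_similarity(emb_a: torch.Tensor, emb_b: torch.Tensor) -> float:
    """Cosine similarity between two 1-D edit/text embeddings."""
    return F.cosine_similarity(emb_a.unsqueeze(0), emb_b.unsqueeze(0)).item()

# score_vs_text     = edit_similarity(encode_edit(before, after), encode_text("make it snowy"))
# score_vs_exemplar = edit_similarity(encode_edit(before, after), encode_edit(ex_before, ex_after))
```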

## 🧠 Data explained

Each sample consists of four images (two image pairs) plus metadata; a minimal loading sketch is shown after the list:

  1. input_test – the query image (I_q) from the test split (“before” image you want to edit)
  2. input_gt – the ground-truth edited version of that query image (“after” image for the test)
  3. exemplar_input – the exemplar’s input image (I_i) from the training split (“before” image of the exemplar)
  4. exemplar_edit – the exemplar’s edited image (I_e) from the training split (“after” image of the exemplar)
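
The sketch below inspects a single sample and maps its fields to the roles listed above. The repo id is assumed from this card:

```python
# Sketch: inspect one TOP-Bench-X sample (repo id assumed from this card).
from datasets import load_dataset

ds = load_dataset("Aleksandar/Top-Bench-X", split="test")
sample = ds[0]

query     = sample["input_test"]      # I_q: "before" query image to edit
target    = sample["input_gt"]        # ground-truth "after" image for the query
ex_before = sample["exemplar_input"]  # I_i: exemplar "before" image
ex_after  = sample["exemplar_edit"]   # I_e: exemplar "after" image
print(sample["instruction"], sample["edit"], sample["local"])
```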

## 🌟 Citation

```bibtex
@article{wang2025editclip,
  title={EditCLIP: Representation Learning for Image Editing},
  author={Wang, Qian and Cvejic, Aleksandar and Eldesokey, Abdelrahman and Wonka, Peter},
  journal={arXiv preprint arXiv:2503.20318},
  year={2025}
}
```

## 💳 License

This dataset is primarily a variation of TOP-Bench; please confirm the license terms with the original TOP-Bench authors before use.