---
dataset_info:
  features:
    - name: img_id
      dtype: string
    - name: turn_index
      dtype: int32
    - name: source_img
      dtype: image
    - name: target_img
      dtype: image
    - name: difference_caption
      dtype: string
  splits:
    - name: train
      num_bytes: 5491618579.636
      num_examples: 2699
  download_size: 3577187403
  dataset_size: 5491618579.636
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-4.0
task_categories:
  - image-to-text
  - text-to-image
  - image-to-image
language:
  - en
tags:
  - image-editing
  - computer-vision
  - image-manipulation
  - sequential-editing
  - difference-captioning
size_categories:
  - 1K<n<10K
---

# Dataset Card for METS (Multiple Edits and Textual Summaries)

## Dataset Summary

METS (Multiple Edits and Textual Summaries) is a dataset of image editing sequences with human-annotated textual summaries describing the differences between original and edited images. The dataset captures cumulative changes after sequences of manipulations, providing ground truth for image difference captioning tasks. METS contains images that have undergone 5, 10, or 15 sequential edits, with human-written summaries describing all visible differences from the original image.
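
The dataset can be loaded with the 🤗 `datasets` library. Below is a minimal sketch; the repository id `AlexBlck/mets` is an assumption based on where this card is hosted and may need adjusting.

```python
from datasets import load_dataset

# Repository id is assumed from this card's location; adjust if needed.
ds = load_dataset("AlexBlck/mets", split="train")

example = ds[0]
print(example["img_id"], example["turn_index"])
print(example["difference_caption"])        # human-written summary of the edits
example["source_img"].save("source.png")    # decoded as a PIL.Image
example["target_img"].save("target.png")
```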

## Dataset Structure

The dataset contains the following fields:

- `img_id` (string): unique identifier for the image sequence
- `turn_index` (int32): the number of edits applied (5, 10, or 15)
- `source_img` (image): the original, unedited image
- `target_img` (image): the edited image after the specified number of manipulations
- `difference_caption` (string): human-written summary of all differences between the source and target images
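
Because every image sequence is captioned at several edit depths, it is often useful to work with a single `turn_index` or to group captions per sequence. A hedged sketch using the standard `datasets` API (repository id assumed as above):

```python
from collections import defaultdict
from datasets import load_dataset

ds = load_dataset("AlexBlck/mets", split="train")  # repository id is an assumption

# Select only the summaries written after 15 sequential edits.
ds_15 = ds.filter(lambda ex: ex["turn_index"] == 15)
print(len(ds_15), "examples with 15 edits")

# Group captions by image sequence; drop the image columns first so that
# iterating does not decode every image.
captions = defaultdict(dict)
for ex in ds.remove_columns(["source_img", "target_img"]):
    captions[ex["img_id"]][ex["turn_index"]] = ex["difference_caption"]
```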

## Licensing and Attribution

This work is licensed under a Creative Commons Attribution 4.0 International License. Please cite the original paper when using this dataset.

## Citation Information

If you find this dataset useful, please consider citing our paper:

@inproceedings{Black2025ImProvShow,
  title     = {ImProvShow: Multimodal Fusion for Image Provenance Summarization},
  author    = {Black, Alexander and Shi, Jing and Fan, Yifei and Collomosse, John},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2025}
}