---
dataset_info:
  features:
  - name: img_id
    dtype: string
  - name: turn_index
    dtype: int32
  - name: source_img
    dtype: image
  - name: target_img
    dtype: image
  - name: difference_caption
    dtype: string
  splits:
  - name: train
    num_bytes: 5491618579.636
    num_examples: 2699
  download_size: 3577187403
  dataset_size: 5491618579.636
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- image-to-text
- text-to-image
- image-to-image
language:
- en
tags:
- image-editing
- computer-vision
- image-manipulation
- sequential-editing
- difference-captioning
size_categories:
- 1K<n<10K
---

# Dataset Card for METS (Multiple Edits and Textual Summaries)

## Dataset Summary

METS (Multiple Edits and Textual Summaries) is a dataset of image editing sequences with human-annotated textual summaries describing the differences between original and edited images. The dataset captures cumulative changes after sequences of manipulations, providing ground truth for image difference captioning tasks. METS contains images that have undergone 5, 10, or 15 sequential edits, with human-written summaries describing all visible differences from the original image.

## Dataset Structure

The dataset contains the following fields:

- **img_id** (string): Unique identifier for the image sequence
- **turn_index** (int32): The number of edits applied (5, 10, or 15)
- **source_img** (image): The original, unedited image
- **target_img** (image): The edited image after the specified number of manipulations
- **difference_caption** (string): Human-written summary of all differences between the source and target images
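
As an illustration of the schema, the sketch below groups records by `turn_index` (5, 10, or 15 edits). The records here are hand-made stand-ins with the same field names (in the real dataset, `source_img` and `target_img` are decoded images, and the data would normally be loaded with the Hugging Face `datasets` library rather than constructed by hand):

```python
from collections import defaultdict

# Hand-made stand-in records mirroring the dataset schema; the img_id values
# and captions below are illustrative placeholders, not real dataset entries.
records = [
    {"img_id": "seq_001", "turn_index": 5,  "difference_caption": "sky recolored"},
    {"img_id": "seq_001", "turn_index": 10, "difference_caption": "sky recolored; tree removed"},
    {"img_id": "seq_002", "turn_index": 15, "difference_caption": "fifteen cumulative edits"},
]

# Group image-sequence ids by the number of edits applied.
by_turns = defaultdict(list)
for rec in records:
    by_turns[rec["turn_index"]].append(rec["img_id"])

print(sorted(by_turns))  # -> [5, 10, 15]
```

The same grouping works on the real data, e.g. via `dataset.filter(lambda ex: ex["turn_index"] == 15)` after loading the `train` split.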

## Licensing and Attribution

This work is licensed under a Creative Commons Attribution 4.0 International License. Please cite the original paper when using this dataset.

## Citation Information

If you find this dataset useful, please consider citing our paper:

```
@inproceedings{Black2025ImProvShow,
        title={ImProvShow: Multimodal Fusion for Image Provenance Summarization},
        author={Black, Alexander and Shi, Jing and Fan, Yifei and Collomosse, John},
        booktitle={British Machine Vision Conference (BMVC)},
        year={2025}
}
```