AlexBlck committed
Commit dfd8f68 · verified · 1 Parent(s): b4bd6bc

Update README.md

Files changed (1): README.md (+48 −0)
README.md CHANGED

  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- image-to-text
- text-to-image
- image-to-image
language:
- en
tags:
- image-editing
- computer-vision
- image-manipulation
- sequential-editing
- difference-captioning
size_categories:
- 1K<n<10K
---

# Dataset Card for METS (Multiple Edits and Textual Summaries)

## Dataset Summary

METS (Multiple Edits and Textual Summaries) is a dataset of image-editing sequences paired with human-annotated textual summaries describing the differences between the original and edited images. Each image has undergone 5, 10, or 15 sequential edits, and each human-written summary describes all differences that remain visible relative to the original, providing ground truth for image difference captioning.
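
A minimal loading sketch with the `datasets` library, assuming the dataset is published on the Hugging Face Hub (the repository id `AlexBlck/mets` below is a placeholder, not confirmed by this card):

```python
from datasets import load_dataset

# "AlexBlck/mets" is a placeholder repository id; substitute the real Hub path.
ds = load_dataset("AlexBlck/mets", split="train")

print(ds.column_names)  # expected per this card: img_id, turn_index, source_img, target_img, difference_caption
print(ds[0])            # inspect one source/target pair and its difference caption
```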

## Dataset Structure

The dataset contains the following fields (a usage sketch follows the list):

- **img_id** (str): Unique identifier for the image sequence, zero-padded to four digits (e.g., "0056")
- **turn_index** (int): The number of edits applied (5, 10, or 15)
- **source_img** (str): Path to the original, unedited image
- **target_img** (str): Path to the edited image after the specified number of manipulations
- **difference_caption** (str): Human-written one-sentence summary of all differences between the source and target images
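
As an illustration of how this schema might be consumed, here is a sketch that selects only the 15-edit pairs and prints their captions (again using the placeholder repository id `AlexBlck/mets`):

```python
from datasets import load_dataset

ds = load_dataset("AlexBlck/mets", split="train")  # placeholder repo id

# Keep only the pairs whose target image is 15 edits away from the source.
fifteen = ds.filter(lambda ex: ex["turn_index"] == 15)

for ex in fifteen.select(range(3)):  # first three records, for a quick look
    print(f'{ex["img_id"]}: {ex["source_img"]} -> {ex["target_img"]}')
    print("  caption:", ex["difference_caption"])
```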

## Licensing and Attribution

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Please cite the original paper when using this dataset.

## Citation Information

If you find this dataset useful, please consider citing our paper:

```bibtex
@inproceedings{Black2025ImProvShow,
  title     = {ImProvShow: Multimodal Fusion for Image Provenance Summarization},
  author    = {Black, Alexander and Shi, Jing and Fan, Yifei and Collomosse, John},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2025}
}
```