ductai199x committed
Commit 699f518 · Parent(s): 0b94f9c

add readme
.gitignore ADDED
.vscode
vcms/
vpvm/
vpim/

*__pycache__*
*scratch*
README.md ADDED
---
license:
- cc-by-4.0
pretty_name: VSM
category:
- vcms (Video Camera Model Splicing)
- vpvm (Video Perceptually Visible Manipulation)
- vpim (Video Perceptually Invisible Manipulation)
category_size:
  videos: 4000
  frames: 120000
task_categories:
- standard video manipulation detection and localization
task_ids:
- video-manipulation-detection
- video-manipulation-localization
---

# Video Standard Manipulation Dataset

## Dataset Description

- **Paper:** [VideoFACT: Detecting Video Forgeries Using Attention, Scene Context, and Forensic Traces](https://openaccess.thecvf.com/content/WACV2024/papers/Nguyen_VideoFACT_Detecting_Video_Forgeries_Using_Attention_Scene_Context_and_Forensic_WACV_2024_paper.pdf)
- **Total amount of data used:** approx. 15 GB

This dataset is a collection of simple, traditional localized video manipulations such as splicing, color correction, contrast enhancement, blurring, and noise addition. It is designed for training and evaluating video manipulation detection models. We used it to train VideoFACT, a deep learning model that uses attention, scene context, and forensic traces to detect a wide variety of video forgery types, e.g., splicing, editing, deepfakes, and inpainting. The dataset is divided into three parts: Video Camera Model Splicing (VCMS), Video Perceptually Visible Manipulation (VPVM), and Video Perceptually Invisible Manipulation (VPIM). Each part contains 4000 videos; each video is 1 second (30 frames) long, has a resolution of 1920 × 1080, and is encoded with FFmpeg using the H.264 codec at CRF 23. Each part is split into training, validation, and testing sets of 3200, 200, and 600 videos, respectively. More details about the dataset can be found in the paper.

## Example

The Video Standard Manipulation (VSM) Dataset can be downloaded and used as follows:

```py
import torch
from torch.utils.data import Dataset, IterableDataset, DataLoader
import datasets
import decord
import fsspec

# Make decord return torch tensors directly
decord.bridge.set_bridge("torch")

vsm_ds = datasets.load_dataset("ductai199x/video_std_manip", "vcms", trust_remote_code=True)  # or "vpvm" or "vpim"

# Inspect the dataset structure
print(vsm_ds)

# Custom dataset wrapper to load videos faster
class VsmDsWrapper(Dataset):
    def __init__(self, ds: datasets.Dataset):
        self.ds = ds

    def __len__(self):
        return len(self.ds)

    def __getitem__(self, idx):
        example = self.ds[idx]
        vid_path = example["vid_path"]
        mask_path = example["mask_path"]
        label = example["label"]
        vid = decord.VideoReader(vid_path)[:].float() / 255.0
        if label == 1:
            mask = decord.VideoReader(mask_path)[:].float() / 255.0
        else:
            mask = torch.zeros_like(vid)
        mask = (mask.mean(3) > 0.5).float()  # collapse channels and binarize: T, H, W
        vid = vid.permute(0, 3, 1, 2)  # T, H, W, C -> T, C, H, W
        return {
            "vid": vid,
            "mask": mask,
            "label": label,
        }

# Custom iterable dataset wrapper in case you want to stream the dataset
class VsmIterDsWrapper(IterableDataset):
    def __init__(self, ds: datasets.IterableDataset):
        self.ds = ds

    def __iter__(self):
        for example in self.ds:
            vid_path = example["vid_path"]
            mask_path = example["mask_path"]
            label = example["label"]
            vid = decord.VideoReader(fsspec.open(vid_path, "rb").open())[:].float() / 255.0
            if label == 1:
                mask = decord.VideoReader(fsspec.open(mask_path, "rb").open())[:].float() / 255.0
            else:
                mask = torch.zeros_like(vid)
            mask = (mask.mean(3) > 0.5).float()  # collapse channels and binarize: T, H, W
            vid = vid.permute(0, 3, 1, 2)  # T, H, W, C -> T, C, H, W
            yield {
                "vid": vid,
                "mask": mask,
                "label": label,
            }

# Using a DataLoader is highly recommended to load the dataset faster
vsm_dl = DataLoader(VsmDsWrapper(vsm_ds["train"]), batch_size=2, num_workers=14, persistent_workers=True)
for batch in vsm_dl:
    vid = batch["vid"]
    mask = batch["mask"]
    label = batch["label"]
    print(vid.shape, mask.shape, label)
```
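
The `VsmIterDsWrapper` defined above is meant for streaming. Here is a minimal usage sketch, continuing from the example above and assuming this loading script works with the standard `streaming=True` flag of `datasets.load_dataset`:

```py
# Streaming sketch: iterate without downloading the full dataset first.
# Assumes streaming=True is supported by this loading script.
vsm_stream = datasets.load_dataset(
    "ductai199x/video_std_manip", "vcms", streaming=True, trust_remote_code=True
)
stream_dl = DataLoader(VsmIterDsWrapper(vsm_stream["train"]), batch_size=2)
for batch in stream_dl:
    print(batch["vid"].shape, batch["mask"].shape, batch["label"])
    break  # just a smoke test
```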

## Dataset Structure

### Data Instances

Some frame examples from this dataset:

#### VCMS
![vcms](vcms_example.jpg)
![vcms_mask](vcms_example_mask.jpg)

#### VPVM
![vpvm](vpvm_example.jpg)
![vpvm_mask](vpvm_example_mask.jpg)

#### VPIM
![vpim](vpim_example.jpg)
![vpim_mask](vpim_example_mask.jpg)

### Data Fields

The data fields are the same across all splits; an illustrative record is shown after the list.

- **vid_path** (str): Path to the video file.
- **mask_path** (str): Path to the mask file. This will be an empty string if the video is not manipulated.
- **label** (int): 1 if the video is manipulated, 0 otherwise.
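
For illustration, a single example might look like the following (the paths below are hypothetical, not actual dataset contents):

```py
# Hypothetical example record (paths are made up for illustration):
example = {
    "vid_path": "/path/to/vcms/train/videos/0001.mp4",  # local path after download
    "mask_path": "/path/to/vcms/train/masks/0001.mp4",  # "" for authentic videos
    "label": 1,  # 1 = manipulated, 0 = authentic
}
```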

### Data Splits

Each part (vcms, vpvm, vpim) contains 4000 videos; each video is 1 second (30 frames) long, has a resolution of 1920 × 1080, and is encoded with FFmpeg using the H.264 codec at CRF 23. Each part is split into training, validation, and testing sets of 3200, 200, and 600 videos, respectively.
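
A quick sanity check of the split sizes through the `datasets` API (a minimal sketch, reusing `vsm_ds` from the example above):

```py
# Expected sizes per part: 3200 (train), 200 (validation), 600 (test).
for split_name, split in vsm_ds.items():
    print(split_name, len(split))
```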

## Dataset Creation

Each part in this dataset was made by applying a different set of standard manipulations to videos from the Video-ACID dataset. All three parts were made using a common procedure. First, we created binary ground-truth masks specifying the tamper regions for each video. These tamper regions correspond to multiple randomly chosen shapes with random sizes, orientations, and placements within a frame. Fake videos were created by choosing a mask, then manipulating the content within the tamper region. Original videos were retained to form the set of authentic videos. All real and manipulated video frames were re-encoded as H.264 videos using FFmpeg at 30 FPS with a constant rate factor (CRF) of 23.
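
The generation code is not included in this card, but a minimal sketch of producing one random-shape binary mask frame, in the spirit of the procedure above, might look like the following (OpenCV/NumPy; the shape types, counts, and size ranges are all assumptions):

```py
import cv2
import numpy as np

rng = np.random.default_rng(0)

def random_shape_mask(h: int = 1080, w: int = 1920, n_shapes: int = 3) -> np.ndarray:
    """Draw a few randomly sized, oriented, and placed filled ellipses into a binary mask."""
    mask = np.zeros((h, w), dtype=np.uint8)
    for _ in range(n_shapes):
        center = (int(rng.integers(0, w)), int(rng.integers(0, h)))
        axes = (int(rng.integers(40, 300)), int(rng.integers(40, 300)))
        angle = float(rng.uniform(0, 360))
        cv2.ellipse(mask, center, axes, angle, 0, 360, color=255, thickness=-1)
    return mask  # 255 inside the tamper region, 0 elsewhere
```

The re-encoding step corresponds to a standard FFmpeg invocation along the lines of `ffmpeg -framerate 30 -i frame_%04d.png -c:v libx264 -crf 23 out.mp4`, though the exact flags used are not specified in this card.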

Each part in this dataset corresponds to a different manipulation type. The Video Camera Model Splicing (VCMS) part contains videos with content spliced in from other videos. The Video Perceptually Visible Manipulation (VPVM) part contains content modified using common editing operations (e.g., contrast enhancement, smoothing, sharpening, blurring) applied with strengths that can be visually detected. The Video Perceptually Invisible Manipulation (VPIM) part was made in a similar fashion to VPVM, but with much smaller manipulation strengths, creating challenging forgeries. For each part, we made 3200 videos (96,000 frames) for training, 200 videos (6,000 frames) for validation, and 600 videos (18,000 frames) for testing. More details can be found in the paper.
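
To make the VPVM/VPIM distinction concrete, here is a sketch of applying one such operation (Gaussian blur) inside the tamper region at two strengths; the kernel sizes are assumptions, not the authors' parameters:

```py
import cv2
import numpy as np

def apply_masked_blur(frame: np.ndarray, mask: np.ndarray, ksize: int) -> np.ndarray:
    """Blur only inside the tamper region; the rest of the frame is untouched."""
    blurred = cv2.GaussianBlur(frame, (ksize, ksize), 0)
    region = (mask > 0)[..., None]  # H, W -> H, W, 1 boolean
    return np.where(region, blurred, frame)

# visible = apply_masked_blur(frame, mask, ksize=31)  # strong, VPVM-like edit
# subtle = apply_masked_blur(frame, mask, ksize=3)    # weak, VPIM-like edit
```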

## Additional Information

### Licensing Information

All parts of this dataset are licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).

### Citation Information

```
@InProceedings{Nguyen_2024_WACV,
    author    = {Nguyen, Tai D. and Fang, Shengbang and Stamm, Matthew C.},
    title     = {VideoFACT: Detecting Video Forgeries Using Attention, Scene Context, and Forensic Traces},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {8563-8573}
}
```

### Contribution

We thank the authors of the [Video-ACID dataset](https://ieee-dataport.org/documents/video-acid) for their work.

### Contact

For any questions, please contact Tai Nguyen at [@ductai199x](https://github.com/ductai199x) or by [email](mailto:[email protected]).
vcms_example.jpg ADDED

Git LFS Details

  • SHA256: 8c0869053a31e9f73ed2def96ebae8699150f54bb034a68bfccc4fcf278bfe38
  • Pointer size: 130 Bytes
  • Size of remote file: 23.4 kB
vcms_example_mask.jpg ADDED

Git LFS Details

  • SHA256: ea6134b4fe052384e47c56c2cf8425cf06a1a4fac5d1f9d18e01ed33036e00b3
  • Pointer size: 129 Bytes
  • Size of remote file: 5.34 kB
vpim_example.jpg ADDED

Git LFS Details

  • SHA256: 0f345d2df1dce1b7a5e34e77c20ded63946ab4af553251b98409cbeac69045d9
  • Pointer size: 130 Bytes
  • Size of remote file: 20.1 kB
vpim_example_mask.jpg ADDED

Git LFS Details

  • SHA256: bd9d38b5665ffe98a7bfb6574a59e6bae408161dce20ba8f6ebc52818149c3dd
  • Pointer size: 129 Bytes
  • Size of remote file: 5.44 kB
vpvm_example.jpg ADDED

Git LFS Details

  • SHA256: 060e41798c71535d881c9d5370e406ffe025cfeb64b4b0eeda3955a272f48140
  • Pointer size: 130 Bytes
  • Size of remote file: 31.5 kB
vpvm_example_mask.jpg ADDED

Git LFS Details

  • SHA256: 7e9e73a022c9ddc1f08967147d7afe15e33c72e6f2ae280faf6da9ac5df1ea46
  • Pointer size: 129 Bytes
  • Size of remote file: 1.76 kB