---
license: mit
size_categories:
- 1M<n<10M
---

# Shot Boundary Detection

## Dataset Overview
This dataset supports research in shot boundary detection (SBD) by providing over 3.4 million uniformly structured 61-frame video clips. Each clip is centered on a key frame (the 31st), labeled as:
- C (Cut): a direct shot boundary,
- T (Transition): a gradual transition (e.g., fade, dissolve),
- E (Empty): no boundary present.
Data is sourced from AutoShot, ClipShots, and crawled Pexels videos, with both real and synthetically generated examples. Each clip includes metadata fields such as `label`, `origin`, `video_id`, and `video` (the clip's file path). The dataset is split into training (83%) and testing (17%) sets, ensuring no overlap in source videos between the splits.

Preprocessing involved strict windowing, deduplication, and quality checks. While the dataset is comprehensive, users should be aware of possible annotation noise, synthetic-data biases, and FFMPEG artifacts.

Evaluation is recommended using Precision, Recall, F1-score, and confusion matrices.
## Dataset Loading
```python
# Small Example
from datasets import load_dataset

dataset = load_dataset(
    "it-just-works/shot-boundary-detection",
    streaming=True,
    split="train",
)

# Stream samples one by one
for example in dataset:
    ...
```
```python
# Test the streaming of the videos one by one
from IPython.display import Video, display
import tempfile

# Loop to verify videos one at a time
for i, example in enumerate(dataset):
    # Write the raw MP4 bytes to a temporary file so it can be displayed
    with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp:
        tmp.write(example["mp4"])
        video_path = tmp.name

    print(f"\nVideo {i}")
    display(Video(video_path, embed=True))

    # Wait for user confirmation to proceed
    user_input = input("Press [Enter] to see next video, or type 'q' to quit: ")
    if user_input.lower() == "q":
        break
```
## Dataset Datasheet

Inspiration: *Datasheets for Datasets* (Gebru et al., 2018)

### Motivation for Dataset Creation

#### Why was the dataset created?
The primary motivation for creating this dataset is to advance research and development in automatic shot boundary detection (SBD). It aims to provide a centralized and consistently formatted collection of video clips, aggregated from diverse sources, to facilitate the training and robust evaluation of SBD models. This addresses the common challenge of fragmented and inconsistently formatted data in the SBD domain.
### Dataset Composition

#### What are the instances?
Each instance is a video clip consisting of 61 consecutive frames. The 31st frame (1-indexed) is the frame of interest for classification, with the 30 preceding and 30 succeeding frames providing temporal context.
Instances are categorized based on the event occurring at or around the 31st frame:
- Direct Cut (C): A hard cut occurs precisely between the 30th and 31st frames. The 31st frame is the first frame of the new shot.
- Transition (T): The 31st frame is part of a gradual visual transition (e.g., fade, dissolve, wipe) between two shots. This includes the start, middle, or end of the transition.
- Empty (E): No shot boundary (cut or transition) occurs at or immediately around the 31st frame; it represents a continuation of the current shot.
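
To make the frame indexing concrete, here is a minimal sketch of grabbing the labeled key frame with OpenCV, assuming a clip has already been materialized as a local MP4 file (the path `clip.mp4` is a placeholder):

```python
import cv2  # pip install opencv-python

# Open one 61-frame clip; "clip.mp4" is a placeholder for a downloaded clip path.
cap = cv2.VideoCapture("clip.mp4")
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
assert frame_count == 61, f"expected 61 frames, got {frame_count}"

# The 31st frame (1-indexed) is the frame of interest, i.e. index 30 when 0-indexed.
cap.set(cv2.CAP_PROP_POS_FRAMES, 30)
ok, key_frame = cap.read()
cap.release()

if ok:
    print("key frame shape:", key_frame.shape)  # (height, width, 3), BGR order
```
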
#### Are relationships between instances made explicit in the data?

Yes.

- Instances derived from the same original data source (e.g., Pexels, ClipShots) are identifiable via the `origin` field.
- All 61-frame clips extracted from the same long-form video share a common `video_id`. This allows for grouping clips that originate from the same continuous piece of content.
#### How many instances of each type are there?

The dataset is split into training and testing sets. The table below details the distribution of instances per label across these splits.

| Label | Train Samples | Test Samples | Total Samples | Train % | Test % |
|---|---|---|---|---|---|
| Transition (T) | 1,790,069 | 385,039 | 2,175,108 | 82.3% | 17.7% |
| Empty (E) | 560,134 | 92,917 | 653,051 | 85.8% | 14.2% |
| Cut (C) | 499,212 | 105,417 | 604,629 | 82.6% | 17.4% |
| Total | 2,849,415 | 583,373 | 3,432,788 | 83.0% | 17.0% |
**Note on Splits:** The target was an 80/20 train/test split. Deviations are primarily due to the constraint that all clips originating from the same `video_id` must belong entirely to either the training or the testing set. This prevents data leakage. Data de-duplication and varying numbers of extractable clips per video also contributed to the final percentages.
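
If you re-split the data yourself, the same leakage constraint can be enforced with a group-aware splitter. Below is a minimal sketch using scikit-learn's `GroupShuffleSplit`; the per-clip lists are hypothetical stand-ins for metadata read from the dataset:

```python
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical per-clip metadata (one entry per 61-frame clip).
clips = ["c0.mp4", "c1.mp4", "c2.mp4", "c3.mp4"]
labels = ["C", "E", "T", "E"]
video_ids = ["vid_a", "vid_a", "vid_b", "vid_b"]

# Group-aware split: every clip sharing a video_id lands on the same side.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(clips, labels, groups=video_ids))

print("train videos:", [video_ids[i] for i in train_idx])
print("test videos:", [video_ids[i] for i in test_idx])
```
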
#### What data does each instance consist of?

Each instance is described by the following fields in its associated metadata:

- `origin`: (String) Identifies the original data source, sub-category (if any), and its designated split (train/test). The format is `[main_origin]:([subcategory_origin]):[split]` (e.g., `pexels:synthetic:train`, `clipshot:test`).
- `video_id`: (String) The identifier (e.g., filename) of the full-length source video from which the clip was extracted. Multiple clips can share the same `video_id`.
- `video`: (String) A path to the 61-frame video clip file. This can be used as a unique identifier for each clip and to load the video data.
- `label`: (Char) The ground truth label for the 31st frame: 'C' (Direct Cut), 'T' (Transition), or 'E' (Empty).
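
For illustration, here is a small parser for the `origin` format described above; the returned key names are arbitrary choices for this sketch:

```python
def parse_origin(origin: str) -> dict:
    """Split 'pexels:synthetic:train' or 'clipshot:test' into its parts."""
    parts = origin.split(":")
    if len(parts) == 3:        # main origin, sub-category, split
        main, sub, split = parts
    elif len(parts) == 2:      # no sub-category
        main, split = parts
        sub = None
    else:
        raise ValueError(f"unexpected origin format: {origin!r}")
    return {"main_origin": main, "subcategory_origin": sub, "split": split}

print(parse_origin("pexels:synthetic:train"))
# {'main_origin': 'pexels', 'subcategory_origin': 'synthetic', 'split': 'train'}
print(parse_origin("clipshot:test"))
# {'main_origin': 'clipshot', 'subcategory_origin': None, 'split': 'test'}
```
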
#### Is everything included or does the data rely on external resources?
The dataset, comprising the 61-frame clips and their associated metadata, is self-contained. Access to the original full-length videos from which these clips were extracted is not provided with this dataset and would need to be sourced independently if required for further analysis (e.g., to understand broader context beyond the 61-frame window).
#### Are there recommended data splits or evaluation measures?

**Data Splits:** Yes, the dataset is provided with pre-defined training and testing splits. The distribution of these splits by `origin` is detailed below.
| Origin | Train Samples | Test Samples | Total Samples | Train % | Test % |
|---|---|---|---|---|---|
| pexels:synthetic | 1,693,011 | 421,880 | 2,114,891 | 80.1% | 19.9% |
| clipshot | 728,056 | 50,508 | 778,564 | 93.5% | 6.5% |
| pexels:videos | 280,802 | 71,145 | 351,947 | 79.8% | 20.2% |
| clipshot:only_gradual | 99,415 | 27,955 | 127,370 | 78.1% | 21.9% |
| pexels:flashing | 35,871 | 8,928 | 44,799 | 80.1% | 19.9% |
| autoshot:videos | 7,849 | 2,079 | 9,928 | 79.1% | 20.9% |
| autoshot:ads | 4,411 | 878 | 5,289 | 83.4% | 16.6% |
| Grand Total | 2,849,415 | 583,373 | 3,432,788 | 83.0% | 17.0% |
**Evaluation Measures:** Standard evaluation metrics for shot boundary detection are recommended, including:
- Precision, Recall, and F1-score (calculated per class: C, T, E).
- Macro-averaged and/or micro-averaged Precision, Recall, and F1-score across all classes.
- Confusion matrix.
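
A minimal sketch of computing these measures with scikit-learn; `y_true` and `y_pred` below are hypothetical per-clip labels that would come from the test split and your model:

```python
from sklearn.metrics import (classification_report, confusion_matrix,
                             precision_recall_fscore_support)

labels = ["C", "T", "E"]

# Hypothetical ground-truth and predicted labels for a handful of clips.
y_true = ["C", "T", "E", "E", "T", "C"]
y_pred = ["C", "T", "E", "T", "T", "E"]

# Per-class and macro-averaged precision, recall, and F1.
print(classification_report(y_true, y_pred, labels=labels, digits=3))

# Micro-averaged precision/recall/F1 across all classes.
print(precision_recall_fscore_support(y_true, y_pred, labels=labels, average="micro"))

# Confusion matrix: rows are true labels, columns are predicted labels.
print(confusion_matrix(y_true, y_pred, labels=labels))
```
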
### Data Collection Process

#### How was the data collected?
Data was aggregated from multiple sources:
- Publicly Available Datasets (collected by following the instructions in their respective repositories):
  - AutoShot: https://github.com/wentaozhu/AutoShot
  - ClipShots: https://github.com/Tangshitao/ClipShots
- Crawled Data:
  - Pexels: Videos were crawled from https://www.pexels.com/videos/ to serve as a basis for synthetic data generation and for the extraction of 'Empty' instances.
#### How was the data associated with each instance acquired (labeling process)?
Full-length videos were first collected from the sources. Then, 61-frame clips were extracted and labeled based on the following procedures:
Direct Cuts (C):
- Obtained directly from existing annotations in datasets like AutoShot and ClipShots.
- Synthetically generated by concatenating two randomly selected, distinct videos from Pexels (ensuring matching resolution and FPS). The cut is placed at the concatenation point (between frame 30 and 31 of the clip).

Transitions (T):
- Obtained directly from existing annotations in source datasets (e.g., from `clipshot:only_gradual`).
- Synthetically generated using FFMPEG to create visual transitions (e.g., fades, dissolves) between two Pexels videos. Transition types were randomized, and their durations were sampled from a normal distribution with a mean of 0.5 seconds, clamped to a minimum of 0.2 seconds and a maximum of 0.8 seconds. The 31st frame of the clip falls within such a transition (a sketch of this recipe follows the list below).

Empty (E):
- Extracted from frames neighboring annotated cuts/transitions in existing datasets, ensuring these frames were not part of the boundary event itself.
- Generated by chunking Pexels videos, which are typically single-shot. Clips were sampled from interior segments of these videos, sufficiently far from potential start/end fades or implicit cuts.
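
To illustrate the synthetic transition recipe referenced in the Transitions item above, here is a hedged example; it is not the authors' generation script, and the choice of FFMPEG's `xfade` filter, the standard deviation, the offset, and the file names are all assumptions:

```python
import random
import subprocess

# Sample a duration from a normal distribution (mean 0.5 s) and clamp to [0.2, 0.8] s.
# The standard deviation used here is an assumption; the dataset card does not state it.
duration = min(max(random.gauss(0.5, 0.15), 0.2), 0.8)
transition = random.choice(["fade", "dissolve", "wipeleft"])  # illustrative subset

# Cross-fade two hypothetical Pexels clips (same resolution and FPS) with xfade.
# `offset` is the time (in seconds) at which the transition starts in the first input.
subprocess.run([
    "ffmpeg", "-y", "-i", "pexels_a.mp4", "-i", "pexels_b.mp4",
    "-filter_complex",
    f"xfade=transition={transition}:duration={duration:.2f}:offset=2.0",
    "-an", "transition_clip.mp4",
], check=True)
```
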
#### Does the dataset contain all possible instances?
No, this dataset is a sample.
- Some potential clips were discarded during processing due to technical issues (e.g., FFMPEG errors leading to incorrect frame counts, inability to extract a full 61-frame window around an annotation).
- The source datasets may not have exhaustive annotations for all possible shot boundaries within their original videos.
- The synthetic generation process is inherently a sampling of possible combinations.
#### If the dataset is a sample, then what is the population it was drawn from?
The conceptual population includes all possible 61-frame clips that could be extracted from the collected source videos (public datasets and Pexels) and correctly labeled as Direct Cut, Transition, or Empty according to our definitions. The current dataset represents a substantial sample, with selection guided by the availability of existing annotations and specific synthetic generation procedures.
### Data Preprocessing

#### What preprocessing/cleaning was done?
- Windowing: The primary preprocessing step involved ensuring that each target annotated frame (marking a cut or transition) could serve as the center of a 61-frame clip. Annotations occurring within the first 30 frames or last 30 frames of a source video were excluded.
- Frame Count Validation: Clips not resulting in exactly 61 frames after extraction (e.g., due to FFMPEG issues or reaching video ends) were discarded.
- De-duplication: Efforts were made to identify and remove duplicate or near-duplicate clips, particularly within synthetically generated data.
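
A minimal sketch of the windowing check described above, assuming 0-indexed frame numbers and a hypothetical list of annotated boundary frames per source video:

```python
HALF_WINDOW = 30  # 30 frames of context on each side of the key frame

def valid_center_frames(boundary_frames, total_frames):
    """Keep only annotations that can sit at the center of a full 61-frame clip."""
    return [
        f for f in boundary_frames
        if f - HALF_WINDOW >= 0 and f + HALF_WINDOW < total_frames
    ]

# Example: a 200-frame video with boundaries annotated at frames 10, 90, and 185.
print(valid_center_frames([10, 90, 185], total_frames=200))  # -> [90]
```
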
Was the "raw" data saved in addition to the preprocessed/cleaned data?
The full-length source videos (prior to clipping and other preprocessing) are archived privately but are not part of this dataset distribution. The "raw" data for this dataset are the 61-frame clips.
#### Is the preprocessing software available?
The specific scripts used for data collection, preprocessing, and clip extraction are not publicly released at this time. Core operations relied on standard tools like FFMPEG and Python libraries for video manipulation.
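
Although the original scripts are not released, a clip-extraction step of this kind can be sketched with FFMPEG's `trim` filter; the function, file names, and frame indices below are assumptions for illustration, not the authors' pipeline:

```python
import subprocess

def extract_61_frame_clip(src: str, dst: str, center_frame: int) -> None:
    """Cut frames [center-30, center+30] (0-indexed, inclusive) into a new file."""
    start = center_frame - 30
    end = center_frame + 30
    # trim's end_frame is exclusive, so add 1; setpts resets the output timestamps.
    vf = f"trim=start_frame={start}:end_frame={end + 1},setpts=PTS-STARTPTS"
    subprocess.run(["ffmpeg", "-y", "-i", src, "-vf", vf, "-an", dst], check=True)

# Hypothetical usage: a boundary annotated at frame 500 of a source video.
extract_61_frame_clip("source_video.mp4", "clip_00001.mp4", center_frame=500)
```
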
### Known Limitations, Noise, and Biases

#### Are there any errors, sources of noise, or redundancies in the dataset?
Users should be aware of the following potential issues:
- FFMPEG Processing Artifacts: Despite filtering, minor frame inaccuracies (e.g., dropped/duplicated frames, slight temporal shifts) from FFMPEG processing might persist in a small number of clips.
- Annotation Quality of Source Data: Annotations from the original public datasets are used as-is and may contain inherent inaccuracies (e.g., cut positions that are off by a frame, boundaries annotated very close together, or misclassification between cut and transition types). These limitations are inherited by this dataset.
- Synthetic Data Characteristics:
- Synthetically generated cuts from Pexels videos might be statistically different or potentially easier to detect than cuts found in more complex, edited content.
- Synthetic transitions are limited to those generatable by FFMPEG and may not cover the full spectrum of complex real-world transitions.
- Labeling of 'Empty' Clips: 'Empty' clips generated from neighbors of annotated boundaries could be mislabeled if the original annotations were imprecise (e.g., if two distinct cuts were very close, an 'Empty' clip taken between them might inadvertently contain an unannotated boundary).
- Redundancy: While de-duplication was performed, some level of visual redundancy might exist, especially within the large volume of synthetically generated Pexels data or from repetitive content in source videos.
- Source Bias: The distribution of content types (e.g., stock footage from Pexels, specific genres from AutoShot/ClipShots) influences the overall characteristics of the dataset. Models trained on this dataset may reflect these biases.