# Dataset Card for KITTI Flow 2012

## Dataset Description
The KITTI Flow 2012 dataset is a real-world benchmark dataset designed to evaluate optical flow estimation algorithms in the context of autonomous driving. Introduced in the seminal paper "Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite" by Geiger et al., it provides challenging sequences recorded from a moving platform in urban, residential, and highway scenes.
Optical flow refers to the apparent motion of brightness patterns in image sequences, used to estimate the motion of objects and the camera in the scene. It is a fundamental problem in computer vision with applications in visual odometry, object tracking, motion segmentation, and autonomous navigation.
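As a concrete illustration, a classical dense method such as Farnebäck's algorithm can estimate optical flow between two consecutive frames. The sketch below uses OpenCV with placeholder file paths for a KITTI frame pair; it is illustrative and not part of the dataset loader:

```python
import cv2

# Placeholder paths to two consecutive grayscale frames (t0 and t1)
frame_t0 = cv2.imread("000000_10.png", cv2.IMREAD_GRAYSCALE)
frame_t1 = cv2.imread("000000_11.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow with the Farneback method; returns an (H, W, 2) array
# holding the (u, v) displacement of every pixel from t0 to t1
flow = cv2.calcOpticalFlowFarneback(
    frame_t0, frame_t1, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print(f"Mean flow magnitude: {magnitude.mean():.2f} px")
```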
KITTI Flow 2012 contributes to optical flow research by providing:
- Real-world stereo image pairs captured at two consecutive timepoints (t0 and t1).
- Sparse ground-truth optical flow maps at t0, annotated using 3D laser scans.
- Calibration files to relate image pixels to 3D geometry.
- Disparity ground truth and stereo imagery for related benchmarking.
The dataset enables fair and standardized comparison of optical flow algorithms and is widely adopted for benchmarking performance under real driving conditions.
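The sparse ground-truth flow maps follow the KITTI devkit encoding: 16-bit, 3-channel PNGs in which the first two channels store the flow components scaled by 64 and offset by 2¹⁵, and the third channel is a validity mask. A minimal decoder sketch (assuming OpenCV, which loads PNG channels in reversed BGR order):

```python
import cv2
import numpy as np

def read_kitti_flow(path):
    """Decode a KITTI ground-truth flow PNG (16-bit, 3 channels).

    Per the KITTI devkit convention, the stored channels are
    (u, v, valid); OpenCV loads PNGs in reversed (BGR) channel
    order, so the validity mask comes first here.
    """
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED).astype(np.float64)
    valid = img[:, :, 0] > 0                  # validity mask
    flow_u = (img[:, :, 2] - 2 ** 15) / 64.0  # horizontal flow (px)
    flow_v = (img[:, :, 1] - 2 ** 15) / 64.0  # vertical flow (px)
    return flow_u, flow_v, valid
```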
## Dataset Source
- Homepage: http://www.cvlibs.net/datasets/kitti/eval_stereo_flow.php?benchmark=flow
- License: Creative Commons Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0)
- Paper: Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite. CVPR 2012.
## Dataset Structure
The dataset is organized into the following folders, each representing a specific modality or annotation:
| Folder | Description |
|---|---|
| `image_0/` | Grayscale images from the left camera at two timepoints. `<id>_10.png` is the reference frame (t0), `<id>_11.png` is the subsequent frame (t1). |
| `image_1/` | Grayscale images from the right camera, same timestamps as `image_0/`. |
| `colored_0/` | Color images from the left camera at t0 and t1. |
| `colored_1/` | Color images from the right camera. |
| `disp_noc/` | Disparity maps at t0 for non-occluded pixels. |
| `disp_occ/` | Disparity maps at t0 for all pixels, including occlusions. |
| `disp_refl_noc/` | Disparity maps for reflective surfaces, non-occluded only. |
| `disp_refl_occ/` | Disparity maps for reflective surfaces, including occluded regions. |
| `flow_noc/` | Sparse ground-truth optical flow maps for non-occluded pixels between t0 and t1. |
| `flow_occ/` | Sparse ground-truth optical flow maps including occluded regions. |
| `calib/` | Calibration files for each sample. Contains projection matrices: P0 (left grayscale), P1 (right grayscale), P2 (left color), P3 (right color). |
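The disparity ground truth is stored as single-channel 16-bit PNGs in the KITTI devkit encoding: the stored value divided by 256 gives the disparity in pixels, and 0 marks pixels without ground truth. A minimal decoder sketch under that assumption:

```python
import cv2
import numpy as np

def read_kitti_disparity(path):
    """Decode a KITTI ground-truth disparity PNG (16-bit, 1 channel)."""
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED).astype(np.float64)
    valid = img > 0          # 0 marks pixels without ground truth
    disparity = img / 256.0  # stored value is disparity * 256
    return disparity, valid
```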
**Notes on Filenames:**
- `<id>_10.png` = timepoint t0 (reference frame)
- `<id>_11.png` = timepoint t1 (subsequent frame)
- `<id>.txt` in `calib/` contains the camera projection matrices (3×4) used for reconstruction; a parsing sketch follows below.
- The testing split does not include ground truth.
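The projection matrices relate the stereo pair geometrically: with focal length f (in pixels) and baseline B (in meters), depth = f · B / disparity. A sketch of parsing a calib file and applying this relation (the filename is a placeholder; the baseline is recovered from the translation terms of P0 and P1):

```python
import numpy as np

def load_calib(path):
    """Parse a KITTI calib file into named 3x4 projection matrices."""
    matrices = {}
    with open(path) as f:
        for line in f:
            name, sep, values = line.partition(":")
            name = name.strip()
            if sep and name.startswith("P"):
                matrices[name] = np.array(
                    values.split(), dtype=np.float64
                ).reshape(3, 4)
    return matrices

calib = load_calib("000000.txt")  # placeholder filename
P0, P1 = calib["P0"], calib["P1"]
focal = P0[0, 0]                          # focal length in pixels
baseline = (P0[0, 3] - P1[0, 3]) / focal  # stereo baseline in meters

# depth (m) = focal (px) * baseline (m) / disparity (px), valid pixels only
disparity = np.full((370, 1226), 50.0)    # dummy disparity map for the sketch
depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = focal * baseline / disparity[valid]
```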
## Example Usage
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("randall-lab/kitti-flow2012", split="train", trust_remote_code=True)
example = dataset[0]

# Grayscale images (left and right cameras)
left_gray_t0 = example["ImageGray_left"][0]   # image at t0 from the left gray camera
left_gray_t1 = example["ImageGray_left"][1]   # image at t1 from the left gray camera
right_gray_t0 = example["ImageGray_right"][0]
right_gray_t1 = example["ImageGray_right"][1]

# Color images
left_color_t0 = example["ImageColor_left"][0]
left_color_t1 = example["ImageColor_left"][1]
right_color_t0 = example["ImageColor_right"][0]
right_color_t1 = example["ImageColor_right"][1]

# Ground truth (training split only)
flow_noc = example["flow_noc"]  # flow map for non-occluded pixels
flow_occ = example["flow_occ"]  # flow map for all pixels

# Ground-truth disparity maps (uncomment if needed)
# disp_noc = example["disp_noc"]
# disp_occ = example["disp_occ"]
# disp_refl_noc = example["disp_refl_noc"]
# disp_refl_occ = example["disp_refl_occ"]

# Calibration (3x4 projection matrices)
P0 = example["calib"]["P0"]  # left grayscale camera
P1 = example["calib"]["P1"]  # right grayscale camera
P2 = example["calib"]["P2"]  # left color camera
P3 = example["calib"]["P3"]  # right color camera

# Show an example
left_gray_t0.show()
flow_noc.show()
print(f"Calibration matrix P0 (left gray camera): {P0}")
```
If you are using Google Colab, update the `datasets` library first to avoid errors:

```bash
pip install -U datasets
```
## Citation
```bibtex
@inproceedings{Geiger2012CVPR,
  author = {Andreas Geiger and Philip Lenz and Raquel Urtasun},
  title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2012}
}
```