|
--- |
|
license: cc-by-nc-sa-3.0 |
|
--- |
|
|
|
# Dataset Card for KITTI Stereo 2012 |
|
|
|
## Dataset Description |
|
|
|
The **KITTI Stereo 2012** dataset is a widely used benchmark dataset for evaluating stereo vision, optical flow, and scene flow algorithms in autonomous driving scenarios. It was introduced in the paper ["Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite"](http://www.cvlibs.net/publications/Geiger2012CVPR.pdf) by Geiger et al. |
|
|
|
Stereo matching refers to the process of estimating depth from two images captured from slightly different viewpoints—typically a left and a right camera. The disparity (or pixel displacement) between corresponding points in the two images is inversely proportional to the depth of the observed objects. Accurate stereo matching is essential for depth perception in self-driving vehicles, robot navigation, and 3D scene reconstruction. |
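
As a concrete illustration of that relationship, metric depth can be recovered from disparity as Z = f · B / d, where f is the focal length in pixels and B the stereo baseline. The snippet below is a minimal sketch of this conversion with NumPy; the focal length, baseline, and disparity values are hypothetical placeholders rather than values taken from the dataset.

```python
import numpy as np

# Hypothetical rectified-camera parameters (placeholders, not dataset values)
focal_px = 721.5    # focal length in pixels
baseline_m = 0.54   # distance between the left and right cameras in meters

# A toy disparity map (in pixels); 0 marks pixels with no valid match
disparity = np.array([[50.0, 25.0],
                      [0.0, 10.0]])

# Depth is inversely proportional to disparity: Z = f * B / d
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
print(depth_m)  # larger disparity -> smaller depth
```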
|
|
|
KITTI Stereo 2012 contributes to stereo matching research by providing: |
|
- Real-world image pairs captured in urban and rural environments. |
|
- High-resolution stereo image pairs at two consecutive timepoints (t0 and t1). |
|
- Semi-dense disparity ground truth maps for the t0 reference frames.
|
- Calibration files containing the projection matrices for geometric reconstruction. |
|
|
|
The dataset enables fair comparison of stereo algorithms and serves as a benchmark for performance evaluation on non-occluded and occluded regions using disparity error metrics. |
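
For orientation, the benchmark's headline metric is the fraction of annotated pixels whose estimated disparity differs from the ground truth by more than a fixed threshold (commonly 3 px), reported separately for non-occluded (`disp_noc`) and all annotated (`disp_occ`) pixels. The sketch below shows one way to compute such a bad-pixel rate; it assumes disparities are float arrays in which 0 marks pixels without ground truth, and it is not the official devkit implementation.

```python
import numpy as np

def bad_pixel_rate(disp_est, disp_gt, tau=3.0):
    """Fraction of valid ground-truth pixels with absolute disparity error > tau pixels."""
    valid = disp_gt > 0  # 0 marks pixels without ground truth
    err = np.abs(disp_est[valid] - disp_gt[valid])
    return float((err > tau).mean())

# Toy example with hypothetical disparity maps
disp_gt = np.array([[30.0, 0.0], [12.0, 45.0]])
disp_est = np.array([[29.0, 5.0], [20.0, 44.5]])
print(f"bad-pixel rate: {bad_pixel_rate(disp_est, disp_gt):.2%}")
```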
|
 |
|
|
|
## Dataset Source |
|
- **Homepage**: [http://www.cvlibs.net/datasets/kitti/eval_stereo_flow.php?benchmark=stereo](http://www.cvlibs.net/datasets/kitti/eval_stereo_flow.php?benchmark=stereo) |
|
- **License**: [Creative Commons Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0)](https://creativecommons.org/licenses/by-nc-sa/3.0/) |
|
- **Paper**: Andreas Geiger, Philip Lenz, and Raquel Urtasun. _Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite_. CVPR 2012. |
|
|
|
## Dataset Structure |
|
|
|
The dataset is organized into the following folders, each representing a specific modality or annotation: |
|
|
|
| Folder | Description | |
|
|--------|-------------| |
|
| `image_0/` | Grayscale images from the **left camera** at two timepoints. `<id>_10.png` is the reference frame (t0), `<id>_11.png` is the subsequent frame (t1). | |
|
| `image_1/` | Grayscale images from the **right camera**, same timestamps as `image_0/`. Used for stereo correspondence. | |
|
| `colored_0/` | Color images from the **left camera** at t0 and t1. Same format as grayscale images. | |
|
| `colored_1/` | Color images from the **right camera**. | |
|
| `disp_noc/` | Ground truth disparity maps at t0 for **non-occluded** pixels. | |
|
| `disp_occ/` | Disparity maps at t0 including **all pixels**, even those occluded in the right image. | |
|
| `disp_refl_noc/` | Disparity maps for **reflective surfaces**, non-occluded only. | |
|
| `disp_refl_occ/` | Disparity maps for **reflective surfaces**, including occluded regions. | |
|
| `flow_noc/` | Sparse optical flow maps between t0 and t1 for non-occluded areas. | |
|
| `flow_occ/` | Sparse optical flow maps including occluded areas. | |
|
| `calib/` | Calibration files for each sample, including projection matrices `P0` (left gray camera), `P1` (right gray camera), `P2` (left color camera), and `P3` (right color camera). |
|
|
|
### Notes on Filenames
|
- `<id>_10.png` = timepoint **t0** (reference frame) |
|
- `<id>_11.png` = timepoint **t1** (subsequent frame) |
|
- `<id>.txt` in `calib/` contains the camera projection matrices (3×4) used for stereo reconstruction (see the decoding sketch after these notes).

- The testing split does not include ground-truth disparity or flow maps; ground truth is provided for the training split only.
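
If you work with the raw files directly rather than through the loader shown below, the disparity ground truth is stored as 16-bit PNGs in which the true disparity equals the pixel value divided by 256 (a value of 0 marks pixels without ground truth), and each `calib/<id>.txt` lists one projection matrix per line as 12 space-separated numbers. The following is a minimal decoding sketch under those assumptions; the file paths are placeholders.

```python
import numpy as np
from PIL import Image

def read_disparity(png_path):
    """Decode a KITTI disparity PNG: uint16, true disparity = value / 256, 0 = invalid."""
    raw = np.array(Image.open(png_path), dtype=np.uint16)
    disp = raw.astype(np.float32) / 256.0
    disp[raw == 0] = -1.0  # mark pixels without ground truth
    return disp

def read_calib(txt_path):
    """Parse a calib file into a dict of 3x4 projection matrices (P0..P3)."""
    matrices = {}
    with open(txt_path) as f:
        for line in f:
            if ":" not in line:
                continue
            name, values = line.split(":", 1)
            matrices[name.strip()] = np.array(values.split(), dtype=np.float32).reshape(3, 4)
    return matrices

# Hypothetical paths (adjust to your local copy of the dataset)
# disp = read_disparity("training/disp_noc/000000_10.png")
# P = read_calib("training/calib/000000.txt")
```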
|
|
|
## Example Usage |
|
|
|
```python |
|
from datasets import load_dataset |
|
|
|
# Load the training split of the dataset
|
dataset = load_dataset("randall-lab/kitti-stereo2012", split="train", trust_remote_code=True) |
|
# dataset = load_dataset("randall-lab/kitti-stereo2012", split="test", trust_remote_code=True)  # testing split (no ground truth)
|
|
|
example = dataset[0] |
|
# Gray Image |
|
left_grayimage_t0 = example["ImageGray_t0"][0] # image_0/<id>_10.png |
|
right_grayimage_t0 = example["ImageGray_t0"][1] # image_1/<id>_10.png |
|
left_grayimage_t1 = example["ImageGray_t1"][0] # image_0/<id>_11.png |
|
right_grayimage_t1 = example["ImageGray_t1"][1] # image_1/<id>_11.png |
|
# Color Image |
|
left_colorimage_t0 = example["ImageColor_t0"][0] |
|
right_colorimage_t0 = example["ImageColor_t0"][1] |
|
left_colorimage_t1 = example["ImageColor_t1"][0] |
|
right_colorimage_t1 = example["ImageColor_t1"][1] |
|
|
|
# GT |
|
disp_noc = example["disp_noc"]            # GT disparity, non-occluded pixels (training split only; None for test)

disp_occ = example["disp_occ"]            # GT disparity, all annotated pixels (training split only; None for test)

disp_refl_noc = example["disp_refl_noc"]  # GT disparity on reflective surfaces, non-occluded (training split only)

disp_refl_occ = example["disp_refl_occ"]  # GT disparity on reflective surfaces, incl. occluded (training split only)
|
|
|
# calib |
|
P0_matrix = example["calib"]["P0"] # Left gray camera |
|
P1_matrix = example["calib"]["P1"] # Right gray camera |
|
P2_matrix = example["calib"]["P2"] # Left color camera |
|
P3_matrix = example["calib"]["P3"] # Right color camera
|
|
|
# Optional: optical flow ground truth (uncomment if needed)

# flow_noc = example["flow_noc"]  # GT flow, non-occluded (training split only; None for test)

# flow_occ = example["flow_occ"]  # GT flow, incl. occluded (training split only; None for test)
|
|
|
print("Grayscale image from the left camera at t0: ") |
|
left_grayimage_t0.show()
|
print("Ground truth disparity map: ") |
|
disp_noc.show() |
|
print(f"Calib for left gray camera: {P0_matrix}") |
|
``` |
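
Building on the loaded example, the sketch below converts the ground-truth disparity image into metric depth using the left and right projection matrices (Z = f · B / d, with the baseline recovered from the horizontal offset between `P0` and `P1`). It is a minimal sketch that assumes the training split, the standard KITTI value/256 PNG encoding for disparity, and calib entries that can be reshaped into 3×4 arrays; adjust the indexing if the loader returns a different layout.

```python
import numpy as np

P0 = np.array(example["calib"]["P0"], dtype=np.float64).reshape(3, 4)
P1 = np.array(example["calib"]["P1"], dtype=np.float64).reshape(3, 4)

focal_px = P0[0, 0]                             # focal length in pixels
baseline_m = (P0[0, 3] - P1[0, 3]) / focal_px   # baseline from the rectified projection matrices

raw = np.array(disp_noc, dtype=np.float32)      # PIL image -> array of encoded disparities
disparity = raw / 256.0                         # KITTI encoding: true disparity = value / 256
valid = raw > 0                                 # 0 marks pixels without ground truth

depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
print("median ground-truth depth (m):", np.median(depth_m[valid]))
```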
|
If you are using Google Colab, update the `datasets` library first to avoid loading errors:
|
```bash
|
pip install -U datasets |
|
``` |
|
### Citation |
|
```bibtex
|
@inproceedings{Geiger2012CVPR, |
|
author = {Andreas Geiger and Philip Lenz and Raquel Urtasun}, |
|
title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite}, |
|
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)}, |
|
year = {2012} |
|
} |
|
``` |