MASIV Multi-Sequence Dataset
Toward Material-Agnostic System Identification from Videos
Yizhou Zhao1, Haoyu Chen1, Chunjiang Liu1, Zhenyang Li2, Charles Herrmann3, Junhwa Hur3, Yinxiao Li3, Ming‑Hsuan Yang4, Bhiksha Raj1, Min Xu1*
1Carnegie Mellon University 2University of Alabama at Birmingham 3Google 4UC Merced
Introduction
The MASIV Multi-Sequence Dataset is a synthetic dataset generated with the Genesis physics simulator to evaluate the generalization of data-driven constitutive models. It contains 10 objects spanning 5 distinct materials (Elastic, Elastoplastic, Liquid, Sand, and Snow), and each object comes with 10 multi-view sequences whose initial location, pose, and velocity are randomized. Each sequence is captured from 11 views, and each view consists of 30 frames.
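The overall size follows directly from these numbers; a quick sanity check:

```python
# Composition stated above: 10 objects x 10 sequences x 11 views x 30 frames.
objects = 10
sequences_per_object = 10
views_per_sequence = 11
frames_per_view = 30

total_sequences = objects * sequences_per_object          # 100 sequence folders
total_videos = total_sequences * views_per_sequence       # 1,100 per-view videos
total_frames = total_videos * frames_per_view             # 33,000 rendered frames
print(total_sequences, total_videos, total_frames)
```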
Dataset Structure
MASIV/
├── 0_0/                        # <ObjectID>_<SequenceID>
│   ├── data/                   # Per-frame images
│   ├── point_clouds/           # Point cloud data for the 30 frames
│   ├── videos/                 # 11 videos, one per view
│   ├── all_data.json           # Camera information
│   ├── metadata.json           # Global simulation and object-specific parameters
│   ├── transforms_test.json    # Camera transformation matrices and image paths for the test set
│   ├── transforms_train.json   # Camera transformation matrices and image paths for the training set
│   └── transforms_val.json     # Camera transformation matrices and image paths for the validation set
├── 0_1/
├── ...
└── 9_9/
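A minimal sketch of iterating the sequence folders and reading one split's camera metadata. The JSON field names used here (`frames`, `file_path`, `transform_matrix`) are an assumption based on the common NeRF-style `transforms_*.json` convention and may differ in the actual release:

```python
import json
from pathlib import Path

def list_sequences(root):
    """Yield (object_id, sequence_id, path) for each <ObjectID>_<SequenceID> folder."""
    for seq_dir in sorted(Path(root).glob("*_*")):
        if seq_dir.is_dir():
            obj_id, seq_id = seq_dir.name.split("_", 1)
            yield int(obj_id), int(seq_id), seq_dir

def load_camera_frames(seq_dir, split="train"):
    """Return (image_path, camera_pose) pairs for one split of one sequence.

    Assumes NeRF-style keys: "frames", "file_path", "transform_matrix".
    """
    data = json.loads((Path(seq_dir) / f"transforms_{split}.json").read_text())
    return [(f["file_path"], f["transform_matrix"]) for f in data["frames"]]

# Example usage (once the dataset is downloaded):
#   for obj_id, seq_id, seq_dir in list_sequences("MASIV"):
#       frames = load_camera_frames(seq_dir, split="train")
```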
Citing MASIV
If you find this dataset useful in your work, please consider citing our paper:
@article{zhao2025masiv,
  title={MASIV: Toward Material-Agnostic System Identification from Videos},
  author={Zhao, Yizhou and Chen, Haoyu and Liu, Chunjiang and Li, Zhenyang and Herrmann, Charles and Hur, Junhwa and Li, Yinxiao and Yang, Ming-Hsuan and Raj, Bhiksha and Xu, Min},
  journal={arXiv preprint arXiv:2508.01112},
  year={2025}
}