
🚀 News Update (October 22, 2025): Many More Scenes and Easier to Use!

In this update, we have:

  • Increased the number of NuRec scenes to 924!
  • Added a labels.json file to help users search for scenes by: behavior, layout, lighting, road types, surface conditions, traffic density, VRU presence, and weather. (Note: this is only available for files under Batch0002 and onwards.)
  • Added a front camera video file for each clip so that users can preview the scene before opening the USDZ. (Note: this is only available for files under Batch0002 and onwards.)

Find the 900+ scenes in the sample_set/25.07_release folder.

Dataset Description:

A neural reconstruction dataset containing 3D reconstructed driving scenes. Each scene is about 20 seconds long and stored as a USDZ file, along with its respective XODR map file and surface mesh. The reconstructions were generated using 6 camera views (front-wide 120 deg, front-tele 30 deg, cross right/left 120 deg, and rear right/left 70 deg). Users can use these 3D reconstructed driving scenes for training and testing their autonomous vehicle (AV) systems. This dataset is ready for commercial/non-commercial AV-only use.

Dataset Owner(s):

NVIDIA Corporation

Dataset Creation Date:

06/09/2025

License/Terms of Use:

NVIDIA Autonomous Vehicle Dataset License Agreement

Intended Usage:

This dataset is intended to give AV developers hands-on experience with the NuRec capability and a way to try out 3DGUT. Users can use this dataset to run a set of tests and experiments on an AV system and to train AI models that use camera and reconstructed data. The scenes in this dataset are generated by, and can be rendered with, NVIDIA NuRec. CARLA users can also utilize this dataset by leveraging the NVIDIA NuRec integration in CARLA.

Dataset Characterization

Data Collection Method

  • [Automatic/Sensors] - [Machine-derived]

Labeling Method

  • [Automatic/Sensors] - [Machine-derived]

Dataset Format

The scenes are stored in batches, each containing a number of clip folders named by their UUID. Each UUID folder contains the following (a sketch for browsing these folders on the Hub follows the list):

  • usdz file (always)
  • labels.json file (in most cases)
  • camera_front_wide_120fov.mp4 (in most cases)
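
If you want to see which clip folders exist before downloading anything, you can enumerate the repository's file paths with huggingface_hub. A minimal sketch, assuming the sample_set/25.07_release/Batch0002 layout used in the download examples further below:

from huggingface_hub import list_repo_files

# Enumerate every file path in the dataset repository (this can take a moment for a large repo).
files = list_repo_files(repo_id="nvidia/PhysicalAI-Autonomous-Vehicles-NuRec", repo_type="dataset")

# Collect the UUID clip folders under one batch: sample_set/25.07_release/Batch0002/<UUID>/<file>
clip_folders = set()
for path in files:
    parts = path.split("/")
    if len(parts) >= 5 and parts[:3] == ["sample_set", "25.07_release", "Batch0002"]:
        clip_folders.add(parts[3])

print(f"Batch0002 contains {len(clip_folders)} clip folders")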

Each reconstructed scene is stored as a USDZ file containing the following (a sketch for inspecting the archive follows the list):

  • checkpoint.ckpt: Trained neural network weights
  • data_info.json: Timestamp and frame range details per sensor
  • datasource_summary.json: Summary of sensor tracks and poses
  • default.usda: Main scene file referencing all assets and configurations
  • dome_light.usda: Describes dome lighting for scene illumination
  • map.xodr: OpenDRIVE map file
  • mesh.ply: Polygon mesh file for 3D geometry
  • mesh.usd: USD file for the 3D mesh
  • mesh_ground.ply: Polygon mesh file for ground surface geometry
  • mesh_ground.usd: USD file for the ground mesh
  • metadata.yaml: YAML file with scene metadata
  • parsed_config.yaml: YAML configuration file
  • rig_trajectories.json: JSON file containing sensor rig trajectory data
  • rig_trajectories.usda: Rig trajectories in the USD scene
  • sequence_tracks.json: JSON file with object tracking information
  • sequence_tracks.usda: Object sequence tracks in the USD scene
  • volume.nurec: Volumetric data file for neural reconstruction
  • volume.usda: USD ASCII file describing volumetric data in the scene
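
Since USDZ is a zip-based package format, the contents of a downloaded scene can be inspected with Python's standard zipfile module. A minimal sketch; the path is a placeholder for an actual clip:

import zipfile

# Placeholder path: substitute a real batch and UUID from the dataset.
usdz_path = "sample_set/25.07_release/Batch0002/<UUID>/<UUID>.usdz"

# USDZ packages are zip archives, so the standard library can list their members.
with zipfile.ZipFile(usdz_path) as archive:
    for info in archive.infolist():
        print(f"{info.filename} ({info.file_size} bytes)")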

The labels.json file contains the following fields, where each field except VRUs can have multiple values (a small parsing sketch follows the list):

  • Behavior types: {driving_straight, stop, left_lane_change, right_lane_change, right_turn, left_turn, unspecified, reverse}
  • Layout types: {straight_road, intersection, underpass, unspecified, bridge, construction_zone, parking_lot, pedestrian_crossing, ramp, roundabout, railway_crossing}
  • Road types: {residential, highways, urban, unspecified, rural, other}
  • Weather types: {clear/cloudy, unspecified, rain, fog}
  • Surface conditions: {dry, unspecified, wet}
  • Lighting types: {daytime, unspecified, nighttime}
  • VRUs: {True, False}
  • Traffic density: {low, medium, high, unspecified}
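
As an illustration, here is a minimal sketch of checking one clip's labels.json against a single criterion. The commented example content is hypothetical; the field names follow the categories used by the filtering script further below, with vrus stored as a boolean:

import json

# Hypothetical labels.json content (field names as used by the filtering script below):
# {
#   "behavior": ["driving_straight", "right_turn"],
#   "layout": ["intersection", "pedestrian_crossing"],
#   "road_types": ["urban"],
#   "weather": ["clear/cloudy"],
#   "surface_conditions": ["dry"],
#   "lighting": ["daytime"],
#   "vrus": true,
#   "traffic_density": ["medium"]
# }

def clip_matches(labels_path, category, value):
    """Return True if the clip's labels contain `value` under `category`."""
    with open(labels_path, "r", encoding="utf-8") as f:
        labels = json.load(f)
    if category not in labels:
        return False
    if category == "vrus":               # stored as a single boolean
        return labels["vrus"] == value
    return value in labels[category]     # other fields are lists of strings

# Example: does this clip contain a right turn?
# print(clip_matches("Batch0002/<UUID>/labels.json", "behavior", "right_turn"))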

Dataset Quantification

Record Count: 900+ usdz files (more coming soon)

Measurement of Total Data Storage: approximately 4.5 TB

Downloading Data

Please see https://huggingface.co/docs/huggingface_hub/v1.0.0.rc5/en/guides/download for complete documentation on how to download dataset files. The code below is only an example.

import os

from huggingface_hub import login, snapshot_download


def main():
    # Authenticate with a Hugging Face access token stored in the HF_TOKEN environment variable
    hf_api_token = os.getenv("HF_TOKEN")
    login(token=hf_api_token)

    # Download the entire repository
    snapshot_download(repo_id="nvidia/PhysicalAI-Autonomous-Vehicles-NuRec", repo_type="dataset")

    # Download all the files in a folder
    snapshot_download(repo_id="nvidia/PhysicalAI-Autonomous-Vehicles-NuRec", repo_type="dataset", allow_patterns="sample_set/25.07_release/Batch0002/001b28cb-b8f7-4627-ae65-fda88612d5bf/*")

    # Download an individual file
    snapshot_download(repo_id="nvidia/PhysicalAI-Autonomous-Vehicles-NuRec", repo_type="dataset", allow_patterns="sample_set/25.07_release/Batch0002/001b28cb-b8f7-4627-ae65-fda88612d5bf/001b28cb-b8f7-4627-ae65-fda88612d5bf.usdz")


if __name__ == "__main__":
    main()
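
Note that snapshot_download stores files in the local Hugging Face cache by default; pass local_dir="..." if you want the files written to a specific folder, as the filtering script below does.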

Downloading USDZ files based on categories in labels.json

import argparse
from pathlib import Path
import json
import os

from huggingface_hub import login, snapshot_download


def string_to_boolean(s):
    s = s.strip().lower()  # Normalize the string
    if s in ('true', '1', 'yes', 'on'):
        return True

    return False

def main():
    valid_categories = ["behavior", "layout", "lighting", "road_types", "surface_conditions", "traffic_density", "vrus", "weather"]

    parser = argparse.ArgumentParser(
        description="Downloads usdz clips based upon criteria specified in the labels.json"
    )
    parser.add_argument(
        "--local-dir", type=str, required=True, help="The path to store the usdz"
    )
    parser.add_argument(
        "--category",
        type=str,
        required=True,
        choices=valid_categories,
        help="The specified category in the labels.json. Must be one of: %(choices)",
    )
    parser.add_argument(
        "--value",
        type=str,
        required=True,
        help="The specified value in the category",
    )

    args = parser.parse_args()

    hf_api_token = os.getenv("HF_TOKEN")

    login(token=hf_api_token)

    # First download all the labels.json files
    print(f"Downloading dataset labels.json to {args.local_dir}.")

    snapshot_download(repo_id="nvidia/PhysicalAI-Autonomous-Vehicles-NuRec", repo_type="dataset", allow_patterns="*.json", local_dir=args.local_dir)

    category = args.category
    value = args.value

    # Find all of the labels.json files that have been downloaded
    local_dir = Path(args.local_dir)
    label_paths = local_dir.rglob("labels.json")

    # Filter through the labels.json and find all usdz that match our criteria
    paths_to_download = {}

    print(f"Filtering usdz downloads based upon labels.json downloaded with criteria {category} and {value}.")

    for label_path in label_paths:
        with open(label_path, "r", encoding="utf-8") as f:
            metadata = json.load(f)

            if category in metadata:
                if category == "vrus":
                    if string_to_boolean(value) == metadata["vrus"]:
                        paths_to_download[label_path.parent] = True
                else:
                    if value in metadata[category]:
                        paths_to_download[label_path.parent] = True

    print(f"Found {len(paths_to_download)} that matched criteria.")

    # Download the selected usdz and front camera mp4
    for path in paths_to_download.keys():
        relative_path = path.relative_to(local_dir)

        print(f"Downloading usdz and front camera at path {relative_path}")
        snapshot_download(repo_id="nvidia/PhysicalAI-Autonomous-Vehicles-NuRec", repo_type="dataset", allow_patterns=f"{relative_path.as_posix()}/*", local_dir=args.local_dir)


if __name__ == "__main__":
    main()
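
For example, assuming the script above is saved as download_by_label.py (a hypothetical name), downloading all rainy clips would look like:

python download_by_label.py --local-dir ./nurec_scenes --category weather --value rain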

Reference(s):

@article{wu20253dgut,
  title   = {3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting},
  author  = {Wu, Qi and Martinez Esturo, Janick and Mirzaei, Ashkan and Moenne-Loccoz, Nicolas and Gojcic, Zan},
  journal = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2025}
}

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal teams to ensure this dataset meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.
