---
license: other
license_name: aria-everyday-activities-dataset-license-agreement
license_link: https://www.projectaria.com/datasets/aea/license/
viewer: false
---

Aria Everyday Activities (AEA) Dataset

Figure 1. An overview of the Aria Everyday Activities (AEA) dataset, using exemplar activities recorded in Location 1. In the right column, we highlight a time-synchronized snapshot of two wearers talking to each other in one activity, with the following information shown for each wearer in red and green: (1) their high-frequency 6DoF closed-loop trajectories, (2) observed point cloud, (3) RGB camera view frustum, (4) monochrome scene cameras, (5) eye tracking cameras, (6) their projected eye gaze on all three camera streams, and (7) transcribed speech. On the left side, we also highlight a diverse set of activities (e.g. dining, doing laundry, folding clothes, cooking) with the projected eye gaze (green dot) on the RGB streams. All of the recordings contain closed-loop trajectories (white lines) spatially aligned on the environment point cloud (semi-dense points).

Dataset Summary

The Aria Everyday Activities (AEA) dataset provides sequences collected using Project Aria glasses in a variety of egocentric scenarios, including cooking, exercising, playing games, and spending time with friends. The goal of AEA is to provide researchers with data to tackle the challenges of always-on egocentric vision. AEA contains multiple activity sequences in which 1-2 users wearing Project Aria glasses participate in scenarios to capture time-synchronized data in a shared world location.

Go to projectaria.com/datasets/aea/ to learn more about the dataset, including instructions for downloading the data, installing our dataset-specific tooling, and running example tutorials.
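As a rough, non-authoritative sketch of what opening a downloaded AEA sequence could look like: the package name projectaria-tools, the stream id "214-1", the file path, and the API calls below are assumptions based on the general Project Aria Python tooling, not AEA-specific instructions; follow the official documentation for the supported workflow.

```python
# Hypothetical sketch: install the tooling (e.g. `pip install projectaria-tools`)
# and open one AEA recording. Names and paths are assumptions, not official steps.
from projectaria_tools.core import data_provider
from projectaria_tools.core.stream_id import StreamId

vrs_path = "path/to/recording.vrs"  # an AEA sequence downloaded per the site instructions

provider = data_provider.create_vrs_data_provider(vrs_path)

# RGB camera stream ("214-1" is the RGB stream id on Aria glasses).
rgb_stream_id = StreamId("214-1")
num_frames = provider.get_num_data(rgb_stream_id)
print(f"RGB frames in this recording: {num_frames}")

# Grab the first RGB frame as a numpy array.
image_data = provider.get_image_data_by_index(rgb_stream_id, 0)
frame = image_data[0].to_numpy_array()
print("First frame shape:", frame.shape)
```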

Dataset Contents

In addition to raw sensor data from Project Aria glasses, this dataset also contains annotated speech-to-text data and results from our Machine Perception Services (MPS), which provide additional context for the spatial-temporal reference frames (a minimal loading sketch follows the list below). We provide:

  • Per-frame eye tracking
  • Accurate 3D trajectories of users across multiple everyday activities in the same location, including trajectory and semi-dense point cloud data
  • Timestamps from a shared clock, allowing synchronization of data from multiple concurrent recordings
  • Location information expressed in a shared global coordinate frame for all recordings collected in the same physical location
  • Online calibration information for the cameras and IMUs
  • Speech-to-text annotation
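The MPS outputs are provided as per-recording files (trajectories, semi-dense points, eye gaze, calibration). The sketch below shows one way these might be read with the projectaria_tools MPS readers; the reader functions and file names are assumptions for illustration, and the AEA documentation describes the actual layout.

```python
# Hypothetical sketch of reading MPS outputs; file names and exact reader
# signatures are assumptions, see the AEA documentation for the real layout.
from projectaria_tools.core import mps

mps_dir = "path/to/recording/mps"  # illustrative location of the MPS results

# Closed-loop 6DoF trajectory: one pose per timestamp in the shared world frame.
trajectory = mps.read_closed_loop_trajectory(f"{mps_dir}/closed_loop_trajectory.csv")
first_pose = trajectory[0]
print("Device-to-world transform at t0:\n", first_pose.transform_world_device.to_matrix())

# Semi-dense point cloud of the environment, expressed in the same world frame.
points = mps.read_global_point_cloud(f"{mps_dir}/semidense_points.csv.gz")
print("Number of semi-dense points:", len(points))

# Per-frame eye gaze (yaw/pitch angles), which can be projected into the camera streams.
gaze = mps.read_eyegaze(f"{mps_dir}/eyegaze.csv")
print("First gaze sample yaw/pitch:", gaze[0].yaw, gaze[0].pitch)
```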

The dataset contains:

  • 143 recordings for Everyday Activities
  • Recordings in 5 locations, with 53 sequences of 2 users recording simultaneously
  • Over 1 million images
  • Over 7.5 accumulated hours of recording

Citation Information

If you use AEA, please cite our white paper, which can be found here.

License

The AEA license can be found at https://www.projectaria.com/datasets/aea/license/.

Contributors

@nickcharron