This folder contains the reconstructions used in the reconstruction benchmark in the paper.
For each sequence, we provide:
- The input data
  - **Images**: hardware-synchronised camera images from the three Alphasense cameras
  - **Vilens_slam**: motion-undistorted lidar point clouds from VILENS-SLAM, with timestamps synchronised to the camera images
  - **T_gt_lidar.txt**: the global transform between the lidar map and the ground-truth map. This allows one to compare a reconstruction with the ground truth in a single coordinate system (see the sketch after this list).
  - Note 1: the raw point cloud is 10 Hz and the raw camera stream is 20 Hz. The point clouds provided here come from the pose-graph SLAM, where a node is spawned every 1 metre travelled, so the camera images and lidar clouds provided have an effective rate of about 1 Hz.
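A minimal sketch of putting a provided cloud into the ground-truth frame, assuming Open3D, that **T_gt_lidar.txt** stores a homogeneous 4×4 matrix in plain text, and a hypothetical per-node filename under `Vilens_slam/`:

```python
import numpy as np
import open3d as o3d

# Assumption: T_gt_lidar.txt stores a single homogeneous 4x4 matrix.
T_gt_lidar = np.loadtxt("T_gt_lidar.txt").reshape(4, 4)

# Hypothetical filename -- the per-node clouds live under Vilens_slam/.
cloud = o3d.io.read_point_cloud("Vilens_slam/cloud_000001.pcd")

# Express the lidar-frame cloud in the ground-truth frame so it can be
# compared against the ground-truth map directly.
cloud.transform(T_gt_lidar)
```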
- The reconstructions
  - **lidar_cloud_merged_error.pcd**: the merged lidar point cloud
  - **nerfacto_cloud_metric_gt_frame_error.pcd**: the point cloud exported from nerfacto
  - **openmvs_dense_cloud_gt_frame_error.pcd**: the dense MVS point cloud from OpenMVS
  - Note 1: all reconstructions are coloured by their point-to-point distance to the ground truth, i.e. the reconstruction error.
  - Note 2: all reconstructions are filtered by the ground truth’s occupancy map **gt_cloud.bt** to avoid penalising points in unknown space; this is described in [SiLVR](https://arxiv.org/abs/2502.02657v1), Section V.C.2. A sketch of the error colouring is given below.
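For reference, a minimal sketch of how this error colouring can be reproduced with Open3D's point-to-point distance, assuming a ground-truth point cloud file (`gt_cloud.pcd` is a hypothetical name; this folder only ships the occupancy map **gt_cloud.bt**) and an assumed 1 m colour scale:

```python
import numpy as np
import open3d as o3d

# Hypothetical ground-truth cloud filename; substitute your own.
gt_cloud = o3d.io.read_point_cloud("gt_cloud.pcd")
recon = o3d.io.read_point_cloud("openmvs_dense_cloud_gt_frame_error.pcd")

# Distance from each reconstructed point to its nearest ground-truth
# neighbour, i.e. the point-to-point reconstruction error.
errors = np.asarray(recon.compute_point_cloud_distance(gt_cloud))

# Simple blue (low error) to red (high error) ramp, clipped at an
# assumed 1 m scale so outliers do not wash out the colour range.
e = np.clip(errors, 0.0, 1.0)
recon.colors = o3d.utility.Vector3dVector(
    np.stack([e, np.zeros_like(e), 1.0 - e], axis=1)
)
o3d.visualization.draw_geometries([recon])
```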