Motivation
Given the safety concerns and high costs associated with real-world autonomous driving testing, high-fidelity simulation techniques have become crucial for advancing the capabilities of autonomous systems.
This workshop seeks to answer the following questions:
- How well can we Render? While NVS methods have made significant progress in generating photorealistic urban scenes, their performance still lags at extrapolated viewpoints when only limited viewpoints are available during training. However, extrapolated viewpoints are essential for closed-loop simulation. Improving the accuracy and consistency of NVS across diverse viewing angles is critical to ensuring that these simulators provide reliable environments for driving evaluation.
- How well can we Drive? Despite challenges in extrapolated viewpoint rendering, existing methods enable photorealistic simulators with reasonable performance when trained on dense views. These NVS-based simulators allow autonomous driving models to be tested in a fully closed-loop manner, bridging the gap between real-world data and interactive evaluation. This shift allows for benchmarking autonomous driving algorithms under realistic conditions, overcoming the limitations of static datasets.
This challenge focuses on the second question. If you are interested in the first one, please refer to the other competition.
Task Description
The challenge focuses on evaluating novel autonomous driving algorithms in HUGSIM. Our closed-loop simulator provides a challenging variety of photo-realistic urban scenarios, including oncoming traffic and cut-in driving behaviors.
Both the simulator and the submitted autonomous driving algorithms execute online to ensure a truly closed-loop evaluation; for this reason, models and their running environments are expected to be submitted. If you have concerns about privacy, please refer to the Privacy section to see how we protect your data.
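For concreteness, the sketch below shows what such an online, agent-in-the-loop evaluation typically looks like, written against the widely used gymnasium interface. The environment id `hugsim/ClosedLoop-v0`, the `MyDrivingAgent` policy, and the action format are hypothetical placeholders rather than the challenge's actual submission API; consult the official starter kit for the real interface.

```python
# Illustrative sketch of a closed-loop evaluation loop (gymnasium-style).
# NOTE: the environment id, observation format, and MyDrivingAgent are
# hypothetical placeholders -- see the official HUGSIM submission kit
# for the actual interface.
import gymnasium as gym


class MyDrivingAgent:
    """Placeholder policy: maps sensor observations to a driving command."""

    def act(self, observation):
        # e.g. [acceleration, steering]; a real submission runs its model here
        return [0.0, 0.0]


env = gym.make("hugsim/ClosedLoop-v0")  # hypothetical environment id
agent = MyDrivingAgent()

obs, info = env.reset()
done = False
while not done:
    action = agent.act(obs)  # the submitted model is queried online each step
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated  # episode ends on collision, timeout, or goal
env.close()
```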
Privacy
Your model checkpoints are expected to be stored on the Hugging Face model hub. We recommend setting the repository to private.
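As a minimal sketch, a checkpoint can be pushed to a private repository with the official `huggingface_hub` client; the repository id and file paths below are placeholders for your own.

```python
# Upload a model checkpoint to a private Hugging Face model repo.
# The repo id and local checkpoint path are placeholders -- substitute your own.
from huggingface_hub import HfApi

api = HfApi()  # uses the token from `huggingface-cli login` by default
api.create_repo(repo_id="your-username/your-model", private=True, exist_ok=True)
api.upload_file(
    path_or_fileobj="checkpoints/model.ckpt",
    path_in_repo="model.ckpt",
    repo_id="your-username/your-model",
)
```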
The tested algorithms run on Hugging Face instance servers, which will be destroyed once the evaluation is finished. Nor do we have authorization to access these servers. The server's behavior is pre-defined in the Dockerfile, which is available to participants.
Timeline
- Challenge Release: June 30, 2025
- Challenge Submission Due: Aug 31, 2025
- Results Release & Technical Report Submission Opens: Sep 05, 2025
- Technical Report Due: Sep 20, 2025
Citation
If you find our work useful, please cite us via:
@article{zhou2024hugsim,
  title={HUGSIM: A Real-Time, Photo-Realistic and Closed-Loop Simulator for Autonomous Driving},
  author={Zhou, Hongyu and Lin, Longzhong and Wang, Jiabao and Lu, Yichong and Bai, Dongfeng and Liu, Bingbing and Wang, Yue and Geiger, Andreas and Liao, Yiyi},
  journal={arXiv preprint arXiv:2412.01718},
  year={2024}
}