Motivation
Given the safety concerns and high costs associated with real-world autonomous driving testing, high-fidelity simulation techniques have become crucial for advancing the capabilities of autonomous systems.
This workshop seeks to answer the following questions:
- How well can we Render? While NVS methods have made significant progress in generating photorealistic urban scenes, their performance still lags at extrapolated viewpoints when only limited viewpoints are available during training. Yet extrapolated viewpoints are essential for closed-loop simulation. Improving the accuracy and consistency of NVS across diverse viewing angles is critical for ensuring that these simulators provide reliable environments for driving evaluation.
- How well can we Drive? Despite challenges in extrapolated viewpoint rendering, existing methods enable photorealistic simulators with reasonable performance when trained on dense views. These NVS-based simulators allow autonomous driving models to be tested in a fully closed-loop manner, bridging the gap between real-world data and interactive evaluation. This shift allows for benchmarking autonomous driving algorithms under realistic conditions, overcoming the limitations of static datasets.
This challenge focuses on the second question. If you are interested in the first one, please refer to the other competition.
Task Description
The challenge focuses on evaluating novel autonomous driving algorithms based on HUGSIM. Our closed-loop simulator provides a challenging variety of photorealistic urban scenarios, including oncoming traffic and cut-in driving behaviors.
Unlike typical challenges that require submissions as static files, this workshop requires models and code as submissions due to the nature of closed-loop simulation. Specifically, the interaction between autonomous driving algorithms and the simulator is unpredictable, and results will only be available once the simulation concludes.
For this reason, both the simulator and the submitted autonomous driving algorithms will run online to ensure closed-loop evaluation and prevent overfitting. Your privacy is our top priority in this competition. If you have any concerns regarding privacy, please refer to the Privacy Assurance section to learn how we safeguard your intellectual property.
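For intuition, a submission typically wraps a trained policy behind a step-wise interface that consumes simulator observations and returns control commands; scores are only known once the rollout finishes. The sketch below is hypothetical: the class and method names (Observation, Action, DrivingAgent, simulator.reset/step, compute_metrics) are our own placeholders, not the official HUGSIM API.

```python
# Hypothetical closed-loop interaction sketch. All names here are
# illustrative placeholders, NOT the official HUGSIM API.
from dataclasses import dataclass

import numpy as np


@dataclass
class Observation:
    images: dict            # camera name -> HxWx3 rendering from the simulator
    ego_state: np.ndarray   # e.g. position, heading, velocity


@dataclass
class Action:
    acceleration: float
    steering: float


class DrivingAgent:
    """A submission wraps a trained policy behind a step-wise interface."""

    def act(self, obs: Observation) -> Action:
        # Placeholder policy: drive straight with zero acceleration.
        # A real submission would run model inference on obs here.
        return Action(acceleration=0.0, steering=0.0)


def run_episode(simulator, agent: DrivingAgent, max_steps: int = 400):
    """Closed-loop rollout: results are only available once the episode ends."""
    obs = simulator.reset()
    for _ in range(max_steps):
        obs, done, info = simulator.step(agent.act(obs))
        if done:
            break
    return simulator.compute_metrics()  # e.g. NC, DAC, TTC, COM, Rc
```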
Primary Metric
The primary metric of this challenge is HD-Score (HUGSIM Driving Score) defined as:
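The formula below is reconstructed from the component descriptions that follow (the notation is illustrative; N is the number of simulation timestamps and s_m^(i) denotes the subscore of item m at timestamp i):

$$
\text{HD-Score} = R_c \cdot \frac{1}{N} \sum_{i=1}^{N} \Bigg( \prod_{m \in \{\text{NC},\, \text{DAC}\}} s_m^{(i)} \Bigg) \cdot \frac{\sum_{w \in \{\text{TTC},\, \text{COM}\}} \lambda_w \, s_w^{(i)}}{\sum_{w \in \{\text{TTC},\, \text{COM}\}} \lambda_w}, \qquad \lambda_{\text{TTC}} = 5,\ \lambda_{\text{COM}} = 2
$$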
where the driving policy items are no collisions (NC) and drivable area compliance (DAC); these subscores are crucial for driving safety. The contributory items are time-to-collision (TTC) and comfort (COM), which do not directly cause failure cases when they are low. The weight for TTC is 5 and the weight for COM is 2. The per-timestamp score is averaged across all simulation timestamps and multiplied by the global route completion score Rc to obtain the final HD-Score.
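As a concrete illustration, the sketch below computes such a score from per-timestamp subscores in [0, 1]. It follows the reconstruction above and is not an official reference implementation.

```python
# Illustrative HD-Score computation following the reconstruction above;
# this is NOT an official reference implementation.
WEIGHTS = {"TTC": 5.0, "COM": 2.0}  # contributory-item weights stated in the text


def hd_score(subscores: list, route_completion: float) -> float:
    """subscores: one dict per simulation timestamp with keys NC, DAC, TTC, COM,
    each in [0, 1]; route_completion is the global Rc in [0, 1]."""
    per_step = []
    for s in subscores:
        policy = s["NC"] * s["DAC"]  # safety items gate the score multiplicatively
        contrib = sum(w * s[k] for k, w in WEIGHTS.items()) / sum(WEIGHTS.values())
        per_step.append(policy * contrib)
    return route_completion * sum(per_step) / len(per_step)


# Example: two timestamps, full route completion -> roughly 0.43.
print(hd_score(
    [{"NC": 1.0, "DAC": 1.0, "TTC": 1.0, "COM": 0.5},
     {"NC": 1.0, "DAC": 0.0, "TTC": 1.0, "COM": 1.0}],
    route_completion=1.0,
))
```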
How to Participate in the Competition
- Click the "Login with Huggingface" button.
- Click the "Register" button and complete the form.
- The "Submission Information" page will be available once you submit the form.
- We will review the submitted forms and grant authorization for submitting results.
Privacy Assurance
Your model checkpoints should be stored in a private Huggingface model hub.
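For example, a checkpoint stored in a private model repository can be pulled at evaluation time with an access token via huggingface_hub; the repository and file names below are placeholders.

```python
# Pull a checkpoint from a private Hugging Face model repo at evaluation time.
# "your-org/your-private-model" and "model.ckpt" are placeholders.
import os

from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="your-org/your-private-model",
    filename="model.ckpt",
    token=os.environ["HF_TOKEN"],  # access token scoped to the private repo
)
print(ckpt_path)  # local cache path of the downloaded checkpoint
```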
The tested algorithms are hosted on a Huggingface instance server, which is destroyed once the evaluation is complete. We do not have authorization to access the server either.
The server's behavior is fully predefined in the Dockerfile, which is provided to all participants to prevent any cheating.
Timeline
- Challenge Release: June 30, 2025
- Challenge Submission Due: Aug 31, 2025
- Release Results & Submit Technical Report: Sep 05, 2025
- Technical Report Due: Sep 20, 2025
Citation
If you find our work useful, please kindly cite us via:
@article{zhou2024hugsim,
  title={HUGSIM: A Real-Time, Photo-Realistic and Closed-Loop Simulator for Autonomous Driving},
  author={Zhou, Hongyu and Lin, Longzhong and Wang, Jiabao and Lu, Yichong and Bai, Dongfeng and Liu, Bingbing and Wang, Yue and Geiger, Andreas and Liao, Yiyi},
  journal={arXiv preprint arXiv:2412.01718},
  year={2024}
}