## Motivation
Given the safety concerns and high costs associated with real-world autonomous driving testing, high-fidelity simulation techniques have become crucial for advancing the capabilities of autonomous systems.
This workshop seeks to answer the following questions:
1. How well can we render? While NVS methods have made significant progress in generating photorealistic urban scenes, their performance still lags at extrapolated viewpoints when only limited viewpoints are available during training. Extrapolated viewpoints, however, are essential for closed-loop simulation. Improving the accuracy and consistency of NVS across diverse viewing angles is critical for ensuring that these simulators provide reliable environments for driving evaluation.
2. How well can we drive? Despite the challenges of extrapolated-viewpoint rendering, existing methods already enable photorealistic simulators with reasonable performance when trained on dense views. These NVS-based simulators allow autonomous driving models to be tested in a fully closed-loop manner, bridging the gap between real-world data and interactive evaluation. This shift enables benchmarking of autonomous driving algorithms under realistic conditions, overcoming the limitations of static datasets.
This challenge focuses on the second question regarding the performance of autonomous driving algorithms in closed-loop evaluation. If you are interested in the first topic, please refer to the [competition of extrapolated novel view synthesis](https://huggingface.co/spaces/XDimLab/ICCV2025-RealADSim-NVS).
## Task Description

The challenge focuses on evaluating autonomous driving algorithms using [HUGSIM](https://github.com/hyzhou404/HUGSIM), a photorealistic, closed-loop driving simulator. HUGSIM provides a diverse set of challenging urban scenarios, including oncoming traffic and cut-in behaviors.
Unlike typical challenges that require submissions as static files, this competition requires participants to submit models and code. This is because the closed-loop evaluation demands real-time interaction between the driving algorithm and the simulator at every timestep. Please note that your submissions are securely protected and will only be used for closed-loop evaluation. For more information on how we handle your data and ensure privacy, please refer to the Privacy section.
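To make the interaction pattern concrete, the sketch below shows what one closed-loop episode might look like from the driving algorithm's side. It assumes a Gymnasium-style `reset`/`step` protocol; the environment, observation fields, and agent class are illustrative placeholders, not HUGSIM's actual API — see the HUGSIM repository for the real interface.

```python
# Illustrative only: a Gymnasium-style closed-loop driving episode.
# StubEnv and MyDrivingAgent are hypothetical placeholders, not HUGSIM's API.

class StubEnv:
    """Stand-in for the simulator: returns dummy observations."""

    def __init__(self, horizon=5):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return {"image": None, "ego_speed": 0.0}, {}

    def step(self, action):
        self.t += 1
        obs = {"image": None, "ego_speed": 5.0}
        terminated = self.t >= self.horizon  # e.g. route completed
        return obs, 0.0, terminated, False, {"timesteps": self.t}


class MyDrivingAgent:
    """Hypothetical wrapper around a submitted driving model."""

    def act(self, obs):
        # A real agent would run model inference on camera frames here;
        # we return a dummy (acceleration, steering) command.
        return (0.0, 0.0)


def run_episode(env, agent, max_steps=400):
    """One closed-loop episode: the agent reacts to every simulator frame."""
    obs, info = env.reset()
    for _ in range(max_steps):
        action = agent.act(obs)  # model inference at this timestep
        obs, _, terminated, truncated, info = env.step(action)
        if terminated or truncated:  # collision, off-road, or route end
            break
    return info  # per-episode info from the simulator


print(run_episode(StubEnv(), MyDrivingAgent()))
```

The key point is that the action at each timestep depends on the observation produced by the previous action, which is why a static file submission cannot be evaluated in this setting.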
## Evaluation Metric
The primary metric of this challenge is the HD-Score (HUGSIM Driving Score). Over an episode of $T$ simulation timestamps, it is defined as:

$$
\text{HD-Score} = R_c \cdot \frac{1}{T} \sum_{t=1}^{T} \text{NC}_t \cdot \text{DAC}_t \cdot \frac{5\,\text{TTC}_t + 2\,\text{COM}_t}{5 + 2}
$$
The HD-Score at a single timestamp consists of two categories of components:
- Driving policy items, including no collisions (NC) and drivable area compliance (DAC), which are critical for ensuring driving safety.
- Contributory items, including time-to-collision (TTC) and comfort (COM), which may not directly lead to failure cases when low, but still contribute to overall driving quality. The TTC and COM terms are weighted by 5 and 2, respectively.
The HD-Score is thus a weighted average across all simulation timestamps, multiplied by the global route completion score $R_c$.
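As a sanity check on the definition above, here is a minimal sketch of the aggregation in Python. It assumes all per-timestep metric values are already available as floats in $[0, 1]$, with NC and DAC acting as multiplicative gates; this is a reconstruction of the formula, not the official scoring code.

```python
def hd_score(nc, dac, ttc, com, route_completion):
    """Reconstructed HD-Score aggregation (not the official scorer).

    nc, dac, ttc, com: per-timestep metric values in [0, 1].
    route_completion: global route completion score R_c in [0, 1].
    """
    assert len(nc) == len(dac) == len(ttc) == len(com)
    per_step = [
        # Policy items (NC, DAC) gate the weighted contributory items.
        n * d * (5 * t + 2 * c) / (5 + 2)
        for n, d, t, c in zip(nc, dac, ttc, com)
    ]
    return route_completion * sum(per_step) / len(per_step)


# Example: a short 3-step episode with one time-to-collision violation.
print(hd_score(
    nc=[1.0, 1.0, 1.0],
    dac=[1.0, 1.0, 1.0],
    ttc=[1.0, 0.0, 1.0],  # TTC violated at step 2
    com=[1.0, 1.0, 1.0],
    route_completion=0.9,
))  # ~0.686
```

Note how a single collision or drivable-area violation zeroes out that timestep entirely, while a poor TTC or comfort value only reduces it.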
## Privacy Assurance
Your model checkpoints should be stored in a private Huggingface model repository.
The tested algorithms are hosted on a dedicated Huggingface server instance, which is destroyed once the evaluation is completed. We do not have access to this server at any point.
To ensure transparency and prevent any form of cheating, the server’s behavior is fully defined in a Dockerfile, which is shared with all participants.
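For illustration, a private checkpoint could be fetched inside the evaluation container with the `huggingface_hub` client, as in the sketch below. The repository ID and token variable are placeholders; how credentials are actually provided is defined by the shared Dockerfile, so consult the Submission Information page for the real procedure.

```python
import os

from huggingface_hub import snapshot_download

# Sketch only: download a private model repository using an access token.
# Repo ID and token variable are placeholders, not the challenge's setup.
local_dir = snapshot_download(
    repo_id="your-username/your-private-model",  # hypothetical private repo
    token=os.environ["HF_TOKEN"],                # read-scoped access token
)
print(f"Checkpoint downloaded to {local_dir}")
```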
## Timeline
- Challenge Release: June 30, 2025
- Challenge Submission Due: August 31, 2025
- Release Results & Submit Technical Report: September 05, 2025
- Technical Report Due: September 20, 2025
## Awards
Winners will be announced at the ICCV2025 Workshop.
- Innovation Award: $9,000
- Outstanding Champion: $9,000
- Honorable Runner-up: $3,000
## How to Participate
To participate in the competition, both automatic registration and manual verification are required:
1. Click the "Login with Huggingface" button.
2. Click the "Register" button and complete the registration form. After this automatic registration step, the "Submission Information" page will become accessible. It provides detailed instructions on how to run local tests and submit your solution.
3. Access to "My Submissions" and "New Submission" will be granted after we manually review your registration and authorize your account. This process is typically completed within 24 hours.
## Citation
If you find our work useful, please cite:
```bibtex
@article{zhou2024hugsim,
  title={HUGSIM: A Real-Time, Photo-Realistic and Closed-Loop Simulator for Autonomous Driving},
  author={Zhou, Hongyu and Lin, Longzhong and Wang, Jiabao and Lu, Yichong and Bai, Dongfeng and Liu, Bingbing and Wang, Yue and Geiger, Andreas and Liao, Yiyi},
  journal={arXiv preprint arXiv:2412.01718},
  year={2024}
}
```