Time limitation
The actual evaluation time must be under 1.5 hours. Any submission that exceeds this time limit will be disqualified.
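If you want to gauge this ahead of time, you can simply time a full local run of your client against the local simulator (see "Testing your model locally" below). A minimal sketch in Python, assuming you use the LTF demo client described later; swap in your own entry point as needed:
# Minimal timing sketch (assumption: the LTF demo client from the local-testing
# section below; replace the command with your own client entry point).
import subprocess, time

start = time.monotonic()
subprocess.run(["pixi", "run", "python", "ltf_e2e.py"], check=True)
elapsed_min = (time.monotonic() - start) / 60.0
print(f"Full local run: {elapsed_min:.1f} min (limit: 90 min)")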
Usage limitation
Submitted algorithms will be executed on a machine with a single T4 GPU (16 GB VRAM), 30 GB of RAM, and 8 virtual CPUs.
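To check locally whether your model stays within the 16 GB GPU budget, a minimal sketch (assuming your model runs with PyTorch; the inference step is a placeholder for your own code):
# Minimal GPU-memory check (assumption: PyTorch; run your own model where indicated).
import torch

device = torch.device("cuda")
total_gb = torch.cuda.get_device_properties(device).total_memory / 1024**3
print(f"Total GPU memory: {total_gb:.1f} GB")  # 16 GB on the evaluation T4

# ... load your model and run a few inference steps here ...

peak_gb = torch.cuda.max_memory_allocated(device) / 1024**3
print(f"Peak memory allocated by this process: {peak_gb:.1f} GB")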
Due to resource limitations, each team is allowed to submit only once within a 24-hour period.
DO NOT attempt to hack the test data or use the GPU resources for any purpose outside the competition! Any such activity detected will result in the team and its members being permanently banned.
Testing your model locally
Please refer to the guidance on the "Dataset" page and make sure your model works well on your own machine before submitting.
This guide walks you through running the hugsim_server locally. If you can complete this setup on your machine, the same approach will work in the competition environment.
1. Clone the Repository
Clone the server code:
git clone https://github.com/hyzhou404/HUGSIM_Local_Server.git
2. Build the Docker Image
Navigate to the project directory and build the Docker image:
docker build . -f ./docker/web_server_dockerfile -t hugsim_server:local
If you are using a proxy, use the following command instead:
docker build --network host . -f ./docker/web_server_dockerfile_mirror -t hugsim_server:local
3. Run the Docker Container
Run the server:
docker run --gpus "device=1" -d -p 7860:7860 -v /path/to/your/downloaded/training_data:/app/app_datas/ -v ./code:/app/code -v ./output:/app/app_datas/env_output --name hugsim_server hugsim_server:local tail -f /dev/null
Open a shell inside the container:
docker exec -u root -it hugsim_server /bin/bash
Launch the server:
pixi run python code/web_server.py
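Before starting a client, you can confirm the server is accepting connections on the mapped port (a minimal sketch; this only checks that port 7860 answers, not the HUGSIM protocol itself):
# Minimal reachability check (assumption: the -p 7860:7860 mapping from the
# docker run command above; only verifies that the port is open).
import socket

with socket.create_connection(("localhost", 7860), timeout=5):
    print("hugsim_server is accepting connections on port 7860")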
4. Launch the Client
We provide an LTF implementation example: LTF Demo
pixi install
pixi run python ltf_e2e.py
You should then see the interaction between the LTF client and the simulator server.
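If you later replace the demo with your own client, a natural starting point is ltf_e2e.py, since it contains the actual request/response format. The sketch below only illustrates the general closed-loop pattern (fetch observation, plan, send action); the endpoint names and payloads are hypothetical, not the real HUGSIM server API:
# HYPOTHETICAL endpoints ("/obs", "/action") used purely to illustrate the
# closed-loop pattern; take the real API from ltf_e2e.py.
import requests

SERVER = "http://localhost:7860"

def my_planner(obs):
    # placeholder: replace with your end-to-end model's inference
    return {"trajectory": []}

done = False
while not done:
    obs = requests.get(f"{SERVER}/obs").json()  # fetch observation (hypothetical endpoint)
    result = requests.post(f"{SERVER}/action", json=my_planner(obs)).json()  # send action (hypothetical endpoint)
    done = result.get("done", False)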
Submission
The model and code, including the Dockerfile, should be uploaded to a single Hugging Face model hub. We recommend setting the hub to private.
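One way to create the repository and upload your files programmatically is the huggingface_hub package (a minimal sketch; the repository id and local folder are placeholders for your own):
# Minimal upload sketch using huggingface_hub (repo id and folder path are
# placeholders; requires huggingface-cli login or an HF token beforehand).
from huggingface_hub import HfApi

api = HfApi()
api.create_repo("your-team/your-hugsim-submission", repo_type="model", private=True, exist_ok=True)
api.upload_folder(
    folder_path="path/to/your/submission",   # model weights, code, Dockerfile
    repo_id="your-team/your-hugsim-submission",
    repo_type="model",
)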
On the "New Submission" page, enter the model hub name and submit. Your program will then be executed online and will interact with our simulator.
If more than two participants submit at the same time, your program will be queued until computational resources become available. If there are many participants, we will consider expanding our computational resources.