
# Testing your model locally

This guide will help you set up the husim_server locally. If you can successfully complete this setup on your machine, the same method will be applicable in the competition environment. Please ensure that your model works properly on your local machine before submitting.

## 1. Clone the Server Repository

To clone the server code, use the following command:

```bash
git clone https://github.com/hyzhou404/HUGSIM_Local_Server.git
```

## 2. Build the Docker Image

Navigate to the project directory and build the Docker image with this command (the first build takes about an hour because of the large files involved):

```bash
docker build . -f ./docker/web_server_dockerfile -t hugsim_server:local
```

If you are using a proxy, run this command instead (assuming your proxy listens on port 7890):

```bash
docker build --network host . -f ./docker/web_server_dockerfile_mirror -t hugsim_server:local
```

## 3. Run the Docker Container

To start the server, execute the following command (replace "/path/to/your/downloaded/training_data" with the directory containing your training data):

```bash
docker run --gpus "device=1" -d -p 7860:7860 -v /path/to/your/downloaded/training_data:/app/app_datas/ -v ./code:/app/code -v ./output:/app/app_datas/env_output --name hugsim_server hugsim_server:local tail -f /dev/null
```
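If you script your local testing, a small helper can assemble this `docker run` command while substituting your own data directory. This is an illustrative sketch only (the helper name is ours, not part of HUGSIM_Local_Server); it just reproduces the flags shown above:

```python
from pathlib import Path


def build_run_command(training_data: str, gpu: str = "device=1") -> str:
    """Assemble the `docker run` command from the guide, substituting
    a local training-data directory. Illustrative helper only."""
    data = Path(training_data).expanduser()
    mounts = [
        f"-v {data}:/app/app_datas/",              # training data
        "-v ./code:/app/code",                     # server code
        "-v ./output:/app/app_datas/env_output",   # simulator output
    ]
    return (
        f'docker run --gpus "{gpu}" -d -p 7860:7860 '
        + " ".join(mounts)
        + " --name hugsim_server hugsim_server:local tail -f /dev/null"
    )


print(build_run_command("/data/hugsim_training"))
```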

To access the Docker container, use:

```bash
docker exec -u root -it hugsim_server /bin/bash
```

To launch the server, run:

```bash
pixi run python code/web_server.py
```

Note: if you encounter the error "OSError: Could not find compatible tinycudann extension for compute capability 86.", edit the variables `TCNN_CUDA_ARCHITECTURES` and `TORCH_CUDA_ARCH_LIST` in "web_server_dockerfile" or "web_server_dockerfile_mirror" (depending on which one you built from) to match the compute capability of your GPU (e.g., set them to 86 for the error above), then rebuild the Docker image.
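To find the right value, you can query your GPU's compute capability (e.g., with `torch.cuda.get_device_capability()` or `nvidia-smi --query-gpu=compute_cap --format=csv`) and format it the way each variable expects. A minimal sketch of that formatting, using only the standard library (the function names here are ours, not part of the repo):

```python
def tcnn_arch_value(capability: tuple) -> str:
    """Format a CUDA compute capability tuple, e.g. (8, 6) for an
    RTX 30-series GPU, as the integer string used by
    TCNN_CUDA_ARCHITECTURES (e.g. "86")."""
    major, minor = capability
    return f"{major}{minor}"


def torch_arch_list_value(capability: tuple) -> str:
    """Format the same capability for TORCH_CUDA_ARCH_LIST, which
    uses dotted notation (e.g. "8.6")."""
    major, minor = capability
    return f"{major}.{minor}"


print(tcnn_arch_value((8, 6)))        # "86"
print(torch_arch_list_value((8, 6)))  # "8.6"
```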

## 4. Launch the Client

We provide an example implementation of LTF: LTF Demo.

To install and run the client, download the example implementation above and use the following commands:

```bash
pixi install
pixi run python ltf_e2e.py
```

You will then see the interaction between the LTF client and the simulator server.
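If the client fails to connect, a quick TCP reachability check can confirm whether the server is actually listening on the published port. This is a minimal standard-library sketch; it only verifies that the socket accepts connections (port 7860 as published by the `docker run` command above), not that the simulator API is working:

```python
import socket


def server_reachable(host: str = "127.0.0.1", port: int = 7860,
                     timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    if server_reachable():
        print("hugsim_server is reachable on port 7860")
    else:
        print("No server on port 7860 -- is the container running?")
```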

## Submission

Please upload your model and code, including the Docker file, to a single Hugging Face model hub. We recommend setting the hub to private.

On the "New Submission" page, enter the name of your model hub (such as "hyzhou404/LTF_v0") and submit. Your program will then be executed online and will interact with our simulator.

If more than two teams submit simultaneously, your program will be placed in a queue until computational resources become available. If there are a large number of participants, we will consider increasing our computational resources.

## Time limitation

The total evaluation time must stay under 3 hours. Any submission that exceeds this limit will be disqualified.
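One way to avoid overrunning the limit is to track a wall-clock budget inside your own client loop and stop with a safety margin before the hard cutoff. A sketch (the class and names are ours, not part of the competition tooling):

```python
import time

EVAL_BUDGET_SECONDS = 3 * 60 * 60  # the 3-hour competition limit


class BudgetClock:
    """Track elapsed wall-clock time against the evaluation budget,
    keeping a safety margin so the loop stops before the hard limit."""

    def __init__(self, budget_s: float = EVAL_BUDGET_SECONDS,
                 margin_s: float = 300.0):
        self.start = time.monotonic()
        self.deadline = self.start + budget_s - margin_s

    def remaining(self) -> float:
        return self.deadline - time.monotonic()

    def expired(self) -> bool:
        return self.remaining() <= 0


clock = BudgetClock()
# Inside your evaluation loop you might write:
# while not clock.expired():
#     run_one_step(...)
```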

## Usage limitation

Submitted algorithms will run on a single T4 GPU with 16 GB of VRAM, along with 30 GB of system RAM and 8 virtual CPUs.

Due to resource constraints, each team is permitted to submit only once within a 24-hour period.

DO NOT attempt to manipulate the test data or use the GPU resources for any purpose outside of the competition. Any detected violation will result in a permanent ban for both the team and its members.