## Testing your model locally

Please refer to the guidance in the "Dataset" page. Make sure your model works well on your own machine before your submission.

This guide walks you through running the hugsim_server locally. If you can complete this setup on your machine, the same approach will work in the competition environment.
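
Before starting, it is worth confirming that Docker can access your GPU, since the server runs inside a GPU-enabled container. A minimal sanity check, assuming the NVIDIA driver and the NVIDIA Container Toolkit are already installed (the CUDA image tag below is only illustrative):

```bash
# Host-side checks: Docker and the NVIDIA driver are available
docker --version
nvidia-smi

# GPU passthrough into a container works (illustrative CUDA base image)
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```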

**1. Clone the Repository**

Clone the server code:

```bash
git clone https://github.com/hyzhou404/HUGSIM_Local_Server.git
```
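
If you keep the default clone path, the project directory used in the next step is simply the repository name (adjust the path if you cloned elsewhere):

```bash
# Enter the cloned project directory
cd HUGSIM_Local_Server
```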

**2. Build the Docker Image**

Navigate to the project directory and build the Docker image:

```bash
docker build . -f ./docker/web_server_dockerfile -t hugsim_server:local
```

If you are using a proxy, use the following command instead:

```bash
docker build --network host . -f ./docker/web_server_dockerfile_mirror -t hugsim_server:local
```
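
Either way, you can confirm the image was built before moving on by listing it with the tag used above:

```bash
# Should show the freshly built hugsim_server:local image
docker image ls hugsim_server:local
```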

**3. Run the Docker Container**

Start the container:

```bash
docker run --gpus "device=1" -d -p 7860:7860 \
  -v /path/to/your/downloaded/training_data:/app/app_datas/ \
  -v ./code:/app/code \
  -v ./output:/app/app_datas/env_output \
  --name hugsim_server hugsim_server:local tail -f /dev/null
```
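
Note that `--gpus "device=1"` pins the container to GPU index 1; on a single-GPU machine you would typically use `"device=0"` or `all` instead. Because the container is started detached with `tail -f /dev/null`, it simply idles until you exec into it; you can confirm it is up with:

```bash
# The hugsim_server container should be listed with status "Up"
docker ps --filter "name=hugsim_server"
```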

Get inside the container:

```bash
docker exec -u root -it hugsim_server /bin/bash
```

Launch the server:

```bash
pixi run python code/web_server.py
```
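
Once the server is running, you can do a quick reachability check from the host; port 7860 is published by the `docker run` command above. This is only a connectivity probe (the exact routes are defined in `web_server.py`), so any HTTP status code indicates the server is listening:

```bash
# Prints an HTTP status code if the server is reachable on the mapped port
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7860/
```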

**4. Launch the Client**

We provide an LTF implementation example: [LTF Demo](https://huggingface.co/XDimLab/ICCV2025-RealADSim-ClosedLoop-SubmissionDemo)
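
The demo is a regular Hugging Face model repository, so one way to fetch it is with git (a hedged sketch; Git LFS may be required if the repository contains large files):

```bash
# Clone the LTF demo and enter its directory
git clone https://huggingface.co/XDimLab/ICCV2025-RealADSim-ClosedLoop-SubmissionDemo
cd ICCV2025-RealADSim-ClosedLoop-SubmissionDemo
```

Then, inside that directory, install the environment and run the end-to-end script: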

```shell
pixi install
pixi run python ltf_e2e.py
```

You should then see the LTF client interacting with the simulator server.
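
Given the volume mounts in step 3, anything the simulator writes to `/app/app_datas/env_output` inside the container should show up under `./output` on the host (relative to the directory where you ran `docker run`); a quick check:

```bash
# Simulation outputs from the container should appear here on the host
ls ./output
```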

## Submission

The model and code, including the Dockerfile, should be uploaded to a single Hugging Face model repository. We recommend setting the repository to private.