---
title: Omniseal Leaderboard
emoji: 🦀
colorFrom: red
colorTo: green
sdk: docker
pinned: false
short_description: Leaderboard for watermarking models
---

## Docker Build Instructions

### Prerequisites

- Docker installed on your system
- Git repository cloned locally

### Build Steps (conda)

1. Initialize the conda environment:

   ```bash
   cd backend
   conda env create -f environment.yml -y
   conda activate omniseal-benchmark-backend
   ```

2. Build the frontend (outputs HTML, JS, and CSS into `frontend/dist`). You only need this if you are updating the frontend; the repository already has a build checked in at `frontend/dist`:

   ```bash
   cd frontend
   npm install
   npm run build -- --mode prod
   ```

3. Run the backend server from the project root. This also serves the frontend files:

   ```bash
   gunicorn --chdir backend -b 0.0.0.0:7860 app:app --reload
   ```

4. The server will be running at `http://localhost:7860`.

### Build Steps (Docker, Hugging Face)

1. Build the Docker image from the project root:

   ```bash
   docker build -t omniseal-benchmark .
   ```

   or:

   ```bash
   docker buildx build -t omniseal-benchmark .
   ```

2. Run the container. The `-v` argument mounts the backend directory into the container so the server hot-reloads when you update Python files in `backend`:

   ```bash
   docker run -p 7860:7860 -v $(pwd)/backend:/app/backend omniseal-benchmark
   ```

3. Access the application at `http://localhost:7860`.

### Local Development

When updating the backend, either set of build steps above gives you hot-reload, so you don't have to restart the server.

For the frontend:

1. Create a `.env.local` file in the `frontend` directory and set `VITE_API_SERVER_URL` to where your backend server is running; when running locally, this is `VITE_API_SERVER_URL=http://localhost:7860`. This overrides the configuration in `.env`, so the frontend will connect to the backend URL of your choice.
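   For reference, a minimal `.env.local` matching the local setup described here (the variable name and value are taken from this README):

   ```bash
   # frontend/.env.local: point the frontend at the locally running backend
   VITE_API_SERVER_URL=http://localhost:7860
   ```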
2. Run the development server with hot-reload:

   ```bash
   cd frontend
   npm install
   npm run dev
   ```

### Local Datasets

By default, datasets are loaded over the network based on `backend/config.py`. See that file and modify it if you are loading different datasets.

`ABS_DATASET_DOMAIN` and `ABS_DATASET_PATH` control where datasets are loaded from and are used in `DATASET_CONFIGS` and `EXAMPLE_CONFIGS`. Any datasets and examples you add must be registered in these constants to be visualized in the UI.

There is commented-out code that points `ABS_DATASET_DOMAIN` at the `backend/data` directory. There you can see the formats of the CSV/JSON files required to render the leaderboard and the examples. By default, the `data` directory matches the path structure used for loading over the network. Each dataset should be placed under `data/omnisealbench` as a directory, e.g. `data/omnisealbench/sav_val_full_v2`, containing the files:

- `{type}_benchmark.csv` for the leaderboard tables
- `{type}_attacks_variations.csv` for the leaderboard chart
- `examples_eval_results.json` for the examples

See the reference CSV and JSON files for what these need to look like.