Apply for community grant: Academic project (gpu)
This is a demo of our work "Mr. DETR", a general, high-performance DETR-like model that has been accepted to CVPR 2025. In the future, we will provide demos for multiple tasks beyond object detection, including instance segmentation, panoptic segmentation, semantic segmentation, and pose estimation. Thanks for your support and consideration!
Hi @allencbzhang , we've assigned ZeroGPU to this Space. Please check the compatibility and usage sections of this page so your Space can run on ZeroGPU.
Hi @hysts , thanks for your support! There are some CUDA operations in my Space, so I need to compile them with CUDA at install time. Specifically, I need to set CUDA_HOME; however, I find that there is no CUDA_PATH. I have tried to find the CUDA_PATH as follows, but it always returns None:
import os
import shutil

import spaces


@spaces.GPU
def find_cuda():
    # Check if CUDA_HOME or CUDA_PATH environment variables are set
    cuda_home = os.environ.get('CUDA_HOME') or os.environ.get('CUDA_PATH')
    if cuda_home and os.path.exists(cuda_home):
        return cuda_home

    # Search for the nvcc executable in the system's PATH
    nvcc_path = shutil.which('nvcc')
    if nvcc_path:
        # Remove the 'bin/nvcc' part to get the CUDA installation path
        cuda_path = os.path.dirname(os.path.dirname(nvcc_path))
        return cuda_path

    return None
How can I set the path of CUDA_HOME?
@allencbzhang CUDA dev tools are not available in ZeroGPU Spaces. A common workaround is to pre-build your package in your local environment with CUDA.
For example, in this Space, the diff_gaussian_rasterization package requires CUDA to compile, but they pre-compiled the package in their local env, added the pre-built wheel to their repo, and then installed it like this at startup time.
I'm not sure if the same approach works for the packages you need, but could you give it a try?
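For reference, the startup-time install can be as simple as a pip call at the top of app.py, before importing the package. This is a minimal sketch; the wheel path and filename here are hypothetical:

# Hypothetical startup-time install of a pre-built wheel committed to the
# Space repo; adjust the path and filename to the wheel you actually built.
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "pip", "install",
     "wheels/detrex-0.5.0-cp310-cp310-linux_x86_64.whl"],
    check=True,
)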
Hi @hysts ,
Thanks for your reply. I have tried to build the package in my local environment with CUDA; however, it still raises the error cannot import detrex._C when installing the wheel on ZeroGPU.
The error is caused by CUDA_HOME not being set. I don't know how to solve this problem. Thanks.
@allencbzhang Thanks for testing ZeroGPU! Looks like the wheel is broken for some reason. I built a wheel and opened a PR. Could you take a look at it?
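(By the way, a quick way to check whether a wheel actually bundles the compiled extension is to list its contents, since a wheel is just a zip archive. A sketch, with a hypothetical filename:)

# A working detrex build should include a compiled extension such as
# detrex/_C.cpython-310-x86_64-linux-gnu.so in the listing.
unzip -l detrex-0.5.0-cp310-cp310-linux_x86_64.whl | grep _C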
Hi @hysts ,
Thank you so much for your hard work and the effort you've put into resolving this issue! I truly appreciate the time and dedication you’ve contributed to making this project better. It's been a great help, and I’m sincerely grateful for your support.
I also have a quick question regarding the .whl file packaging process. Could you kindly share some details or point me to the relevant documentation or scripts? Specifically, I’d like to understand how the .whl file is being built and packaged.
Looking forward to hearing from you, and thank you again for your assistance!
@allencbzhang Thanks! Glad to hear that it worked!
As for the .whl packaging process, I usually use a docker image built from the following Dockerfile to build wheels for ZeroGPU.
FROM nvidia/cuda:12.4.0-devel-ubuntu22.04

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y --no-install-recommends \
    git \
    git-lfs \
    wget \
    curl \
    # python build dependencies \
    build-essential \
    libssl-dev \
    zlib1g-dev \
    libbz2-dev \
    libreadline-dev \
    libsqlite3-dev \
    libncursesw5-dev \
    xz-utils \
    tk-dev \
    libxml2-dev \
    libxmlsec1-dev \
    libffi-dev \
    liblzma-dev \
    # gradio dependencies \
    ffmpeg && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

RUN useradd -m -u 1000 user
USER user
ENV HOME=/home/user \
    PATH=/home/user/.local/bin:${PATH}

RUN curl https://pyenv.run | bash
ENV PATH=${HOME}/.pyenv/shims:${HOME}/.pyenv/bin:${PATH}
ARG PYTHON_VERSION=3.10.16
RUN pyenv install ${PYTHON_VERSION} && \
    pyenv global ${PYTHON_VERSION} && \
    pyenv rehash && \
    pip install --no-cache-dir -U pip setuptools wheel ninja

RUN pip install packaging
RUN pip install torch==2.4.0

ENV TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0 7.5 8.0 8.6 8.9"
ENV TCNN_CUDA_ARCHITECTURES="89;86;80;75;70;61;60"
ENV FORCE_CUDA=1
ENV CUDA_HOME=/usr/local/cuda

WORKDIR /work

CMD ["python", "setup.py", "bdist_wheel"]
This time, I made a docker image wheel-builder from it, and then ran the following commands to get the wheel.
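Building the image first is a standard docker build (a sketch, assuming the Dockerfile above is saved as Dockerfile in the working directory):

# Build the wheel-builder image from the Dockerfile above
docker build -t wheel-builder .

Then, to build the wheel: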
git clone --recursive https://github.com/IDEA-Research/detrex
cd detrex
docker run --rm -v `pwd`:/work --gpus all wheel-builder
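The wheel itself should end up in dist/, the default output directory of setup.py bdist_wheel, which maps back to the host because the repo is mounted at /work:

ls dist/
# detrex-<version>-cp310-cp310-linux_x86_64.whl  (exact name depends on the build)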
(This setup usually works, but depending on the situation, you might need to adjust the versions of CUDA, Python, and torch, as well as CUDA-related environment variables.)
Hope this helps.