Minimum VRAM

#4
by acleitao - opened

Amazing work, guys. I tried it on Hugging Face Spaces and it really sounds cool... now I tried to run it locally in a Docker container on my 3060 12GB and got an OOM. What is the minimum amount of VRAM for these models?

I can't run it myself because I don't even have enough RAM, but their HF Space uses ~41GB of VRAM, so I guess you should be good with 42GB.

Well, if that's true... it's a no-no for me. I'm gonna have to wait for the quantized version, lol... too bad.

For reference, it is using 17,476 MiB for me.

@uetuluk Which GPU are you using?

It can definitely run on a 24GB consumer GPU (I tested it); not sure about anything smaller.

Yes, I am using a 4090.

For me it runs on my 3060 and consumes 11.7GB of the 12GB of VRAM. Maybe it runs in half precision out of the box, since I don't use any arguments other than --port to start it? I'm using Windows 11, NVIDIA driver version 576.02, and CUDA 12.8.
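
If you want to confirm whether it really is running in half precision, a quick check like the one below works for any PyTorch model. This is a minimal sketch; `pipeline.model` is a hypothetical handle, so substitute whatever `nn.Module` the ACE-Step pipeline actually exposes:

```python
import torch

def report_precision(model: torch.nn.Module) -> None:
    """Print the parameter dtypes of a model and the current VRAM usage."""
    dtypes = {p.dtype for p in model.parameters()}
    print("parameter dtypes:", dtypes)  # {torch.float16} would confirm half precision
    if torch.cuda.is_available():
        print(f"VRAM allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")

# Hypothetical usage -- replace `pipeline.model` with whatever module
# the pipeline actually exposes:
# report_precision(pipeline.model)
```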

ACE-Step org

Please check out the latest update in the official GitHub repository. The minimum VRAM requirement for full-length generation is now just 8 GB. We tested it on an RTX 4060, and it delivers decent performance beyond our expectations (1.16 it/s).
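
For anyone curious how a VRAM requirement drops like this, one common technique (not necessarily what ACE-Step does, so check the repository for the actual change) is CPU offloading: idle sub-models live in system RAM and their weights are streamed to the GPU only when needed, trading some speed for memory. A minimal sketch using Hugging Face `accelerate`, where `pipeline.transformer` is a hypothetical module name:

```python
import torch
from accelerate import cpu_offload

def offload_to_cpu(module: torch.nn.Module) -> None:
    """Keep the module's weights in CPU RAM; stream them to the GPU on use."""
    cpu_offload(module, execution_device=torch.device("cuda"))

# Hypothetical usage -- `pipeline.transformer` stands in for whatever
# large sub-module the real pipeline exposes:
# offload_to_cpu(pipeline.transformer)
```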

Dude, I just saw it on Discord. Amazing... I'm just having a lot of trouble getting the Docker container to work. Tomorrow I will try again, and if I get it working I will send a PR. But thank you for this, it will be really fun to use.

When I first had a play with this, I used the Hugging Face Space, then downloaded the dockerized Space and modified the pipeline to use CPU only; it took about 8 hours to render 2 minutes of audio. Then I saw the updated version that runs on 8 GB of VRAM, and with a bit of struggling to get it to use the models I had already downloaded, I managed to get it working on a 4060 Ti (yay!). I had to include the call to main in the init Python file to get it to work. I also ended up modifying the Dockerfile to optimize layer separation, so that slight changes to the code don't mean re-downloading and installing the same Python libraries over and over again, and I modified the code to use /app/outputs properly rather than paths relative to the packages in /opt. There are some hiccups, but I'm still very happy to be able to run this locally. Thanks!
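
If anyone wants to make the same output-path fix, one way to do it is to resolve the output directory from an environment variable with an absolute default, instead of building it relative to the installed package. This is a sketch, not the actual ACE-Step code; `ACE_OUTPUT_DIR` is a made-up variable name:

```python
import os
from pathlib import Path

# Assumption: ACE_OUTPUT_DIR is a hypothetical env var, not an official one.
# The point is to anchor outputs at an absolute path (e.g. a mounted volume)
# instead of a directory relative to the installed package under /opt.
OUTPUT_DIR = Path(os.environ.get("ACE_OUTPUT_DIR", "/app/outputs"))
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

def output_path(filename: str) -> Path:
    """Build an absolute path for a generated file inside the output dir."""
    return OUTPUT_DIR / filename
```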
