FLUX TRELLIS
3D Generation from text prompts
Whoohoo!!! Running on Zero GPU!!!
https://huggingface.co/spaces/DegMaTsu/ComfyUI-Reactor-Fast-Face-Swap
Here are a couple of tips for this great tutorial (from a complete newbie to this whole 'programming' thing).
Step-by-step.
1/ Compare code A video of the code changing is cool, but the most useful thing is a direct code comparison. Links to a code-comparison site are cool too, but a better solution (for me) is to download the files and view them in Notepad++ with the Compare plugin enabled. Use it!
2/ Link to model - folders If your model in the donor's repo is not in the 'models' folder but in some subfolder, your code needs to look like this:
```python
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="martintomov/comfy",
    filename="facerestore_models/GPEN-BFR-512.onnx",
    local_dir="models/facerestore_models",
)
```
Look at the 'filename' argument: the model file lives in the "martintomov/comfy" repo inside its facerestore_models subfolder.
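To make that mapping clearer, here is a tiny sketch (the helper name is mine, not from the tutorial): the 'filename' argument is a path relative to the donor repo, so it can contain subfolders, which you can split off like this:

```python
import posixpath

# Hypothetical helper: the `filename` argument of hf_hub_download is a
# path *relative to the donor repo*, so it can contain subfolders.
def split_repo_path(filename):
    subfolder, name = posixpath.split(filename)
    return subfolder, name

print(split_repo_path("facerestore_models/GPEN-BFR-512.onnx"))
# → ('facerestore_models', 'GPEN-BFR-512.onnx')
```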
3/ Name project - without spaces The project name must not contain spaces (only "-", "." or "_" are allowed as separators).
So I used 'ComfyUI-Reactor-Fast-Face-Swap' as the name (as an example).
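A quick way to check a candidate name against that rule (a sketch based only on the characters listed above: letters, digits, "-", "." and "_"):

```python
import re

# Sketch: accept only letters, digits, '-', '.' and '_' -- the
# characters the tip above says are allowed in a Space name.
def valid_space_name(name):
    return re.fullmatch(r"[A-Za-z0-9._-]+", name) is not None

print(valid_space_name("ComfyUI-Reactor-Fast-Face-Swap"))  # True
print(valid_space_name("My Cool Project"))  # False (contains spaces)
```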
4/ Readme.md - config You can't use the default ComfyUI README.md on Hugging Face Spaces. You need a specially modified README.md whose first ~11 lines are a configuration header. Download the README.md from any working Space and see how it should look.
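For reference, that header is a YAML block between `---` markers at the very top of README.md. A minimal sketch (the field values here are illustrative; copy the exact sdk and sdk_version from a working Space, as the tip says):

```yaml
---
title: ComfyUI Reactor Fast Face Swap
emoji: 🦀
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
---
```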
5/ How to start uploading files? To upload files, click 'Files' (top right corner), then 'Contribute' → 'Upload files'. Then drag and drop your files, and finally click 'Commit changes to main' at the bottom.
You may ask: what is the option "Open as a pull request to the main branch"? It's for collaborative work on other people's Spaces (suggesting changes). You don't need it (yet).
6/ Don't upload the .git folder Hugging Face does not allow uploading a .git folder. Exclude it from your project.
In the tutorial author's Space you can see an uploaded .git folder, but when I tried to do the same it caused a persistent error. ChatGPT says: "Don't upload this folder."
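If your local project copy was cloned with git, a one-line sketch for dropping the hidden folder before uploading (the folder name is my example; note this permanently deletes the local git history in that copy):

```shell
# Remove the hidden .git folder so it never reaches the upload dialog.
rm -rf ComfyUI-Reactor-Fast-Face-Swap/.git
```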
7/ Runtime Error - no NVIDIA drivers The first time you launch your program on a Space, you get the error "Runtime Error - no NVIDIA drivers". As I understand it, this is because you are still on the free CPU-only basic hardware. You now need to ask for a ZeroGPU grant, and once you get it, this error is gone.
8/ Grant asking - link It's hard to figure out where to ask for the grant. Here is the link:
https://huggingface.co/docs/hub/spaces-gpus#community-gpu-grants
Fill in the form; the result will appear as a thread in the 'Community' section of your Space (with the grant request as the thread title).
Now you need to wait and pray.
I got ZeroGPU after only one day of waiting. Woohoo! Cool! Thanks!
9/ Model upload - After starting to run on ZeroGPU, many errors occurred (locally everything worked fine, but online...). I solved them step by step by consulting ChatGPT (very helpful!). Only one error I couldn't solve: the program still tried to find the Face Restore model in the /home/user/app/models/facerestore_models folder!!
But the tutorial's author tells us it's impossible (it causes an error) to upload a 'models' folder (via drag and drop), and that you need to find a way to download the model into your project via code from 'anywhere'.
But ChatGPT said: "What? Just do it! Do the drag-and-drop upload." (Warning: 1 GB limit.)
I prepared the two nested folders "models/facerestore_models" with a txt file inside, and successfully uploaded them to the Space (via drag and drop)!
Next I uploaded the GPEN-BFR-512.onnx model into the folder the program was asking for, and WHOOHOO!!! the program started to work!!!
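The folder trick above can also be sketched in a few lines of Python, run locally before uploading (the placeholder file name is mine; the point is that folders need a file inside to survive the upload):

```python
import os

# Recreate the folder structure the app expects, with a placeholder
# file so the (otherwise empty) folders survive the drag-and-drop upload.
target = os.path.join("models", "facerestore_models")
os.makedirs(target, exist_ok=True)

with open(os.path.join(target, "placeholder.txt"), "w") as f:
    f.write("placeholder so this folder exists in the Space repo\n")

print(os.path.isdir(target))  # True
```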
I hope this couple of tips helps somebody and makes this great tutorial even greater! 🎅✨
You may also need to run:

```shell
pip install spaces
```

before adding this code:

```python
import spaces
```
ChatGPT says it is wrong (a slower workflow) to import all the custom_nodes on every generation:

```python
def generate_image(prompt, structure_image, style_image, depth_strength, style_strength):
    import_custom_nodes()
    with torch.inference_mode():
        ...
```

Instead, they suggest doing the import ONCE, outside of 'generate_image':

```python
import_custom_nodes()
from nodes import NODE_CLASS_MAPPINGS

@spaces.GPU
def generate_image(prompt, structure_image, style_image, depth_strength, style_strength):
    with torch.inference_mode():
        ...
```