gguf quantized version of pixart
setup (once)
- drag pixart-xl-2-1024-ms-q4_k_m.gguf [1GB] to > ./ComfyUI/models/diffusion_models
- drag t5xxl_fp16-q4_0.gguf [2.9GB] to > ./ComfyUI/models/text_encoders
- drag pixart_vae_fp8_e4m3fn.safetensors [83.7MB] to > ./ComfyUI/models/vae
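The three drag-and-drop steps above can be sketched as a small script; this is a hypothetical helper (the function name and `download_dir` parameter are made up here), using only the file names and target folders stated in this card, with the download step itself left out:

```python
# Hypothetical helper: move the downloaded pixart files into the
# ComfyUI model folders listed above (file names from this card).
from pathlib import Path
import shutil

# file name -> target subfolder under ./ComfyUI/models
TARGETS = {
    "pixart-xl-2-1024-ms-q4_k_m.gguf": "diffusion_models",
    "t5xxl_fp16-q4_0.gguf": "text_encoders",
    "pixart_vae_fp8_e4m3fn.safetensors": "vae",
}

def place_models(download_dir: str, comfy_root: str = "./ComfyUI") -> list:
    """Move each known file from download_dir into its ComfyUI folder."""
    placed = []
    for name, sub in TARGETS.items():
        src = Path(download_dir) / name
        if not src.exists():
            continue  # skip files that were not downloaded
        dst = Path(comfy_root) / "models" / sub / name
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dst))
        placed.append(dst)
    return placed
```

Dragging the files in manually is equivalent; the script just saves repeating the step on a fresh ComfyUI install.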
run it straight (no installation needed)
- run the .bat file in the main directory (assuming you are using the gguf-node pack below)
- drag the workflow json file (below) or the demo picture above to > your browser
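Besides dragging the workflow json into the browser, a running ComfyUI instance can also be driven over HTTP. A minimal sketch, assuming the default server address (127.0.0.1:8188) and a workflow exported in ComfyUI's API format; the helper names here are made up for illustration:

```python
# Sketch: queue a workflow json against a running ComfyUI server
# instead of dragging it into the browser tab.
import json
import urllib.request

def build_payload(graph: dict) -> bytes:
    """Wrap an API-format workflow graph for ComfyUI's /prompt endpoint."""
    return json.dumps({"prompt": graph}).encode("utf-8")

def queue_workflow(path: str, server: str = "127.0.0.1:8188") -> bytes:
    """POST the workflow json at `path` to the ComfyUI server."""
    with open(path) as f:
        graph = json.load(f)
    req = urllib.request.Request(
        f"http://{server}/prompt",
        data=build_payload(graph),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```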
workflow
- example workflow for gguf
- example workflow for safetensors
review
- set the output image size according to the model variant, i.e., 1024x1024 or 512x512
- small model but good-quality pictures; supports image-to-image and image-text-to-image; the t5 encoder lets you input a short description or sentence instead of tag(s)
- upgrade your gguf-node to the latest version for pixart model support
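Matching the output size to the model variant, as noted above, can be done directly in the workflow json. A minimal sketch, assuming the workflow is in ComfyUI's API format and uses the standard EmptyLatentImage node for the output size (the function name here is hypothetical):

```python
# Sketch: set width/height in a workflow graph to match the model
# variant (1024x1024 for the 1024-ms checkpoint, 512x512 otherwise).
def set_output_size(graph: dict, size: int) -> dict:
    """Set width/height on every EmptyLatentImage node in the graph."""
    for node in graph.values():
        if node.get("class_type") == "EmptyLatentImage":
            node["inputs"]["width"] = size
            node["inputs"]["height"] = size
    return graph
```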
reference
- base model: PixArt-alpha/PixArt-XL-2-1024-MS
- comfyui comfyanonymous
- gguf-node (pypi|repo|pack)