Is it possible to make this work with the Rapid AIO to get T2V + I2V in one model?
Thank you for the shoutout! I've been experimenting with the LoRA, but I'm not sure how to make it useful. I'm guessing it isn't working with native ComfyUI nodes yet?
I'd love to bring I2V and T2V together in a single "all in one" rapid model, but I'm unsure how to do it, or whether it's even possible. My T2V is a mix of the WAN 2.2 "low" model + WAN 2.2 Lightning + lightx2v. Adding in PUSA "low" didn't seem to enable I2V using the typical "WanImageToVideo" nodes. Any help is appreciated!
Hi @Phr00t , your work is really great, and thank you for integrating Pusa within your model!
Regarding ComfyUI compatibility: Pusa-Wan2.2 isn’t natively supported in ComfyUI just yet. However, if you’re working with the Wan2.2 low-noise model, it should be functionally similar to Wan2.1. For I2V support within ComfyUI, I’d recommend checking out Kijai’s integration of the Pusa pipeline for Wan2.1; I believe he’s implemented I2V support via a custom scheduler and shared a helpful example.
As for why WanImageToVideo nodes aren’t working: Pusa uses a vectorized timestep paradigm, where we directly set the first frame’s timestep to zero (or a small value) to enable I2V (the condition image is used as the first frame). This differs from the mainstream approach, so existing nodes may not handle it.
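To make the idea concrete, here is a minimal sketch of what a vectorized timestep looks like. This is purely illustrative: the function names, shapes, and the simple linear noising schedule are my own assumptions, not Pusa's actual implementation or API.

```python
import numpy as np

def vectorized_timesteps(num_frames, t, cond_t=0.0):
    """Build a per-frame timestep vector: every frame shares the global
    denoising timestep t, except the conditioned first frame, which is
    pinned to ~zero so the model treats it as (almost) clean.
    Hypothetical sketch of the vectorized timestep idea, not Pusa's API."""
    ts = np.full(num_frames, t, dtype=np.float32)
    ts[0] = cond_t  # first frame = condition image -> timestep near zero
    return ts

def noise_latents(latents, ts, noise):
    """Noise each frame according to its own timestep (a simple linear
    interpolation schedule, assumed here for illustration). A timestep
    of 0 leaves the frame untouched, which is how the condition image
    survives as the first frame."""
    ts = ts.reshape(-1, 1)  # broadcast per-frame timestep over latent dims
    return (1.0 - ts) * latents + ts * noise
```

With a scalar timestep (the mainstream approach), every frame would be noised equally, so there is no slot where a clean condition image can be injected; that mismatch is presumably why standard I2V nodes don't apply here.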
If you run into issues or have questions while testing Wan2.1 + Pusa or Wan2.2 + Pusa, I’d be happy to help debug! I’ll also look more into your project and see if I can contribute 🙌
So how do we use the LoRA for T2V? ComfyUI just throws a "lora key not loaded" error.
Does it work with GGUF models? I am getting a “lora key not loaded” error.
Hmm... we may have to wait for native ComfyUI support. I'm going for simplicity and speed, so asking people to delve into custom nodes goes a bit beyond the original goal. I was actually surprised there wasn't an issue on their GitHub asking for support, so I made one:
https://github.com/comfyanonymous/ComfyUI/issues/9684
I'm also a bit concerned about reports of an increased VRAM requirement: https://github.com/kijai/ComfyUI-WanVideoWrapper/issues/804#issuecomment-3089366981
I try to keep the "base" of my Rapid AIO as minimal as possible in both time and hardware requirements.
I've been thinking about the easiest way to bring this into native ComfyUI. It looks like Kijai lists the PUSA sampler/scheduler combo here (whereas KSampler separates the sampler and scheduler):
https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/wanvideo/schedulers/__init__.py#L11
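If the combined entry just bundles a sampler with a scheduler, mapping it onto KSampler's split inputs might be as simple as a lookup. The entry and names below are hypothetical placeholders, not ComfyUI's or the wrapper's actual registry:

```python
# Hypothetical sketch: a combined "sampler/scheduler" entry (as in the
# WanVideoWrapper scheduler list) vs. KSampler-style split inputs.
# The "pusa" entry and its sampler/scheduler names are made up here.
COMBINED = {
    "pusa": ("euler", "pusa_shift"),  # one entry bundles both choices
}

def to_ksampler_inputs(combined_name):
    """Split a combined entry into the separate (sampler_name, scheduler)
    values that a KSampler-style node expects."""
    sampler, scheduler = COMBINED[combined_name]
    return {"sampler_name": sampler, "scheduler": scheduler}
```

Whether native support is that simple likely depends on whether the PUSA scheduler's behavior (the zeroed first-frame timestep) can live entirely inside the scheduler, or needs changes to the sampling loop itself.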