---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-I2V-14B-480P
- Wan-AI/Wan2.1-I2V-14B-480P-Diffusers
pipeline_tag: image-to-video
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- image-to-video
widget:
- text: >-
    The video starts with an image of a woman. The m0n4 Mona Lisa
    transformation begins as a dark sheet seems to wrap around the woman, and
    when the image resolves, the woman is depicted as a Mona Lisa version of
    herself. The Mona Lisa version of the woman sits in a chair with a
    backdrop featuring a landscape painting.
  output:
    url: example_videos/woman_mona_lisa.mp4
- text: >-
    The video starts with an image of a man wearing a suit. The m0n4 Mona
    Lisa transformation begins as a dark sheet seems to wrap around him, and
    when the image resolves, he is depicted as a Mona Lisa version of
    himself. The Mona Lisa version sits in a chair with a backdrop featuring
    a landscape painting.
  output:
    url: example_videos/man_mona_lisa.mp4
---
This LoRA is trained on the Wan2.1 14B I2V 480p model and lets you transform any person or object in an image into a Mona Lisa version of itself!
The key trigger phrase is: `m0n4 Mona Lisa transformation`
For best results, try following the structure of the prompt examples above. These worked well for me.
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
Training was done using diffusion-pipe.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!