IllumiCraft: Unified Geometry and Illumination Diffusion for Controllable Video Generation
Abstract
IllumiCraft integrates geometric cues into a diffusion framework to generate high-fidelity, temporally coherent videos from text or image inputs.
Although diffusion-based models can generate high-quality, high-resolution video sequences from textual or image inputs, they lack explicit integration of geometric cues when controlling scene lighting and visual appearance across frames. To address this limitation, we propose IllumiCraft, an end-to-end diffusion framework that accepts three complementary inputs: (1) high-dynamic-range (HDR) video maps for detailed lighting control; (2) synthetically relit frames with randomized illumination changes (optionally paired with a static background reference image) to provide appearance cues; and (3) 3D point tracks that capture precise 3D geometry information. By integrating the lighting, appearance, and geometry cues within a unified diffusion architecture, IllumiCraft generates temporally coherent videos aligned with user-defined prompts. It supports background-conditioned and text-conditioned video relighting and achieves higher fidelity than existing controllable video generation methods. Project Page: https://yuanze-lin.me/IllumiCraft_page
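For readers who want a concrete picture of how three conditioning streams could be fused for a video diffusion backbone, here is a minimal PyTorch sketch. It is not the authors' implementation: the encoder design, channel sizes, rasterized point-track input, and concatenation-based fusion are illustrative assumptions based only on the abstract's description of lighting, appearance, and geometry cues.

```python
# Minimal sketch (not the official IllumiCraft code): encode HDR lighting maps,
# relit appearance frames, and rasterized 3D point tracks, then fuse them into
# one conditioning tensor that a video diffusion backbone could consume.
# All module names, channel sizes, and the fusion strategy are assumptions.
import torch
import torch.nn as nn


class ConditionEncoder(nn.Module):
    """Projects one conditioning stream of shape (B, T, C, H, W) into a shared latent space."""

    def __init__(self, in_channels: int, latent_channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, latent_channels, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv3d(latent_channels, latent_channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Conv3d expects (B, C, T, H, W), so move the time axis after channels.
        return self.net(x.permute(0, 2, 1, 3, 4))


class UnifiedConditioner(nn.Module):
    """Fuses lighting, appearance, and geometry cues into a single conditioning tensor."""

    def __init__(self, latent_channels: int = 64):
        super().__init__()
        self.hdr_enc = ConditionEncoder(3, latent_channels)   # HDR video maps
        self.app_enc = ConditionEncoder(3, latent_channels)   # synthetically relit frames
        self.geo_enc = ConditionEncoder(3, latent_channels)   # rasterized 3D point tracks
        self.fuse = nn.Conv3d(3 * latent_channels, latent_channels, kernel_size=1)

    def forward(self, hdr, relit, tracks):
        # Concatenate the per-stream features along channels, then mix with a 1x1x1 conv.
        feats = torch.cat(
            [self.hdr_enc(hdr), self.app_enc(relit), self.geo_enc(tracks)], dim=1
        )
        return self.fuse(feats)  # (B, latent_channels, T, H, W), injected into the diffusion backbone


if __name__ == "__main__":
    B, T, H, W = 1, 8, 64, 64
    cond = UnifiedConditioner()
    out = cond(
        torch.randn(B, T, 3, H, W),
        torch.randn(B, T, 3, H, W),
        torch.randn(B, T, 3, H, W),
    )
    print(out.shape)  # torch.Size([1, 64, 8, 64, 64])
```

In this sketch the fused tensor would be added to (or concatenated with) the noisy video latents at each denoising step; how the cues are actually injected into the architecture is described in the paper, not here.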
Community
We propose a unified diffusion architecture that jointly incorporates illumination and geometry guidance, enabling high-quality video relighting. It supports both text-conditioned and background-conditioned relighting for videos.
Project Page: https://yuanze-lin.me/IllumiCraft_page/
GitHub Page: https://github.com/yuanze-lin/IllumiCraft
YouTube Video: https://www.youtube.com/watch?v=qAV58sADEzo
For more controllable video generation results, please check our project page.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- OmniVDiff: Omni Controllable Video Diffusion for Generation and Understanding (2025)
- LMP: Leveraging Motion Prior in Zero-Shot Video Generation with Diffusion Transformer (2025)
- HoloTime: Taming Video Diffusion Models for Panoramic 4D Scene Generation (2025)
- MAGREF: Masked Guidance for Any-Reference Video Generation (2025)
- SViMo: Synchronized Diffusion for Video and Motion Generation in Hand-object Interaction Scenarios (2025)
- Modular-Cam: Modular Dynamic Camera-view Video Generation with LLM (2025)
- CamContextI2V: Context-aware Controllable Video Generation (2025)