Lotus-2: Advancing Geometric Dense Prediction with Powerful Image Generative Model
Abstract
A two-stage deterministic framework, Lotus-2, leverages diffusion models' world priors for high-quality geometric inference, achieving state-of-the-art results in monocular depth estimation and competitive surface normal prediction with limited training data.
Recovering pixel-wise geometric properties from a single image is fundamentally ill-posed due to appearance ambiguity and non-injective mappings between 2D observations and 3D structures. While discriminative regression models achieve strong performance through large-scale supervision, their success is bounded by the scale, quality, and diversity of available data and by limited physical reasoning. Recent diffusion models exhibit powerful world priors that encode geometry and semantics learned from massive image-text data, yet directly reusing their stochastic generative formulation is suboptimal for deterministic geometric inference: the former is optimized for diverse and high-fidelity image generation, whereas the latter requires stable and accurate predictions. In this work, we propose Lotus-2, a two-stage deterministic framework for stable, accurate, and fine-grained geometric dense prediction, aiming to provide an optimal adaptation protocol that fully exploits the pre-trained generative priors. Specifically, in the first stage, the core predictor employs a single-step deterministic formulation with a clean-data objective and a lightweight local continuity module (LCM) to generate globally coherent structures without grid artifacts. In the second stage, the detail sharpener performs a constrained multi-step rectified-flow refinement within the manifold defined by the core predictor, enhancing fine-grained geometry through noise-free deterministic flow matching. Using only 59K training samples, less than 1% of the size of existing large-scale datasets, Lotus-2 establishes new state-of-the-art results in monocular depth estimation and highly competitive surface normal prediction. These results demonstrate that diffusion models can serve as deterministic world priors, enabling high-quality geometric reasoning beyond traditional discriminative and generative paradigms.
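The two-stage pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names (`core_predictor`, `detail_sharpener`) and the toy velocity field are hypothetical stand-ins, meant only to show the control flow of a single-step deterministic prediction followed by a noise-free, multi-step Euler integration of a rectified flow.

```python
# Hypothetical sketch of Lotus-2-style two-stage inference, based only on the
# abstract. All names and the linear "model" are illustrative placeholders.
import numpy as np

def core_predictor(latent):
    # Stage 1: single-step deterministic formulation with a clean-data
    # objective -- one forward pass maps the image latent directly to a
    # coarse geometry map (here a fixed linear map stands in for the
    # fine-tuned diffusion backbone with its local continuity module).
    return 0.9 * latent

def detail_sharpener(coarse, velocity_fn, steps=4):
    # Stage 2: constrained multi-step rectified-flow refinement. Starting
    # from the noise-free stage-1 output, integrate a velocity field with
    # plain Euler steps (deterministic flow matching; no noise injected).
    x = coarse
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t)  # Euler update along the flow
    return x

# Toy usage: refine a coarse depth map toward a sharper target. The target
# `fine` plays the role of detailed geometry the flow has learned to recover.
latent = np.linspace(0.0, 1.0, 8)
fine = latent
coarse = core_predictor(latent)
refined = detail_sharpener(coarse, velocity_fn=lambda x, t: fine - x)
```

Each Euler step moves the prediction a fraction of the way along the (toy) velocity field, so the residual error shrinks geometrically with the number of refinement steps.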
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- UniLumos: Fast and Unified Image and Video Relighting with Physics-Plausible Feedback (2025)
- Pixel-Perfect Depth with Semantics-Prompted Diffusion Transformers (2025)
- Visual Bridge: Universal Visual Perception Representations Generating (2025)
- WorldGrow: Generating Infinite 3D World (2025)
- PG-ControlNet: A Physics-Guided ControlNet for Generative Spatially Varying Image Deblurring (2025)
- MoRE: 3D Visual Geometry Reconstruction Meets Mixture-of-Experts (2025)
- Diff4Splat: Controllable 4D Scene Generation with Latent Dynamic Reconstruction Models (2025)