DiffDecompose: Layer-Wise Decomposition of Alpha-Composited Images via Diffusion Transformers
Abstract
DiffDecompose is a diffusion Transformer-based framework that decomposes alpha-composited images into their constituent layers using semantic prompts, addressing key challenges in transparent layer decomposition.
Diffusion models have recently achieved great success in many generation tasks such as object removal. Nevertheless, existing image decomposition methods struggle to disentangle semi-transparent or transparent layer occlusions due to their reliance on mask priors, static object assumptions, and the lack of suitable datasets. In this paper, we delve into a novel task: Layer-Wise Decomposition of Alpha-Composited Images, which aims to recover the constituent layers from a single overlapped image under non-linear occlusion by semi-transparent or transparent alpha layers. To address the challenges of layer ambiguity, generalization, and data scarcity, we first introduce AlphaBlend, the first large-scale, high-quality dataset for transparent and semi-transparent layer decomposition, supporting six real-world subtasks (e.g., translucent flare removal, semi-transparent cell decomposition, glassware decomposition). Building on this dataset, we present DiffDecompose, a diffusion Transformer-based framework that learns the posterior over possible layer decompositions conditioned on the input image, semantic prompts, and blending type. Rather than regressing alpha mattes directly, DiffDecompose performs In-Context Decomposition, enabling the model to predict one or multiple layers without per-layer supervision, and introduces Layer Position Encoding Cloning to maintain pixel-level correspondence across layers. Extensive experiments on the proposed AlphaBlend dataset and the public LOGO dataset verify the effectiveness of DiffDecompose. The code and dataset will be released upon paper acceptance at: https://github.com/Wangzt1121/DiffDecompose.
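For reference, the forward process this task inverts is alpha compositing. The sketch below (not the authors' code) shows the common linear "over" operator, I = αF + (1 − α)B; the paper additionally targets more general non-linear blends.

```python
import numpy as np

def alpha_composite(foreground, alpha, background):
    """Standard 'over' alpha compositing: I = alpha * F + (1 - alpha) * B.

    foreground, background: float arrays in [0, 1], shape (H, W, 3).
    alpha: float array in [0, 1], shape (H, W, 1), the foreground opacity.
    """
    return alpha * foreground + (1.0 - alpha) * background

# Toy example: a 50%-opaque red layer over a white background.
fg = np.zeros((2, 2, 3)); fg[..., 0] = 1.0   # pure red foreground layer
bg = np.ones((2, 2, 3))                      # white background layer
a = np.full((2, 2, 1), 0.5)                  # uniform 50% opacity

composite = alpha_composite(fg, a, bg)
print(composite[0, 0])  # each pixel blends to [1.0, 0.5, 0.5]
```

Layer-wise decomposition asks the reverse question: given only `composite`, recover `fg`, `bg`, and `a`, which is ill-posed because many layer combinations produce the same blend.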
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Retinex-guided Histogram Transformer for Mask-free Shadow Removal (2025)
- PrismLayers: Open Data for High-Quality Multi-Layer Transparent Image Generative Models (2025)
- PSDiffusion: Harmonized Multi-Layer Image Generation via Layout and Appearance Alignment (2025)
- HAODiff: Human-Aware One-Step Diffusion via Dual-Prompt Guidance (2025)
- Freqformer: Image-Demoiréing Transformer via Efficient Frequency Decomposition (2025)
- DeeCLIP: A Robust and Generalizable Transformer-Based Framework for Detecting AI-Generated Images (2025)
- InstaRevive: One-Step Image Enhancement via Dynamic Score Matching (2025)