Latent Diffusion Model without Variational Autoencoder
Abstract
SVG is a novel latent diffusion model that replaces the VAE with self-supervised representations, enabling efficient training, few-step sampling, and high-quality visual generation while preserving semantic and discriminative capabilities.
Recent progress in diffusion-based visual generation has largely relied on latent diffusion models paired with variational autoencoders (VAEs). While effective for high-fidelity synthesis, this VAE+diffusion paradigm suffers from limited training efficiency, slow inference, and poor transferability to broader vision tasks. These issues stem from a key limitation of VAE latent spaces: the lack of clear semantic separation and strong discriminative structure. Our analysis confirms that these properties are crucial not only for perception and understanding tasks, but also for the stable and efficient training of latent diffusion models. Motivated by this insight, we introduce SVG, a novel latent diffusion model without variational autoencoders that leverages self-supervised representations for visual generation. SVG constructs a feature space with clear semantic discriminability from frozen DINO features, while a lightweight residual branch captures the fine-grained details needed for high-fidelity reconstruction. Diffusion models are trained directly on this semantically structured latent space, enabling more efficient learning. As a result, SVG accelerates diffusion training, supports few-step sampling, and improves generative quality. Experimental results further show that SVG preserves the semantic and discriminative capabilities of the underlying self-supervised representations, providing a principled pathway toward task-general, high-quality visual representations.
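The abstract describes the latent construction only at a high level, so a minimal sketch may help make it concrete. The snippet below is an illustration under stated assumptions, not the authors' implementation: it assumes a DINOv2 ViT-B/14 backbone as the frozen encoder, and the class name `SVGLatent`, the two-layer residual branch, and `residual_dim` are all hypothetical choices made for the example.

```python
import torch
import torch.nn as nn


class SVGLatent(nn.Module):
    """Minimal sketch of a VAE-free latent space in the spirit of SVG:
    frozen DINO features provide the semantic structure, and a small
    trainable residual branch adds the fine detail needed for
    high-fidelity reconstruction. Layer sizes and the DINO variant are
    illustrative assumptions, not the paper's configuration."""

    def __init__(self, residual_dim: int = 32):
        super().__init__()
        # Frozen self-supervised encoder (assumed: DINOv2 ViT-B/14).
        self.encoder = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
        self.encoder.eval()
        for p in self.encoder.parameters():
            p.requires_grad = False
        # Lightweight residual branch; downsamples a 224x224 input to the
        # 16x16 patch grid of the frozen encoder.
        self.residual = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),  # 224 -> 112
            nn.SiLU(),
            nn.Conv2d(64, residual_dim, kernel_size=7, stride=7),  # 112 -> 16
        )

    @torch.no_grad()
    def semantic_tokens(self, x: torch.Tensor) -> torch.Tensor:
        # (B, 256, 768) patch tokens -> (B, 768, 16, 16) feature map.
        tokens = self.encoder.forward_features(x)["x_norm_patchtokens"]
        b, n, c = tokens.shape
        h = w = int(n ** 0.5)
        return tokens.transpose(1, 2).reshape(b, c, h, w)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Latent = frozen semantic channels + learned fine-detail channels.
        # A diffusion model is then trained directly on this latent map,
        # and a decoder (omitted here) reconstructs pixels from it.
        return torch.cat([self.semantic_tokens(x), self.residual(x)], dim=1)
```

Under these assumptions, `SVGLatent()(x)` on a `(B, 3, 224, 224)` batch yields a `(B, 800, 16, 16)` latent map; the diffusion model would denoise in this space, whose semantic structure is what the abstract credits for faster training and few-step sampling.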
Community
We introduce SVG (Self-supervised representation for Visual Generation), a new paradigm for latent diffusion models (LDMs) that completely eliminates the traditional Variational Autoencoder (VAE).
The following related papers were recommended by the Semantic Scholar API:
- Adapting Self-Supervised Representations as a Latent Space for Efficient Generation (2025)
- Diffusion Transformers with Representation Autoencoders (2025)
- Aligning Visual Foundation Encoders to Tokenizers for Diffusion Models (2025)
- UniFlow: A Unified Pixel Flow Tokenizer for Visual Understanding and Generation (2025)
- VUGEN: Visual Understanding priors for GENeration (2025)
- Missing Fine Details in Images: Last Seen in High Frequencies (2025)
- Growing Visual Generative Capacity for Pre-Trained MLLMs (2025)