Glance: Accelerating Diffusion Models with 1 Sample
Abstract
Using phase-aware LoRA adapters, diffusion models achieve efficient acceleration and strong generalization with minimal retraining.
Diffusion models have achieved remarkable success in image generation, yet their deployment remains constrained by heavy computational cost and the need for numerous inference steps. Previous efforts on fewer-step distillation attempt to skip redundant steps by training compact student models, yet they often suffer from heavy retraining costs and degraded generalization. In this work, we take a different perspective: we accelerate smartly, not evenly, applying smaller speedups to early semantic stages and larger ones to later redundant phases. We instantiate this phase-aware strategy with two experts that specialize in the slow and fast denoising phases. Surprisingly, instead of investing massive effort in retraining student models, we find that simply equipping the base model with lightweight LoRA adapters achieves both efficient acceleration and strong generalization. We refer to these two adapters as Slow-LoRA and Fast-LoRA. Through extensive experiments, our method achieves up to 5× acceleration over the base model while maintaining comparable visual quality across diverse benchmarks. Remarkably, the LoRA experts are trained with only 1 sample on a single V100 within one hour, yet the resulting models generalize strongly to unseen prompts.
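To make the phase-aware idea concrete, below is a minimal sketch of a sampler that spends more, finer-grained steps in the early semantic phase with a "slow" LoRA expert and fewer, coarser steps in the late redundant phase with a "fast" expert. The toy denoiser, the 0.4 phase boundary, the per-phase step counts, and the adapter names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of phase-aware sampling with two switchable LoRA experts.
# Assumptions: toy backbone, phase boundary, and step budgets are placeholders.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer with two switchable low-rank adapters (W·x + B·A·x)."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        # One (A, B) pair per expert: "slow" for the early semantic phase,
        # "fast" for the later, more redundant phase.
        self.adapters = nn.ModuleDict({
            name: nn.ParameterDict({
                "A": nn.Parameter(torch.randn(rank, base.in_features) * 0.01),
                "B": nn.Parameter(torch.zeros(base.out_features, rank)),
            })
            for name in ("slow", "fast")
        })
        self.active = "slow"

    def forward(self, x):
        ad = self.adapters[self.active]
        return self.base(x) + x @ ad["A"].t() @ ad["B"].t()


class ToyDenoiser(nn.Module):
    """Stand-in for a diffusion backbone; predicts noise from (x_t, t)."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.inp = LoRALinear(nn.Linear(dim + 1, 64))
        self.out = LoRALinear(nn.Linear(64, dim))

    def set_adapter(self, name: str):
        # Switch every LoRA layer to the requested expert.
        for m in self.modules():
            if isinstance(m, LoRALinear):
                m.active = name

    def forward(self, x, t):
        h = torch.cat([x, t.expand(x.shape[0], 1)], dim=-1)
        return self.out(torch.relu(self.inp(h)))


@torch.no_grad()
def phase_aware_sample(model: ToyDenoiser, dim: int = 16,
                       slow_steps: int = 6, fast_steps: int = 2,
                       boundary: float = 0.4):
    """Denoise from t=1 to t=0: many small steps early (Slow-LoRA),
    few large steps late (Fast-LoRA)."""
    x = torch.randn(1, dim)
    # Early, semantics-forming phase: fine-grained steps with the slow expert.
    model.set_adapter("slow")
    for t in torch.linspace(1.0, boundary, slow_steps):
        x = x - model(x, t.view(1, 1)) / slow_steps
    # Late, redundant phase: coarse steps with the fast expert.
    model.set_adapter("fast")
    for t in torch.linspace(boundary, 0.0, fast_steps):
        x = x - model(x, t.view(1, 1)) / fast_steps
    return x


sample = phase_aware_sample(ToyDenoiser())
print(sample.shape)  # torch.Size([1, 16])
```

In practice the same switching pattern could be expressed with any standard LoRA tooling; the key design choice sketched here is that the step budget and the active expert both depend on the denoising phase rather than being uniform across timesteps.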
Community
Thanks for sharing! This is nice and meaningful work. I will follow your work and look forward to communicating with you.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Mixture of Ranks with Degradation-Aware Routing for One-Step Real-World Image Super-Resolution (2025)
- EchoDistill: Bidirectional Concept Distillation for One-Step Diffusion Personalization (2025)
- Flash-DMD: Towards High-Fidelity Few-Step Image Generation with Efficient Distillation and Joint Reinforcement Learning (2025)
- MosaicDiff: Training-free Structural Pruning for Diffusion Model Acceleration Reflecting Pretraining Dynamics (2025)
- Shortcutting Pre-trained Flow Matching Diffusion Models is Almost Free Lunch (2025)
- GRASP: Guided Residual Adapters with Sample-wise Partitioning (2025)
- No MoCap Needed: Post-Training Motion Diffusion Models with Reinforcement Learning using Only Textual Prompts (2025)
