---
license: apache-2.0
library_name: diffusers
---

# TLCM: Training-efficient Latent Consistency Model for Image Generation with 2-8 Steps

<p align="center">
📃 <a href="https://arxiv.org/html/2406.05768v5" target="_blank">Paper</a>
</p>

We propose an innovative two-stage data-free consistency distillation (TDCD) approach to accelerate latent consistency models. The first stage improves the consistency constraint through data-free sub-segment consistency distillation (DSCD), and the second stage enforces global consistency across segments through data-free consistency distillation (DCD). In addition, we explore various techniques, such as distribution matching, adversarial learning, and preference learning, to further promote TLCM's performance in a data-free manner, yielding the Training-efficient Latent Consistency Model (TLCM) with 2-8 step inference.

TLCM is highly flexible: the number of sampling steps can be adjusted anywhere from 2 to 8 while still producing outputs that are competitive with full-step approaches.
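
To make the objective concrete, below is a minimal, self-contained sketch of the generic consistency-distillation loss that both stages build on: the student's prediction at one timestep is matched against an EMA target evaluated one teacher ODE-solver step earlier, with the timestep pair drawn from inside a sub-segment for DSCD and spanning segments for DCD. This is only an illustration under assumptions, not the authors' implementation; the toy denoiser, the linear solver step, and every hyperparameter below are invented placeholders.

```python
# Illustrative sketch only -- NOT TLCM's actual training code.
import torch
import torch.nn as nn

T = 1000  # number of diffusion timesteps (assumption)

class TinyDenoiser(nn.Module):
    """Toy stand-in for the latent U-Net: maps (x_t, t) to a clean-latent estimate."""
    def __init__(self, dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, t.float().unsqueeze(-1) / T], dim=-1))

student = TinyDenoiser()          # online network being distilled
ema_student = TinyDenoiser()      # slowly updated target network
ema_student.load_state_dict(student.state_dict())
teacher = TinyDenoiser()          # placeholder for the frozen pre-trained LDM

def teacher_solver_step(x_t, t, t_prev):
    """One schematic ODE-solver step of the frozen teacher from t down to t_prev."""
    x0 = teacher(x_t, t)
    return x0 + (x_t - x0) * (t_prev.float() / t.float()).unsqueeze(-1)

def consistency_loss(x_t, t, t_prev):
    """Match the student at t to the EMA target one solver step earlier (t_prev)."""
    with torch.no_grad():
        target = ema_student(teacher_solver_step(x_t, t, t_prev), t_prev)
    return torch.mean((student(x_t, t) - target) ** 2)

# One illustrative optimization step on purely synthetic (data-free) latents.
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
x_t = torch.randn(4, 8)                   # random latents, no real images involved
t = torch.randint(100, T, (4,))
opt.zero_grad()
loss = consistency_loss(x_t, t, t - 50)   # (t, t - 50) would lie inside one sub-segment
loss.backward()
opt.step()
```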
|
|
|
## This repository provides the TLCM LoRA for SDXL-base.
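
A minimal inference sketch with 🤗 Diffusers is shown below. The base model is the standard SDXL-base checkpoint; the repository id and LoRA weight filename are placeholders (assumptions), so substitute the actual identifiers and files hosted in this repository.

```python
# Inference sketch -- the repo id and LoRA filename below are placeholders; use the
# actual identifiers/files from this repository.
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Consistency-style sampling uses an LCM scheduler and very few steps.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Load the TLCM LoRA from this repository (filename is a placeholder).
pipe.load_lora_weights("<this-repo-id>", weight_name="tlcm_sdxl_lora.safetensors")
pipe.fuse_lora()

image = pipe(
    prompt="a photo of an astronaut riding a horse on the moon",
    num_inference_steps=4,   # any value from 2 to 8 works
    guidance_scale=1.0,      # classifier-free guidance is typically kept low or off
).images[0]
image.save("tlcm_sample.png")
```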