AIGCer-OPPO committed
Commit 8b4eef7 · verified · Parent(s): e8e1864

Update README.md

Files changed (1): README.md (+25 -1)

README.md CHANGED
@@ -1,4 +1,28 @@
  ---
  license: apache-2.0
  library_name: diffusers
- ---
+ ---
+ # TLCM: Training-efficient Latent Consistency Model for Image Generation with 2-8 Steps
+
+ <p align="center">
+ 📃 <a href="https://arxiv.org/html/2406.05768v5" target="_blank">Paper</a> •
+ 🤗 <a href="https://huggingface.co/OPPOer/TLCM" target="_blank">Checkpoints</a>
+ </p>
+
+ <!-- **TLCM: Training-efficient Latent Consistency Model for Image Generation with 2-8 Steps** -->
+
+ <!-- Our method accelerates LDMs via data-free multistep latent consistency distillation (MLCD), and data-free latent consistency distillation is proposed to efficiently guarantee the inter-segment consistency in MLCD.
+
+ Furthermore, we introduce bags of techniques, e.g., distribution matching, adversarial learning, and preference learning, to enhance TLCM's performance at few-step inference without any real data.
+
+ TLCM demonstrates a high level of flexibility by enabling adjustment of sampling steps within the range of 2 to 8 while still producing competitive outputs compared to full-step approaches. -->
+
+ We propose an innovative two-stage data-free consistency distillation (TDCD) approach to accelerate latent consistency models. The first stage strengthens the consistency constraint through data-free sub-segment consistency distillation (DSCD); the second stage enforces global consistency across segments through data-free consistency distillation (DCD). In addition, we explore various techniques to promote TLCM's performance in a data-free manner, forming the Training-efficient Latent Consistency Model (TLCM) with 2-8 step inference.
+
+ TLCM demonstrates a high level of flexibility: its sampling steps can be adjusted within the range of 2 to 8 while still producing outputs competitive with full-step approaches.
+
+ This repository provides the TLCM LoRA for SDXL-base.