Update README.md

README.md (CHANGED)
*Removed: the default auto-generated Hugging Face model card template (stub headings with "[More Information Needed]" placeholders throughout), replaced by the content below.*

---
license: cc-by-nc-sa-4.0
library_name: diffusers
tags:
- pytorch
- diffusers
- unconditional-image-generation
- wavelet-diffusion
- U-KAN
---
11 |
|
12 |
+
# A Wavelet Diffusion Framework for Accelerated Generative Modeling with Lightweight Denoisers
|
13 |
|
14 |
+
**Authors**: Markos Aivazoglou-Vounatsos, Mostafa Mehdipour Ghazi
|
15 |
|
16 |
+
**Abstract**
|
17 |
+
Denoising diffusion models have emerged as a powerful class of deep generative models, yet they remain computationally demanding due to their iterative nature and high-dimensional input space. In this work, we propose a novel framework that integrates wavelet decomposition into diffusion-based generative models to reduce spatial redundancy and improve training and sampling efficiency. By operating in the wavelet domain, our approach enables a compact multiresolution representation of images, facilitating faster convergence and more efficient inference with minimal architectural modifications. We assess this framework using UNets and UKANs as denoising backbones across multiple diffusion models and benchmark datasets. Our experiments show that a 1-level wavelet decomposition achieves a speedup of up to three times in training, with competitive Fréchet Inception Distance (FID) scores. We further demonstrate that KAN-based architectures offer lightweight alternatives to convolutional backbones, enabling parameter-efficient generation. In-depth analysis of sampling dynamics, including the impact of implicit configurations and wavelet depth, reveals trade-offs between speed, quality, and resolution-specific sensitivity. These findings offer practical insights into the design of efficient generative models and highlight the potential of frequency-domain learning for future generative modeling research.
|
18 |
|
19 |
+
Source code available at https://github.com/markos-aivazoglou/wavelet-diffusion.

## Architecture Overview

<img src="wddpm-diffusion-new.png" alt="Architecture Overview" width="100%">

*Figure 1: Overview of the Wavelet Diffusion Model (WDDM) architecture. The model operates in the wavelet domain, leveraging wavelet decomposition to reduce spatial redundancy and improve training efficiency. The denoising backbone can be a UNet or a KAN-based architecture, allowing for flexible and efficient generative modeling.*
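
To make the core idea concrete, here is a minimal sketch of a 1-level 2D wavelet decomposition using PyWavelets: each H×W channel becomes four H/2×W/2 subbands, so a 3×32×32 image turns into a 12×16×16 tensor for the denoiser to operate on. The wavelet family (`haar`) and the channel packing shown here are illustrative assumptions; see the GitHub repository for the actual transform used.

```python
import numpy as np
import pywt

image = np.random.rand(3, 32, 32).astype(np.float32)  # CHW image, e.g. CIFAR-10 size

# 1-level DWT per channel: each HxW channel yields four (H/2)x(W/2) subbands
# (approximation cA plus horizontal/vertical/diagonal details cH, cV, cD).
subbands = []
for channel in image:
    cA, (cH, cV, cD) = pywt.dwt2(channel, "haar")
    subbands.extend([cA, cH, cV, cD])

wavelet_tensor = np.stack(subbands)
print(wavelet_tensor.shape)  # (12, 16, 16): 4x the channels, half the spatial size

# The inverse transform reconstructs the original image (up to float precision).
recon = np.stack([
    pywt.idwt2((wavelet_tensor[4 * i], tuple(wavelet_tensor[4 * i + 1:4 * i + 4])), "haar")
    for i in range(3)
])
assert np.allclose(recon, image, atol=1e-5)
```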

## 🚀 Key Features

- **Efficient Training**: Up to 3× faster training than standard pixel-space diffusion models
- **Wavelet-Based Compression**: Operates in the wavelet domain to reduce spatial redundancy
- **Multiple Architectures**: Supports multiple denoising backbones, such as UNet and U-KAN
- **Flexible Framework**: Compatible with DDPM, DDIM, and other standard diffusion solvers (see the sketch after this list)
- **Multi-Dataset Support**: Evaluated on CIFAR-10, CelebA-HQ, and STL-10
- **Parameter Efficiency**: Significantly fewer model parameters while maintaining quality
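
As a rough sketch of what sampling could look like through 🤗 diffusers, assuming the checkpoint is packaged as a standard `DDPMPipeline` (the repository id and pipeline class below are hypothetical, not confirmed by this card):

```python
import torch
from diffusers import DDPMPipeline, DDIMScheduler

# Hypothetical repo id: replace with this model's actual Hub path.
pipe = DDPMPipeline.from_pretrained("markos-aivazoglou/wavelet-diffusion")
pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# Standard DDPM (ancestral) sampling.
image = pipe(num_inference_steps=1000).images[0]

# Swap in a DDIM scheduler for faster, fewer-step sampling.
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
image = pipe(num_inference_steps=50).images[0]
image.save("sample.png")
```

Note that a wavelet-domain denoiser also needs an inverse wavelet transform to map samples back to pixel space; consult the GitHub repository for the exact sampling entry point.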

## 📊 Datasets

The framework supports three main datasets:

1. **CIFAR-10**: 32×32 natural images (60,000 samples)
2. **CelebA-HQ**: 256×256 facial images (30,000 samples)
3. **STL-10**: 64×64 natural images (100,000 samples)

CIFAR-10 and STL-10 are downloaded automatically on first use (see the sketch below). CelebA-HQ must be downloaded manually and placed in the `data/celeba-hq` directory; it is available from [CelebA-HQ](https://www.kaggle.com/datasets/badasstechie/celebahq-resized-256x256/data).
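
For reference, a minimal sketch of how the automatic downloads typically work with the standard torchvision dataset classes (the repository's own data-loading code may differ):

```python
from torchvision import datasets

# Downloaded automatically on first use.
cifar10 = datasets.CIFAR10(root="data", train=True, download=True)     # 32x32; train split holds 50,000 of the 60,000 images
stl10 = datasets.STL10(root="data", split="unlabeled", download=True)  # 64x64; 100,000 unlabeled images

# CelebA-HQ is not downloaded automatically: fetch it from the Kaggle
# link above and place the images under data/celeba-hq.
```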

## 📄 License

This project is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license (CC BY-NC-SA 4.0) - see the [LICENSE](LICENSE) file for details.

## 📚 Citation

TBA

## 👥 Authors

- **Markos Aivazoglou-Vounatsos** - Pioneer Centre for AI, University of Copenhagen
- **Mostafa Mehdipour Ghazi** - Pioneer Centre for AI, University of Copenhagen

## 📞 Contact

For questions, contact the authors at `[email protected]` or `[email protected]`.