The Transformer models in this repository are licensed under the MIT License.

## Acknowledgements

- The VAE component is from `FLUX.1 [schnell]`, licensed under Apache 2.0.
- The text encoders are from `google/t5-v1_1-xxl` (licensed under Apache 2.0) and `meta-llama/Meta-Llama-3.1-8B-Instruct` (licensed under the Llama 3.1 Community License Agreement).

## Citation

```bibtex
@article{hidreami1technicalreport,
  title={HiDream-I1: A High-Efficient Image Generative Foundation Model with Sparse Diffusion Transformer},
  author={Cai, Qi and Chen, Jingwen and Chen, Yang and Li, Yehao and Long, Fuchen and Pan, Yingwei and Qiu, Zhaofan and Zhang, Yiheng and Gao, Fengbin and Xu, Peihan and others},
  journal={arXiv preprint arXiv:2505.22705},
  year={2025}
}
```