---
license: apache-2.0
language:
- multilingual
base_model:
- black-forest-labs/FLUX.1-dev
- OpenGVLab/InternVL2_5-1B
- OpenGVLab/InternVL2_5-4B
- openbmb/MiniCPM-o-2_6
- Qwen/Qwen2.5-7B-Instruct
- Qwen/Qwen2.5-3B-Instruct
tags:
- flux.1
- minicpm-o
- qwenvl
- internvl
- text-to-image
- multi-image-to-image
- video-to-image
- text_image-to-image
- audio-to-image
- speech-to-image
pipeline_tag: text-to-image
---
<div align="center">
<a href="https://export.arxiv.org/abs/2503.06134">πŸ“œ X2I Paper </a>
<a href="https://github.com/OPPO-Mente-Lab/X2I">🌐 Github </a>
</div>
> **X2I: Seamless Integration of Multimodal Understanding into Diffusion Transformer via Attention Distillation**
<div align="center">
<img src="versatile.png">
</div>
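
## Usage

X2I's own inference scripts live in the [GitHub repository](https://github.com/OPPO-Mente-Lab/X2I). As a minimal sketch only, the FLUX.1-dev base model listed above can be loaded with Hugging Face `diffusers` as shown below; the X2I-specific weights from this repository are then integrated by following the instructions in the GitHub repo (the sketch below is not the X2I pipeline itself).

```python
# Minimal sketch: load the FLUX.1-dev base pipeline with diffusers.
# This is only the starting point; the X2I-specific components are
# loaded via the scripts in the X2I GitHub repository.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = pipe(
    "a cat holding a sign that says hello world",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux_base.png")
```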
## Citation
🌟 If you find our work helpful, please consider citing our paper and starring our GitHub repository.
```bibtex
@misc{ma2025x2i,
  title={X2I: Seamless Integration of Multimodal Understanding into Diffusion Transformer via Attention Distillation},
  author={Jian Ma and Qirong Peng and Xu Guo and Chen Chen and Haonan Lu and Zhenyu Yang},
  year={2025},
  eprint={2503.06134},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
## License
This model is released under the [Apache 2.0 License](LICENSE).