---
license: apache-2.0
language:
- en
base_model:
- black-forest-labs/FLUX.1-dev
library_name: transformers
pipeline_tag: image-to-image
tags:
- image-generation
- subject-personalization
- style-transfer
- Diffusion-Transformer
---

# USO: Unified Style and Subject-Driven Generation via Disentangled and Reward Learning


![teaser of USO](./assets/teaser.webp)

## 📖 Introduction

Existing literature typically treats style-driven and subject-driven generation as two disjoint tasks: the former prioritizes stylistic similarity, whereas the latter insists on subject consistency, resulting in an apparent antagonism. We argue that both objectives can be unified under a single framework because they ultimately concern the disentanglement and re-composition of “content” and “style”, a long-standing theme in style-driven research. To this end, we present USO, a Unified framework for Style-driven and subject-driven GeneratiOn. First, we construct a large-scale triplet dataset consisting of content images, style images, and their corresponding stylized content images. Second, we introduce a disentangled learning scheme that simultaneously aligns style features and disentangles content from style through two complementary objectives: style-alignment training and content–style disentanglement training. Third, we incorporate a style reward-learning paradigm to further enhance the model’s performance.

## ⚡️ Quick Start

### 🔧 Requirements and Installation

Clone our [GitHub repo](https://github.com/bytedance/UNO) and install the requirements:

```bash
## create a virtual environment with python >= 3.10 and <= 3.12, e.g.
# python -m venv uso_env
# source uso_env/bin/activate
# then install the requirements
pip install -r requirements.txt
```

Then download the checkpoints in one of three ways:

1. Run the inference scripts directly; the checkpoints will be downloaded automatically by the `hf_hub_download` function in the code to your `$HF_HOME` (default: `~/.cache/huggingface`).
2. Use `huggingface-cli download` to download `black-forest-labs/FLUX.1-dev`, `xlabs-ai/xflux_text_encoders`, `openai/clip-vit-large-patch14`, and `TODO UNO hf model`, then run the inference scripts.
3. Use `huggingface-cli download --local-dir` to download all the checkpoints mentioned in 2 to the directories you want, set the environment variable `TODO`, and run the inference scripts (see the sketch below).
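For reference, here is a minimal sketch of options 2 and 3 using the standard `huggingface-cli` tool. The repository IDs are the ones listed above; the USO checkpoint repository and the environment variable for local checkpoint directories are still marked `TODO` in this card and are left as placeholders, and the `./ckpts/` target directory is only an example.

```bash
# Option 2 (sketch): download into the default cache ($HF_HOME, ~/.cache/huggingface by default)
huggingface-cli login                                   # FLUX.1-dev is gated, so you may need to authenticate first
huggingface-cli download black-forest-labs/FLUX.1-dev
huggingface-cli download xlabs-ai/xflux_text_encoders
huggingface-cli download openai/clip-vit-large-patch14
# ...plus the USO checkpoint repo (marked TODO above) once it is available

# Option 3 (sketch): download into directories of your choice instead
huggingface-cli download black-forest-labs/FLUX.1-dev --local-dir ./ckpts/flux-dev
# repeat for the other repos, then set the environment variable mentioned above (TODO)

# Optional: relocate the cache used by options 1 and 2
# export HF_HOME=/path/to/your/cache
```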

### 🌟 Gradio Demo

```bash
python app.py
```

## 📄 Disclaimer
We open-source this project for academic research. The vast majority of images used in this project are either generated or come from open-source datasets. If you have any concerns, please contact us, and we will promptly remove any inappropriate content. Our project is released under the Apache 2.0 License. If you apply it to other base models, please ensure that you comply with their original licensing terms.

This research aims to advance the field of generative AI. Users are free to create images using this tool, provided they comply with local laws and exercise responsible usage. The developers are not liable for any misuse of the tool by users.

## Citation

If you find this project useful for your research, please consider citing our paper. We would also appreciate a star ⭐ on our [GitHub repository](https://github.com/bytedance/USO). Thanks a lot!

```bibtex
@article{wu2025uso,
  title={USO: Unified Style and Subject-Driven Generation via Disentangled and Reward Learning},
  author={Shaojin Wu and Mengqi Huang and Yufeng Cheng and Wenxu Wu and Jiahe Tian and Yiming Luo and Fei Ding and Qian He},
  year={2025},
  eprint={2508.18966},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
}
```