|
--- |
|
license: cc-by-nc-4.0 |
|
task_categories: |
|
- text-to-image |
|
- image-to-image |
|
language: |
|
- en |
|
tags: |
|
- image-generation |
|
- image-to-image |
|
- Security |
|
pretty_name: OriPID |
|
size_categories: |
|
- 1M<n<10M |
|
--- |
|
|
|
# Summary |
|
This is the dataset proposed in our paper [**Origin Identification for Text-Guided Image-to-Image Diffusion Models**](https://arxiv.org/abs/2501.02376) (ICML 2025). |
|
|
|
<p align="center"> |
|
<img src="https://huggingface.co/datasets/WenhaoWang/OriPID/resolve/main/assets/teasor.png" width="1000"> |
|
</p> |
|
|
|
|
|
# Download |
|
|
|
## Training |
|
|
|
You can download the training images, which are split into ten tar parts:
|
``` |
|
wget https://huggingface.co/datasets/WenhaoWang/OriPID/resolve/main/training/sd2_d_multi.tar.part_0{0..9} |
|
cat sd2_d_multi.tar.part_* > sd2_d_multi.tar |
|
tar -xvf sd2_d_multi.tar |
|
``` |
|
|
|
Alternatively, you can directly download the features extracted by the VAE of Stable Diffusion 2:
|
``` |
|
wget https://huggingface.co/datasets/WenhaoWang/OriPID/resolve/main/training/sd2_d_multi_feature.tar |
|
tar -xvf sd2_d_multi_feature.tar |
|
``` |
|
|
|
The features are extracted as follows:
|
```python |
|
# pip install torch torchvision torchaudio transformers diffusers accelerate |
|
from diffusers import AutoPipelineForImage2Image |
|
import torchvision |
|
import torch |
|
from PIL import Image |
|
import requests |
|
|
|
pipeline = AutoPipelineForImage2Image.from_pretrained("stabilityai/stable-diffusion-2", torch_dtype=torch.float32, variant="fp16", use_safetensors=True) |
|
vae = pipeline.vae |
|
|
|
transforms = torchvision.transforms.Compose([ |
|
torchvision.transforms.Resize((256, 256)), |
|
torchvision.transforms.ToTensor(), |
|
torchvision.transforms.Normalize([0.5], [0.5]), |
|
]) |
|
|
|
url = "https://huggingface.co/datasets/WenhaoWang/AnyPattern/resolve/main/Irises.jpg" |
|
image = Image.open(requests.get(url, stream=True).raw) |
|
latents = vae.encode(transforms(image).unsqueeze(0)).latent_dist.sample() # torch.Size([1, 4, 32, 32]) |
|
features = latents.reshape(len(latents), -1) # torch.Size([1, 4096]) |
|
``` |
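
To reproduce the provided features over a whole image folder, a minimal batched sketch is shown below. It reuses `vae` and `transforms` from the snippet above; the folder name `sd2_d_multi` and the output file `features.pt` are illustrative assumptions rather than part of the release.

```python
# A minimal sketch, not the official extraction script.
import glob

import torch
from PIL import Image

# Collect all image files under the (assumed) extraction folder.
image_paths = sorted(
    p for p in glob.glob("sd2_d_multi/**/*", recursive=True)
    if p.lower().endswith((".png", ".jpg", ".jpeg"))
)

all_features = []
with torch.no_grad():  # gradients are not needed for feature extraction
    for path in image_paths:
        image = Image.open(path).convert("RGB")
        latents = vae.encode(transforms(image).unsqueeze(0)).latent_dist.sample()
        all_features.append(latents.reshape(len(latents), -1))

features = torch.cat(all_features)   # shape: (num_images, 4096)
torch.save(features, "features.pt")  # illustrative output path
```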
|
|
|
|
|
|
|
## Query |
|
|
|
You can download the query images:

```
|
wget https://huggingface.co/datasets/WenhaoWang/OriPID/resolve/main/query/colorful.tar |
|
wget https://huggingface.co/datasets/WenhaoWang/OriPID/resolve/main/query/kk.tar |
|
wget https://huggingface.co/datasets/WenhaoWang/OriPID/resolve/main/query/kolor.tar |
|
wget https://huggingface.co/datasets/WenhaoWang/OriPID/resolve/main/query/opendalle.tar |
|
wget https://huggingface.co/datasets/WenhaoWang/OriPID/resolve/main/query/sd2.tar |
|
wget https://huggingface.co/datasets/WenhaoWang/OriPID/resolve/main/query/sd3.tar |
|
wget https://huggingface.co/datasets/WenhaoWang/OriPID/resolve/main/query/sdxl.tar |
|
``` |
|
|
|
Then extract them:

```
|
tar -xvf colorful.tar |
|
tar -xvf kk.tar |
|
tar -xvf kolor.tar |
|
tar -xvf opendalle.tar |
|
tar -xvf sd2.tar |
|
tar -xvf sd3.tar |
|
tar -xvf sdxl.tar |
|
``` |
|
|
|
## Reference |
|
You can download and extract the reference images:

```
|
wget https://huggingface.co/datasets/WenhaoWang/AnyPattern/resolve/main/reference/references_{0..19}.zip |
|
for z in references_*.zip; do unzip "$z"; done
|
mv images/references reference_images |
|
``` |
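
Once VAE features have been extracted for both the query and the reference images (with the same code shown in the Training section), origin identification can be evaluated as a retrieval problem. The sketch below is only an illustrative nearest-neighbor baseline using cosine similarity on raw VAE features; it is not the learned transformation proposed in the paper, and the tensor shapes are assumptions.

```python
# An illustrative nearest-neighbor baseline, not the method from the paper.
import torch
import torch.nn.functional as F

# Placeholders: replace with features extracted from the real query / reference images.
query_features = torch.randn(8, 4096)
reference_features = torch.randn(1000, 4096)

# L2-normalize so the dot product equals cosine similarity.
q = F.normalize(query_features, dim=1)
r = F.normalize(reference_features, dim=1)

similarity = q @ r.T                         # shape: (num_queries, num_references)
predicted_origin = similarity.argmax(dim=1)  # index of the most similar reference for each query
print(predicted_origin)
```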
|
|
|
# License |
|
The dataset is licensed under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/deed.en). For commercial uses, please email [email protected]. |
|
|
|
# Citation |
|
``` |
|
@inproceedings{wang2025origin,
|
title={Origin Identification for Text-Guided Image-to-Image Diffusion Models}, |
|
author={Wang, Wenhao and Sun, Yifan and Yang, Zongxin and Tan, Zhentao and Hu, Zhengdong and Yang, Yi}, |
|
booktitle={Forty-second International Conference on Machine Learning},
|
year={2025}, |
|
url={https://openreview.net/forum?id=46n3izUNiv} |
|
} |
|
``` |
|
|
|
# Contact |
|
|
|
If you have any questions, feel free to contact Wenhao Wang ([email protected]). |
|
|
|
|