This model repo hosts the weights for AnyStory.

AnyStory is a unified approach to personalized subject generation. It achieves high-fidelity personalization not only for a single subject but also for multiple subjects, without sacrificing subject fidelity.
## News
- [2025/05/01] We release the code and demo for the [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) version of AnyStory.
## Usage

```python
import torch
from PIL import Image
from huggingface_hub import hf_hub_download
from anystory.generate import AnyStoryFluxPipeline
anystory_path = hf_hub_download(repo_id="Junjie96/AnyStory", filename="anystory_flux.bin")
story_pipe = AnyStoryFluxPipeline(
    hf_flux_pipeline_path="black-forest-labs/FLUX.1-dev",
    hf_flux_redux_path="black-forest-labs/FLUX.1-Redux-dev",
    anystory_path=anystory_path,
    device="cuda",
    torch_dtype=torch.bfloat16
)
# Optionally load LoRA weights here (see the sketch after this code block):
# story_pipe.flux_pipeline.load_lora_weights(lora_path, adapter_name="...")
# single-subject
subject_image = Image.open("assets/examples/1.webp").convert("RGB")
subject_mask = Image.open("assets/examples/1_mask.webp").convert("L")
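# Note: if you only have a subject image without a mask, an off-the-shelf
# matting/segmentation tool can generate one. Optional sketch (uses the
# third-party `rembg` package, which is not part of AnyStory):
#     from rembg import remove
#     subject_mask = remove(subject_image, only_mask=True).convert("L")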
prompt = "Cartoon style. A sheep is riding a skateboard and gliding through the city," \
" holding a wooden sign that says \"hello\"."
image = story_pipe.generate(prompt=prompt, images=[subject_image], masks=[subject_mask], seed=2025,
num_inference_steps=25, height=512, width=512,
guidance_scale=3.5)
image.save("output_1.png")
# multi-subject
subject_image_1 = Image.open("assets/examples/6_1.webp").convert("RGB")
subject_mask_1 = Image.open("assets/examples/6_1_mask.webp").convert("L")
subject_image_2 = Image.open("assets/examples/6_2.webp").convert("RGB")
subject_mask_2 = Image.open("assets/examples/6_2_mask.webp").convert("L")
prompt = "Two men are sitting by a wooden table, which is laden with delicious food and a pot of wine. " \
"One of the men holds a wine glass, drinking heartily with a bold expression; " \
"the other smiles as he pours wine for his companion, both of them engaged in cheerful conversation. " \
"In the background is an ancient pavilion surrounded by emerald bamboo groves, with sunlight filtering " \
"through the leaves to cast dappled shadows."
image = story_pipe.generate(prompt=prompt,
images=[subject_image_1, subject_image_2],
masks=[subject_mask_1, subject_mask_2],
seed=2025,
enable_router=True, ref_start_at=0.09,
num_inference_steps=25, height=512, width=512,
guidance_scale=3.5)
image.save("output_2.png")
## Storyboard generation

```python
import json
from storyboard import StoryboardPipeline
storyboard_pipe = StoryboardPipeline()
script_dict = json.load(open("assets/scripts/013420.json"))
print(script_dict)
results = storyboard_pipe(script_dict, style_name="Comic book")
for key, result in results.items():
    result.save(f"output_1_{key}.png")
# another story script
script_dict = json.load(open("assets/scripts/014933.json"))
print(script_dict)
results = storyboard_pipe(script_dict, style_name="Japanese Anime")
for key, result in results.items():
    result.save(f"output_2_{key}.png")
```
Example output:
## Applications

- Intelligent creation of AI story pictures with Qwen Agent (please refer to `storyboard.py`).
- AI animation video production with Wan Image-to-Video (a minimal sketch is given below).
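For the image-to-video application, an AnyStory output frame can be fed to a Wan image-to-video pipeline as the starting image. A minimal sketch using the diffusers `WanImageToVideoPipeline`; the checkpoint id, resolution, and motion prompt below are illustrative assumptions rather than part of AnyStory:

```python
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Illustrative Wan 2.1 image-to-video checkpoint in diffusers format.
wan_pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")

first_frame = load_image("output_1.png")  # an AnyStory result used as the first frame
video_frames = wan_pipe(
    image=first_frame,
    prompt="Cartoon style. The sheep skates forward through the city street.",
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(video_frames, "output_1.mp4", fps=16)
```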
## Acknowledgements

This code is built on diffusers and OminiControl. We highly appreciate their great work!
## Cite

```bibtex
@article{he2025anystory,
  title={AnyStory: Towards Unified Single and Multiple Subject Personalization in Text-to-Image Generation},
  author={He, Junjie and Tuo, Yuxiang and Chen, Binghui and Zhong, Chongyang and Geng, Yifeng and Bo, Liefeng},
  journal={arXiv preprint arXiv:2501.09503},
  year={2025}
}
```