Generative Refocusing: Flexible Defocus Control from a Single Image
Generative Refocusing, presented in the paper Generative Refocusing: Flexible Defocus Control from a Single Image, is a two-stage approach to depth-of-field control from a single image: DeblurNet recovers an all-in-focus image from inputs with varying amounts of defocus, and BokehNet then synthesizes controllable bokeh on top of it. Training is semi-supervised, combining synthetic paired data with unpaired real bokeh photographs, and the method achieves state-of-the-art results on defocus deblurring, bokeh synthesis, and refocusing benchmarks while supporting text-guided adjustments and custom aperture shapes.
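At inference time, refocusing is conceptually two stages: DeblurNet first removes the input's existing defocus, then BokehNet re-renders the sharp image with the desired depth of field. The sketch below illustrates that flow; the loader functions and keyword arguments are hypothetical placeholders for illustration, not the repository's actual API.
# Conceptual two-stage flow. The loaders and keyword arguments below are
# hypothetical placeholders, not the repository's real API.
from PIL import Image

deblur_net = load_deblur_net("deblurNet.safetensors")  # hypothetical loader
bokeh_net = load_bokeh_net("bokehNet.safetensors")     # hypothetical loader

image = Image.open("input.jpg")
all_in_focus = deblur_net(image)          # stage 1: remove existing defocus
refocused = bokeh_net(
    all_in_focus,
    focus_depth=0.4,                      # hypothetical: focal-plane placement
    aperture="hexagon",                   # hypothetical: custom aperture shape
    prompt="strong creamy bokeh",         # hypothetical: text-guided control
)
refocused.save("refocused.jpg")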
⚡ Quick Start
Follow the steps below to set up the environment and run the inference demo.
1. Installation
Clone the repository:
git clone git@github.com:rayray9999/Genfocus.git
cd Genfocus
Environment setup:
conda create -n Genfocus python=3.12
conda activate Genfocus
Install requirements:
pip install -r requirements.txt
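Optionally, a quick sanity check can confirm the environment before downloading weights. This assumes requirements.txt installs PyTorch:
# Environment sanity check; assumes requirements.txt installs PyTorch.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))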
2. Download Weights
You can download the pre-trained models using the following commands. Ensure you are in the Genfocus root directory.
# 1. Download main models to the root directory
wget https://huggingface.co/nycu-cplab/Genfocus-Model/resolve/main/bokehNet.safetensors
wget https://huggingface.co/nycu-cplab/Genfocus-Model/resolve/main/deblurNet.safetensors
# 2. Setup checkpoints directory and download auxiliary model
mkdir -p checkpoints
cd checkpoints
wget https://huggingface.co/nycu-cplab/Genfocus-Model/resolve/main/checkpoints/depth_pro.pt
cd ..
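To confirm the two main checkpoints downloaded intact, they can be opened directly with the safetensors library. This assumes the safetensors package is available in your environment (otherwise: pip install safetensors):
# Verify the downloaded checkpoints open cleanly; assumes the
# safetensors package is installed (pip install safetensors).
from safetensors import safe_open

for path in ("bokehNet.safetensors", "deblurNet.safetensors"):
    with safe_open(path, framework="pt") as f:
        keys = list(f.keys())
        print(f"{path}: {len(keys)} tensors, e.g. {keys[0]}")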
3. Run Gradio Demo
Launch the interactive web interface locally:
Note: The project uses FLUX.1-dev, a gated model on Hugging Face. You must request access and authenticate locally before running the demo.
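One way to authenticate is huggingface-cli login from the shell; the equivalent programmatic call is shown below:
# Log in to Hugging Face so the gated FLUX.1-dev weights can be fetched;
# create an access token at https://huggingface.co/settings/tokens first.
from huggingface_hub import login

login()  # prompts for the token interactively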
python demo.py
The demo will be accessible at http://127.0.0.1:7860 in your browser.
Citation
If you find this project useful for your research, please consider citing:
@article{Genfocus2025,
  title={Generative Refocusing: Flexible Defocus Control from a Single Image},
  author={Tuan Mu, Chun-Wei and Huang, Jia-Bin and Liu, Yu-Lun},
  journal={arXiv preprint arXiv:2512.16923},
  year={2025}
}