# ControlVideo
Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation".
[![arXiv](https://img.shields.io/badge/arXiv-2305.13077-b31b1b.svg)](https://arxiv.org/abs/2305.13077)
![visitors](https://visitor-badge.laobi.icu/badge?page_id=YBYBZhang/ControlVideo)
[![Replicate](https://replicate.com/cjwbw/controlvideo/badge)](https://replicate.com/cjwbw/controlvideo)
ControlVideo adapts ControlNet to the video domain without any fine-tuning, aiming to directly inherit ControlNet's high-quality and consistent generation.
## News
* [05/28/2023] Thanks to [chenxwh](https://github.com/chenxwh) for adding a [Replicate demo](https://replicate.com/cjwbw/controlvideo)!
* [05/25/2023] The [ControlVideo](https://github.com/YBYBZhang/ControlVideo/) code is released!
* [05/23/2023] The [ControlVideo](https://arxiv.org/abs/2305.13077) paper is released!
## Setup
### 1. Download Weights
All pre-trained weights should be downloaded to the `checkpoints/` directory, including the weights of [Stable Diffusion v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5) and of ControlNet conditioned on [canny edges](https://huggingface.co/lllyasviel/sd-controlnet-canny), [depth maps](https://huggingface.co/lllyasviel/sd-controlnet-depth), and [human poses](https://huggingface.co/lllyasviel/sd-controlnet-openpose).
`flownet.pkl` contains the weights of [RIFE](https://github.com/megvii-research/ECCV2022-RIFE).
The final file tree should look like this:
```none
checkpoints
├── stable-diffusion-v1-5
├── sd-controlnet-canny
├── sd-controlnet-depth
├── sd-controlnet-openpose
├── flownet.pkl
```
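If you prefer to script the download, the Hugging Face weights can be fetched with `git lfs` as sketched below (any download method that produces the tree above works); `flownet.pkl` must be obtained separately from the RIFE repository:
```shell
# Requires git-lfs (https://git-lfs.com) to pull the large model files
git lfs install

# Clone each weight repository into checkpoints/
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 checkpoints/stable-diffusion-v1-5
git clone https://huggingface.co/lllyasviel/sd-controlnet-canny checkpoints/sd-controlnet-canny
git clone https://huggingface.co/lllyasviel/sd-controlnet-depth checkpoints/sd-controlnet-depth
git clone https://huggingface.co/lllyasviel/sd-controlnet-openpose checkpoints/sd-controlnet-openpose

# flownet.pkl: download manually from https://github.com/megvii-research/ECCV2022-RIFE
```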
### 2. Requirements
```shell
conda create -n controlvideo python=3.10
conda activate controlvideo
pip install -r requirements.txt
```
Installing `xformers` is recommended to reduce memory usage and running time.
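For example (the exact version constraint depends on your CUDA and PyTorch setup):
```shell
pip install xformers
```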
## Inference
To perform text-to-video generation, run the following command (also provided in `inference.sh`):
```bash
python inference.py \
--prompt "A striking mallard floats effortlessly on the sparkling pond." \
--condition "depth" \
--video_path "data/mallard-water.mp4" \
--output_path "outputs/" \
--video_length 15 \
--smoother_steps 19 20 \
--width 512 \
--height 512 \
# --is_long_video
```
where `--video_length` is the length of the synthesized video, `--condition` specifies the type of structure sequence,
`--smoother_steps` determines the timesteps at which smoothing is performed, and `--is_long_video` enables efficient long-video synthesis.
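As a sketch of long-video synthesis, the same command can be re-run with `--is_long_video` enabled; the prompt, input video path, and video length below are illustrative, not files shipped with the repository:
```shell
python inference.py \
    --prompt "A steamship on the ocean, at sunset, sketch style." \
    --condition "depth" \
    --video_path "data/steamship.mp4" \
    --output_path "outputs/" \
    --video_length 100 \
    --smoother_steps 19 20 \
    --width 512 \
    --height 512 \
    --is_long_video
```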
## Visualizations
### ControlVideo on depth maps
Sample prompts:
* "A charming flamingo gracefully wanders in the calm and serene water, its delicate neck curving into an elegant shape."
* "A striking mallard floats effortlessly on the sparkling pond."
* "A gigantic yellow jeep slowly turns on a wide, smooth road in the city."
* "A sleek boat glides effortlessly through the shimmering river, van gogh style."
* "A majestic sailing boat cruises along the vast, azure sea."
* "A contented cow ambles across the dewy, verdant pasture."
### ControlVideo on canny edges
Sample prompts:
* "A young man riding a sleek, black motorbike through the winding mountain roads."
* "A white swan moving on the lake, cartoon style."
* "A dusty old jeep was making its way down the winding forest road, creaking and groaning with each bump and turn."
* "A shiny red jeep smoothly turns on a narrow, winding road in the mountains."
* "A majestic camel gracefully strides across the scorching desert sands."
* "A fit man is leisurely hiking through a lush and verdant forest."
### ControlVideo on human poses
Sample prompts:
* "James bond moonwalk on the beach, animation style."
* "Goku in a mountain range, surreal style."
* "Hulk is jumping on the street, cartoon style."
* "A robot dances on a road, animation style."
### Long video generation
Sample prompts:
* "A steamship on the ocean, at sunset, sketch style."
* "Hulk is dancing on the beach, cartoon style."
## Citation
If you make use of our work, please cite our paper.
```bibtex
@article{zhang2023controlvideo,
title={ControlVideo: Training-free Controllable Text-to-Video Generation},
author={Zhang, Yabo and Wei, Yuxiang and Jiang, Dongsheng and Zhang, Xiaopeng and Zuo, Wangmeng and Tian, Qi},
journal={arXiv preprint arXiv:2305.13077},
year={2023}
}
```
## Acknowledgement
This repository borrows heavily from [Diffusers](https://github.com/huggingface/diffusers), [ControlNet](https://github.com/lllyasviel/ControlNet), [Tune-A-Video](https://github.com/showlab/Tune-A-Video), and [RIFE](https://github.com/megvii-research/ECCV2022-RIFE).
There are also many interesting works on video generation: [Tune-A-Video](https://github.com/showlab/Tune-A-Video), [Text2Video-Zero](https://github.com/Picsart-AI-Research/Text2Video-Zero), [Follow-Your-Pose](https://github.com/mayuelala/FollowYourPose), and [Control-A-Video](https://github.com/Weifeng-Chen/control-a-video), among others.