arlaz committed · Commit b3d2fd4 (verified) · Parent: aa87a32

Add README and model card

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+assets/onlyflow_example_1.gif filter=lfs diff=lfs merge=lfs -text
+assets/onlyflow_example_2.gif filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,61 @@
---
language:
- "en"
tags:
- video
- onlyflow
license: apache-2.0
datasets:
- TempoFunk/webvid-10M
base_model:
- stable-diffusion-v1-5/stable-diffusion-v1-5
pipeline_tag: text-to-video
library_name: diffusers
---
# OnlyFlow - Obvious Research

![](assets/obvious_research.png)

## Model for [OnlyFlow: Optical Flow based Motion Conditioning for Video Diffusion Models](https://arxiv.org/pdf/2411.10501)

![](assets/onlyflow.png)

## Code, available on [Github](https://huggingface.co/obvious-research/OnlyFlow)

## Model weight release, on [HuggingFace](https://huggingface.co/obvious-research/OnlyFlow)

We release the model weights of our best training run in fp32 and fp16, in both ckpt and safetensors formats.
The model was trained on the WebVid-10M dataset on a node of 8 A100 GPUs for 20 hours.
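
As a quick sanity check on the downloaded weights, the safetensors file can be inspected with the `safetensors` library. The file name below is an assumption; substitute the actual file shipped in this repository.

```python
# List a few parameter names and shapes from the released checkpoint.
# "onlyflow_fp16.safetensors" is a placeholder file name, not the published one.
from safetensors.torch import load_file

state_dict = load_file("onlyflow_fp16.safetensors")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```
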
Here are examples of videos created by the model, showing from left to right:
- input
- optical flow extracted from the input (flow estimation is sketched below, after the example clips)
- output with the OnlyFlow influence parameter gamma set to 0.0
- output with gamma = 0.5
- output with gamma = 0.75
- output with gamma = 1.0

![](assets/onlyflow_example_1.gif)
![](assets/onlyflow_example_2.gif)
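
The optical flow column above comes from an off-the-shelf flow estimator. This card does not state which estimator OnlyFlow uses, so the sketch below relies on torchvision's RAFT purely as an illustration of how a flow field can be extracted from two consecutive frames.

```python
# Hedged illustration only: OnlyFlow's actual flow extractor may differ.
import torch
from torchvision.models.optical_flow import raft_small, Raft_Small_Weights

weights = Raft_Small_Weights.DEFAULT
model = raft_small(weights=weights).eval()
preprocess = weights.transforms()

# Two consecutive RGB frames in [0, 1], shape (N, 3, H, W); H and W must be multiples of 8.
frame1 = torch.rand(1, 3, 360, 640)
frame2 = torch.rand(1, 3, 360, 640)
frame1, frame2 = preprocess(frame1, frame2)

with torch.no_grad():
    flow_predictions = model(frame1, frame2)  # list of iteratively refined flow fields
flow = flow_predictions[-1]                   # final estimate, shape (N, 2, H, W)
```
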
## Usage for inference

You can use the gradio interface on our [HuggingFace Space](https://huggingface.co/spaces/obvious-research/OnlyFlow).
You can also use the inference script in our [Github repository](https://github.com/obvious-research/onlylow) to test the released model.
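
For orientation, the released weights target an AnimateDiff setup built on Stable Diffusion 1.5. The sketch below only loads that base pipeline with diffusers; it does not wire in the OnlyFlow flow adapter itself (that requires the custom code from the Github repository), and the motion-adapter checkpoint name is an assumption.

```python
# Base AnimateDiff pipeline only; the OnlyFlow adapter is NOT loaded here.
# "guoyww/animatediff-motion-adapter-v1-5-2" is an assumed motion-adapter checkpoint.
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

frames = pipe(
    prompt="a boat sailing on a calm lake at sunset",
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=7.5,
).frames[0]
export_to_gif(frames, "baseline_animatediff.gif")
```
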
Please note that the model is very sensitive to the input video frame rate. If you do not get the desired results, try downsampling the input video to 8 fps first, for example as sketched below.
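
A minimal way to do that resampling from Python, assuming `ffmpeg` is available on the PATH; the file names are placeholders.

```python
# Resample a clip to 8 fps before running inference.
# "input.mp4" and "input_8fps.mp4" are placeholder file names.
import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "input.mp4", "-vf", "fps=8", "input_8fps.mp4"],
    check=True,
)
```
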
## Next steps

We are working on improving OnlyFlow and on training it for base models other than AnimateDiff.

We appreciate any help, so feel free to reach out! You can contact us:

- On Twitter: [@obv_research](https://x.com/obv_research)
- By mail: [email protected]
## About Obvious Research

Obvious Research is an Artificial Intelligence research laboratory dedicated to creating new AI artistic tools, initiated by the artists’ trio [Obvious](https://obvious-art.com/), in partnership with La Sorbonne Université.

assets/onlyflow_example_1.gif ADDED

Git LFS Details

  • SHA256: f2b5aafcc5d4c4a31de659cb0a45c4d632f32f6001d8cd3c0b32639fbf2c0495
  • Pointer size: 132 Bytes
  • Size of remote file: 3.1 MB
assets/onlyflow_example_2.gif ADDED

Git LFS Details

  • SHA256: e6444237105def44c30086100efd376412517857430cc4e73818a84a1c295fce
  • Pointer size: 132 Bytes
  • Size of remote file: 3.62 MB