Update README.md
README.md
@@ -96,22 +96,26 @@ python wan_generate_video.py --fp8 --task t2v-1.3B --video_size 1024 1024 --vide
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/6mrbztx2gZ7UR7a4kirIq.mp4"></video>

## Parameters

* `--fp8`: Enable FP8 precision (optional).
* `--task`: Specify the task (e.g., `t2v-1.3B`).
* `--video_size`: Set the resolution of the generated video (e.g., `1024 1024`).
* `--video_length`: Define the length of the video in frames.
* `--infer_steps`: Number of inference steps.
* `--save_path`: Directory to save the generated video.
* `--output_type`: Output type (e.g., `both` to save both the video and the individual frames).
* `--dit`: Path to the DiT (diffusion model) weights.
* `--vae`: Path to the VAE model weights.
* `--t5`: Path to the T5 text encoder weights.
* `--attn_mode`: Attention mode (e.g., `torch`).
* `--lora_weight`: Path to the LoRA weights.
* `--lora_multiplier`: Multiplier applied to the LoRA weights.
* `--prompt`: Text prompt for video generation.
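
For reference, a typical invocation combining these flags might look like the sketch below. All file paths, the prompt, and the numeric values (`--video_length`, `--infer_steps`, `--lora_multiplier`) are illustrative placeholders; substitute your own model weights and settings.

```bash
# Illustrative example only: paths, prompt, and numeric values are placeholders.
python wan_generate_video.py --fp8 --task t2v-1.3B \
  --video_size 1024 1024 --video_length 81 --infer_steps 20 \
  --save_path outputs --output_type both \
  --dit path/to/dit.safetensors \
  --vae path/to/vae.safetensors \
  --t5 path/to/t5.safetensors \
  --attn_mode torch \
  --lora_weight path/to/lora.safetensors --lora_multiplier 1.0 \
  --prompt "a description of the video you want to generate"
```
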
## Output