rahul7star commited on
Commit
1030ba2
·
verified ·
1 Parent(s): bbbac10

Migrated from GitHub

Browse files
LICENSE ADDED
@@ -0,0 +1,201 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Apache License
2
+ Version 2.0, January 2004
3
+ http://www.apache.org/licenses/
4
+
5
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
+
7
+ 1. Definitions.
8
+
9
+ "License" shall mean the terms and conditions for use, reproduction,
10
+ and distribution as defined by Sections 1 through 9 of this document.
11
+
12
+ "Licensor" shall mean the copyright owner or entity authorized by
13
+ the copyright owner that is granting the License.
14
+
15
+ "Legal Entity" shall mean the union of the acting entity and all
16
+ other entities that control, are controlled by, or are under common
17
+ control with that entity. For the purposes of this definition,
18
+ "control" means (i) the power, direct or indirect, to cause the
19
+ direction or management of such entity, whether by contract or
20
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
+ outstanding shares, or (iii) beneficial ownership of such entity.
22
+
23
+ "You" (or "Your") shall mean an individual or Legal Entity
24
+ exercising permissions granted by this License.
25
+
26
+ "Source" form shall mean the preferred form for making modifications,
27
+ including but not limited to software source code, documentation
28
+ source, and configuration files.
29
+
30
+ "Object" form shall mean any form resulting from mechanical
31
+ transformation or translation of a Source form, including but
32
+ not limited to compiled object code, generated documentation,
33
+ and conversions to other media types.
34
+
35
+ "Work" shall mean the work of authorship, whether in Source or
36
+ Object form, made available under the License, as indicated by a
37
+ copyright notice that is included in or attached to the work
38
+ (an example is provided in the Appendix below).
39
+
40
+ "Derivative Works" shall mean any work, whether in Source or Object
41
+ form, that is based on (or derived from) the Work and for which the
42
+ editorial revisions, annotations, elaborations, or other modifications
43
+ represent, as a whole, an original work of authorship. For the purposes
44
+ of this License, Derivative Works shall not include works that remain
45
+ separable from, or merely link (or bind by name) to the interfaces of,
46
+ the Work and Derivative Works thereof.
47
+
48
+ "Contribution" shall mean any work of authorship, including
49
+ the original version of the Work and any modifications or additions
50
+ to that Work or Derivative Works thereof, that is intentionally
51
+ submitted to Licensor for inclusion in the Work by the copyright owner
52
+ or by an individual or Legal Entity authorized to submit on behalf of
53
+ the copyright owner. For the purposes of this definition, "submitted"
54
+ means any form of electronic, verbal, or written communication sent
55
+ to the Licensor or its representatives, including but not limited to
56
+ communication on electronic mailing lists, source code control systems,
57
+ and issue tracking systems that are managed by, or on behalf of, the
58
+ Licensor for the purpose of discussing and improving the Work, but
59
+ excluding communication that is conspicuously marked or otherwise
60
+ designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity
63
+ on behalf of whom a Contribution has been received by Licensor and
64
+ subsequently incorporated within the Work.
65
+
66
+ 2. Grant of Copyright License. Subject to the terms and conditions of
67
+ this License, each Contributor hereby grants to You a perpetual,
68
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
+ copyright license to reproduce, prepare Derivative Works of,
70
+ publicly display, publicly perform, sublicense, and distribute the
71
+ Work and such Derivative Works in Source or Object form.
72
+
73
+ 3. Grant of Patent License. Subject to the terms and conditions of
74
+ this License, each Contributor hereby grants to You a perpetual,
75
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
+ (except as stated in this section) patent license to make, have made,
77
+ use, offer to sell, sell, import, and otherwise transfer the Work,
78
+ where such license applies only to those patent claims licensable
79
+ by such Contributor that are necessarily infringed by their
80
+ Contribution(s) alone or by combination of their Contribution(s)
81
+ with the Work to which such Contribution(s) was submitted. If You
82
+ institute patent litigation against any entity (including a
83
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
84
+ or a Contribution incorporated within the Work constitutes direct
85
+ or contributory patent infringement, then any patent licenses
86
+ granted to You under this License for that Work shall terminate
87
+ as of the date such litigation is filed.
88
+
89
+ 4. Redistribution. You may reproduce and distribute copies of the
90
+ Work or Derivative Works thereof in any medium, with or without
91
+ modifications, and in Source or Object form, provided that You
92
+ meet the following conditions:
93
+
94
+ (a) You must give any other recipients of the Work or
95
+ Derivative Works a copy of this License; and
96
+
97
+ (b) You must cause any modified files to carry prominent notices
98
+ stating that You changed the files; and
99
+
100
+ (c) You must retain, in the Source form of any Derivative Works
101
+ that You distribute, all copyright, patent, trademark, and
102
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright [yyyy] [name of copyright owner]
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
ORIGINAL_README.md ADDED
@@ -0,0 +1,67 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # goan: A Power-User UI for FramePack
2
+
3
+ Welcome to `goan`, an enhanced user interface designed for creative professionals and power users of FramePack. This project builds upon the brilliant FramePack video generation engine created by lllyasviel (of Fooocus and the Stable Diffusion Forge fork of A1111), exposing useful controls to reach further into FramePack's functionality through a robust and intuitive interface.
4
+
5
+ The base FramePack provides a powerful core model. `goan` extends it with a suite of tools designed for serious workflow, experimentation, and reproducibility. Unlock fine-grained control over your video generations with batch processing, parameter editing, effortless recipe sharing, and complete workspace management.
6
+
7
+ ---
8
+
9
+ ### Key Features for a Creative Workflow
10
+
11
+ The features in `goan` are designed to directly support two important aspects for any serious use of FramePack: deep control over the diffusion process and recovery handling for long-running jobs.
12
+
13
+ * **Uninterrupted Sessions & True Resilience:** Long-running jobs are the norm for video generation, and nothing is more frustrating than a UI disconnect or a browser crash wiping out hours of progress. `goan` is architected specifically to combat this fragility, a common pain point in many Gradio-based UIs.
14
+ * **Backend-Driven State:** The entire state of your session—the task queue, processing status, and live updates—is managed persistently on the backend. This means your job keeps running safely on the server, completely independent of the browser tab's status. Whether the tab is minimized, the screen is locked, or the connection is temporarily lost, your progress is secure.
15
+ * **Intelligent Session Re-attachment:** If your browser tab becomes disconnected for any reason, `goan`'s UI is designed to intelligently and automatically reconnect to the active backend process upon being re-focused. It finds the existing session and its live update stream, allowing the UI to seamlessly catch up to the real-time status of your render queue. This completely mitigates the dreaded "Error" boxes that plague typical Gradio interfaces during long sessions.
16
+ * **Robust Task Queue & Crash Recovery:** Beyond network stability, the entire task queue is automatically saved to disk. If there's a system crash or you need to restart, you can simply relaunch `goan`, and your queue will be right where you left it, ready to continue processing. No more lost work.
17
+
18
+ * **Limit MP4 Preview Generation:** The base FramePack functionality writes a VAE-decoded `.mp4` preview for every single segment. This creates a tremendous amount of potentially unneeded compute, causing video generation to take much longer than necessary. `goan` introduces the ability to restrict these previews to only the segments you care about, using a combination of a periodic slider and a comma-separated list of individual segments to dramatically speed up your workflow.
19
+
20
+ * **Advanced Diffusion Controls:** The base FramePack is a unique approach to video generation. `goan` exposes advanced controls like **Variable CFG**, which allows you to change the prompt adherence over the course of the video. This can be used to correct for a tendency for FramePack to "burn in" or oversaturate the final video as total length increases, giving you greater artistic control.
21
+
22
+ * **Effortless "Recipe" Sharing:** `goan`'s **Drop-in Parameter Loading** allows you to save all creative settings directly into a generated PNG. Share the image, and anyone using `goan` can drop it into their UI to instantly load your exact "recipe," making collaboration and experimentation simple and repeatable.
23
+
24
+ * **Complete Workspace Management:** For more complex projects, you can save your *entire UI state*—every slider, checkbox, and text prompt—into a single `.json` file. This ensures you can always get back to a specific setup for consistent results.
25
+
26
+ ### Deeper Dive: Functional & UI Control Comparison
27
+
28
+ For those curious about the specifics, here's a more detailed breakdown of what's new.
29
+
30
+ #### Diffusion Controls (CFG, Guidance Scale)
31
+
32
+ **Understanding CFG:** Classifier-Free Guidance (CFG) is a critical technique in diffusion models. Think of it as a knob that controls how strongly the model should adhere to your text prompt versus how much creative freedom it has.
33
+ * A **low CFG** value allows the model to be more imaginative, potentially straying from the prompt.
34
+ * A **high CFG** value forces the model to follow the prompt more strictly, which can sometimes reduce creativity or lead to artifacts if pushed too high.
35
+
36
+ In this model, there are two main guidance controls:
37
+
38
+ * **`Distilled CFG Scale` (`gs`):** This is the primary control you will use.
39
+ * Recommended settings to begin with are:
40
+ * Always start at 10.
41
+ * If bright colors contrast too harshly, experiment with levels around 7-9.
42
+ * Variable CFG has been added but useful suggestions here are still pending.
43
+ * **`CFG Scale` (`cfg`):** This controls the standard guidance, which is essential for negative prompts.
44
+ * **Important:** For your **Negative Prompt** to have any effect, you must set the `CFG Scale` to a value greater than 1.0 (e.g., 1.1 or 2.0).
45
+ * **Performance Trade-Off:** Be aware that setting `CFG Scale` to any value other than 1.0 will roughly **double the generation time** for your video, as it requires a second pass for each step. Use it only when you need the control of a negative prompt.
46
+
47
+ **Comparison:**
48
+ * **Base FramePack:** Presented a very simplified interface. Key controls like `CFG Scale` and `CFG Re-Scale` were hidden (`visible=False`), and the `info` text for `Distilled CFG Scale` explicitly said, "Changing this value is not recommended." This was effective for a simple demo but limited experimentation.
49
+ * **`goan`:** Exposes all guidance controls for the power user. It introduces the concept of **Variable CFG**, allowing the `Distilled CFG Scale` to change linearly over the course of the generation. This provides advanced control over the video's evolution, letting a user start with high prompt adherence and gradually decrease it, for example.
50
+
51
+ #### New Functionality: Workspace & Metadata
52
+
53
+ This entire feature set is new in `goan` to extend power user functionality to FramePack.
54
+
55
+ * **Drop-in Parameter Loading:** This is the core of the new workflow. You can take a PNG generated by `goan`, drop it into the image input, and the UI will automatically detect the embedded settings. A modal will ask if you want to apply them. This makes sharing and reusing "recipes" effortless.
56
+ * **Workspace Management:** Users can now save the *entire state* of the UI—all sliders, text boxes, and checkboxes—to a `.json` file. This "workspace" can be reloaded at any time, which is invaluable for complex projects or for ensuring consistent settings across sessions.
57
+ * **Session Persistence & Autosave:** The task queue is automatically saved when the application is closed and reloaded on startup. This prevents the loss of a long list of batched jobs. The UI also attempts to restore its last state after a page refresh.
58
+ * **Full Task Queue Control:** `goan` includes a full-featured task queue where you can add, remove, reorder, and *edit* jobs before you start processing. This is a massive improvement over the original's single-task processing model.
59
+
60
+ ### For Developers: A Look Under the Hood (pending review)
61
+
62
+ ### Code Example: A Glimpse at the New Architecture (pending review)
63
+
64
+ ### Acknowledgements
65
+
66
+ * **@lllyasviel** for creating the groundbreaking FramePack engine.
67
+ * **@Tophness** for the super-useful queueing system architecture introduced in FramePack PR #150, from which forms the foundation of goan's task management, background processing, and progress update features.
diffusers_helper/bucket_tools.py ADDED
@@ -0,0 +1,30 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ bucket_options = {
2
+ 640: [
3
+ (416, 960),
4
+ (448, 864),
5
+ (480, 832),
6
+ (512, 768),
7
+ (544, 704),
8
+ (576, 672),
9
+ (608, 640),
10
+ (640, 608),
11
+ (672, 576),
12
+ (704, 544),
13
+ (768, 512),
14
+ (832, 480),
15
+ (864, 448),
16
+ (960, 416),
17
+ ],
18
+ }
19
+
20
+
21
+ def find_nearest_bucket(h, w, resolution=640):
22
+ min_metric = float('inf')
23
+ best_bucket = None
24
+ for (bucket_h, bucket_w) in bucket_options[resolution]:
25
+ metric = abs(h * bucket_w - w * bucket_h)
26
+ if metric <= min_metric:
27
+ min_metric = metric
28
+ best_bucket = (bucket_h, bucket_w)
29
+ return best_bucket
30
+
diffusers_helper/clip_vision.py ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import numpy as np
2
+
3
+
4
+ def hf_clip_vision_encode(image, feature_extractor, image_encoder):
5
+ assert isinstance(image, np.ndarray)
6
+ assert image.ndim == 3 and image.shape[2] == 3
7
+ assert image.dtype == np.uint8
8
+
9
+ preprocessed = feature_extractor.preprocess(images=image, return_tensors="pt").to(device=image_encoder.device, dtype=image_encoder.dtype)
10
+ image_encoder_output = image_encoder(**preprocessed)
11
+
12
+ return image_encoder_output
diffusers_helper/dit_common.py ADDED
@@ -0,0 +1,53 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import torch
2
+ import accelerate.accelerator
3
+
4
+ from diffusers.models.normalization import RMSNorm, LayerNorm, FP32LayerNorm, AdaLayerNormContinuous
5
+
6
+
7
+ accelerate.accelerator.convert_outputs_to_fp32 = lambda x: x
8
+
9
+
10
+ def LayerNorm_forward(self, x):
11
+ return torch.nn.functional.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps).to(x)
12
+
13
+
14
+ LayerNorm.forward = LayerNorm_forward
15
+ torch.nn.LayerNorm.forward = LayerNorm_forward
16
+
17
+
18
+ def FP32LayerNorm_forward(self, x):
19
+ origin_dtype = x.dtype
20
+ return torch.nn.functional.layer_norm(
21
+ x.float(),
22
+ self.normalized_shape,
23
+ self.weight.float() if self.weight is not None else None,
24
+ self.bias.float() if self.bias is not None else None,
25
+ self.eps,
26
+ ).to(origin_dtype)
27
+
28
+
29
+ FP32LayerNorm.forward = FP32LayerNorm_forward
30
+
31
+
32
+ def RMSNorm_forward(self, hidden_states):
33
+ input_dtype = hidden_states.dtype
34
+ variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
35
+ hidden_states = hidden_states * torch.rsqrt(variance + self.eps)
36
+
37
+ if self.weight is None:
38
+ return hidden_states.to(input_dtype)
39
+
40
+ return hidden_states.to(input_dtype) * self.weight.to(input_dtype)
41
+
42
+
43
+ RMSNorm.forward = RMSNorm_forward
44
+
45
+
46
+ def AdaLayerNormContinuous_forward(self, x, conditioning_embedding):
47
+ emb = self.linear(self.silu(conditioning_embedding))
48
+ scale, shift = emb.chunk(2, dim=1)
49
+ x = self.norm(x) * (1 + scale)[:, None, :] + shift[:, None, :]
50
+ return x
51
+
52
+
53
+ AdaLayerNormContinuous.forward = AdaLayerNormContinuous_forward
diffusers_helper/gradio/progress_bar.py ADDED
@@ -0,0 +1,86 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ progress_html = '''
2
+ <div class="loader-container">
3
+ <div class="loader"></div>
4
+ <div class="progress-container">
5
+ <progress value="*number*" max="100"></progress>
6
+ </div>
7
+ <span>*text*</span>
8
+ </div>
9
+ '''
10
+
11
+ css = '''
12
+ .loader-container {
13
+ display: flex; /* Use flex to align items horizontally */
14
+ align-items: center; /* Center items vertically within the container */
15
+ white-space: nowrap; /* Prevent line breaks within the container */
16
+ }
17
+
18
+ .loader {
19
+ border: 8px solid #f3f3f3; /* Light grey */
20
+ border-top: 8px solid #3498db; /* Blue */
21
+ border-radius: 50%;
22
+ width: 30px;
23
+ height: 30px;
24
+ animation: spin 2s linear infinite;
25
+ }
26
+
27
+ @keyframes spin {
28
+ 0% { transform: rotate(0deg); }
29
+ 100% { transform: rotate(360deg); }
30
+ }
31
+
32
+ /* Style the progress bar */
33
+ progress {
34
+ appearance: none; /* Remove default styling */
35
+ height: 20px; /* Set the height of the progress bar */
36
+ border-radius: 5px; /* Round the corners of the progress bar */
37
+ background-color: #f3f3f3; /* Light grey background */
38
+ width: 100%;
39
+ vertical-align: middle !important;
40
+ }
41
+
42
+ /* Style the progress bar container */
43
+ .progress-container {
44
+ margin-left: 20px;
45
+ margin-right: 20px;
46
+ flex-grow: 1; /* Allow the progress container to take up remaining space */
47
+ }
48
+
49
+ /* Set the color of the progress bar fill */
50
+ progress::-webkit-progress-value {
51
+ background-color: #3498db; /* Blue color for the fill */
52
+ }
53
+
54
+ progress::-moz-progress-bar {
55
+ background-color: #3498db; /* Blue color for the fill in Firefox */
56
+ }
57
+
58
+ /* Style the text on the progress bar */
59
+ progress::after {
60
+ content: attr(value '%'); /* Display the progress value followed by '%' */
61
+ position: absolute;
62
+ top: 50%;
63
+ left: 50%;
64
+ transform: translate(-50%, -50%);
65
+ color: white; /* Set text color */
66
+ font-size: 14px; /* Set font size */
67
+ }
68
+
69
+ /* Style other texts */
70
+ .loader-container > span {
71
+ margin-left: 5px; /* Add spacing between the progress bar and the text */
72
+ }
73
+
74
+ .no-generating-animation > .generating {
75
+ display: none !important;
76
+ }
77
+
78
+ '''
79
+
80
+
81
+ def make_progress_bar_html(number, text):
82
+ return progress_html.replace('*number*', str(number)).replace('*text*', text)
83
+
84
+
85
+ def make_progress_bar_css():
86
+ return css
diffusers_helper/hf_login.py ADDED
@@ -0,0 +1,21 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import os
2
+
3
+
4
+ def login(token):
5
+ from huggingface_hub import login
6
+ import time
7
+
8
+ while True:
9
+ try:
10
+ login(token)
11
+ print('HF login ok.')
12
+ break
13
+ except Exception as e:
14
+ print(f'HF login failed: {e}. Retrying')
15
+ time.sleep(0.5)
16
+
17
+
18
+ hf_token = os.environ.get('HF_TOKEN', None)
19
+
20
+ if hf_token is not None:
21
+ login(hf_token)
diffusers_helper/hunyuan.py ADDED
@@ -0,0 +1,111 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import torch
2
+
3
+ from diffusers.pipelines.hunyuan_video.pipeline_hunyuan_video import DEFAULT_PROMPT_TEMPLATE
4
+ from diffusers_helper.utils import crop_or_pad_yield_mask
5
+
6
+
7
+ @torch.no_grad()
8
+ def encode_prompt_conds(prompt, text_encoder, text_encoder_2, tokenizer, tokenizer_2, max_length=256):
9
+ assert isinstance(prompt, str)
10
+
11
+ prompt = [prompt]
12
+
13
+ # LLAMA
14
+
15
+ prompt_llama = [DEFAULT_PROMPT_TEMPLATE["template"].format(p) for p in prompt]
16
+ crop_start = DEFAULT_PROMPT_TEMPLATE["crop_start"]
17
+
18
+ llama_inputs = tokenizer(
19
+ prompt_llama,
20
+ padding="max_length",
21
+ max_length=max_length + crop_start,
22
+ truncation=True,
23
+ return_tensors="pt",
24
+ return_length=False,
25
+ return_overflowing_tokens=False,
26
+ return_attention_mask=True,
27
+ )
28
+
29
+ llama_input_ids = llama_inputs.input_ids.to(text_encoder.device)
30
+ llama_attention_mask = llama_inputs.attention_mask.to(text_encoder.device)
31
+ llama_attention_length = int(llama_attention_mask.sum())
32
+
33
+ llama_outputs = text_encoder(
34
+ input_ids=llama_input_ids,
35
+ attention_mask=llama_attention_mask,
36
+ output_hidden_states=True,
37
+ )
38
+
39
+ llama_vec = llama_outputs.hidden_states[-3][:, crop_start:llama_attention_length]
40
+ # llama_vec_remaining = llama_outputs.hidden_states[-3][:, llama_attention_length:]
41
+ llama_attention_mask = llama_attention_mask[:, crop_start:llama_attention_length]
42
+
43
+ assert torch.all(llama_attention_mask.bool())
44
+
45
+ # CLIP
46
+
47
+ clip_l_input_ids = tokenizer_2(
48
+ prompt,
49
+ padding="max_length",
50
+ max_length=77,
51
+ truncation=True,
52
+ return_overflowing_tokens=False,
53
+ return_length=False,
54
+ return_tensors="pt",
55
+ ).input_ids
56
+ clip_l_pooler = text_encoder_2(clip_l_input_ids.to(text_encoder_2.device), output_hidden_states=False).pooler_output
57
+
58
+ return llama_vec, clip_l_pooler
59
+
60
+
61
+ @torch.no_grad()
62
+ def vae_decode_fake(latents):
63
+ latent_rgb_factors = [
64
+ [-0.0395, -0.0331, 0.0445],
65
+ [0.0696, 0.0795, 0.0518],
66
+ [0.0135, -0.0945, -0.0282],
67
+ [0.0108, -0.0250, -0.0765],
68
+ [-0.0209, 0.0032, 0.0224],
69
+ [-0.0804, -0.0254, -0.0639],
70
+ [-0.0991, 0.0271, -0.0669],
71
+ [-0.0646, -0.0422, -0.0400],
72
+ [-0.0696, -0.0595, -0.0894],
73
+ [-0.0799, -0.0208, -0.0375],
74
+ [0.1166, 0.1627, 0.0962],
75
+ [0.1165, 0.0432, 0.0407],
76
+ [-0.2315, -0.1920, -0.1355],
77
+ [-0.0270, 0.0401, -0.0821],
78
+ [-0.0616, -0.0997, -0.0727],
79
+ [0.0249, -0.0469, -0.1703]
80
+ ] # From comfyui
81
+
82
+ latent_rgb_factors_bias = [0.0259, -0.0192, -0.0761]
83
+
84
+ weight = torch.tensor(latent_rgb_factors, device=latents.device, dtype=latents.dtype).transpose(0, 1)[:, :, None, None, None]
85
+ bias = torch.tensor(latent_rgb_factors_bias, device=latents.device, dtype=latents.dtype)
86
+
87
+ images = torch.nn.functional.conv3d(latents, weight, bias=bias, stride=1, padding=0, dilation=1, groups=1)
88
+ images = images.clamp(0.0, 1.0)
89
+
90
+ return images
91
+
92
+
93
+ @torch.no_grad()
94
+ def vae_decode(latents, vae, image_mode=False):
95
+ latents = latents / vae.config.scaling_factor
96
+
97
+ if not image_mode:
98
+ image = vae.decode(latents.to(device=vae.device, dtype=vae.dtype)).sample
99
+ else:
100
+ latents = latents.to(device=vae.device, dtype=vae.dtype).unbind(2)
101
+ image = [vae.decode(l.unsqueeze(2)).sample for l in latents]
102
+ image = torch.cat(image, dim=2)
103
+
104
+ return image
105
+
106
+
107
+ @torch.no_grad()
108
+ def vae_encode(image, vae):
109
+ latents = vae.encode(image.to(device=vae.device, dtype=vae.dtype)).latent_dist.sample()
110
+ latents = latents * vae.config.scaling_factor
111
+ return latents
diffusers_helper/k_diffusion/uni_pc_fm.py ADDED
@@ -0,0 +1,141 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Better Flow Matching UniPC by Lvmin Zhang
2
+ # (c) 2025
3
+ # CC BY-SA 4.0
4
+ # Attribution-ShareAlike 4.0 International Licence
5
+
6
+
7
+ import torch
8
+
9
+ from tqdm.auto import trange
10
+
11
+
12
+ def expand_dims(v, dims):
13
+ return v[(...,) + (None,) * (dims - 1)]
14
+
15
+
16
+ class FlowMatchUniPC:
17
+ def __init__(self, model, extra_args, variant='bh1'):
18
+ self.model = model
19
+ self.variant = variant
20
+ self.extra_args = extra_args
21
+
22
+ def model_fn(self, x, t):
23
+ return self.model(x, t, **self.extra_args)
24
+
25
+ def update_fn(self, x, model_prev_list, t_prev_list, t, order):
26
+ assert order <= len(model_prev_list)
27
+ dims = x.dim()
28
+
29
+ t_prev_0 = t_prev_list[-1]
30
+ lambda_prev_0 = - torch.log(t_prev_0)
31
+ lambda_t = - torch.log(t)
32
+ model_prev_0 = model_prev_list[-1]
33
+
34
+ h = lambda_t - lambda_prev_0
35
+
36
+ rks = []
37
+ D1s = []
38
+ for i in range(1, order):
39
+ t_prev_i = t_prev_list[-(i + 1)]
40
+ model_prev_i = model_prev_list[-(i + 1)]
41
+ lambda_prev_i = - torch.log(t_prev_i)
42
+ rk = ((lambda_prev_i - lambda_prev_0) / h)[0]
43
+ rks.append(rk)
44
+ D1s.append((model_prev_i - model_prev_0) / rk)
45
+
46
+ rks.append(1.)
47
+ rks = torch.tensor(rks, device=x.device)
48
+
49
+ R = []
50
+ b = []
51
+
52
+ hh = -h[0]
53
+ h_phi_1 = torch.expm1(hh)
54
+ h_phi_k = h_phi_1 / hh - 1
55
+
56
+ factorial_i = 1
57
+
58
+ if self.variant == 'bh1':
59
+ B_h = hh
60
+ elif self.variant == 'bh2':
61
+ B_h = torch.expm1(hh)
62
+ else:
63
+ raise NotImplementedError('Bad variant!')
64
+
65
+ for i in range(1, order + 1):
66
+ R.append(torch.pow(rks, i - 1))
67
+ b.append(h_phi_k * factorial_i / B_h)
68
+ factorial_i *= (i + 1)
69
+ h_phi_k = h_phi_k / hh - 1 / factorial_i
70
+
71
+ R = torch.stack(R)
72
+ b = torch.tensor(b, device=x.device)
73
+
74
+ use_predictor = len(D1s) > 0
75
+
76
+ if use_predictor:
77
+ D1s = torch.stack(D1s, dim=1)
78
+ if order == 2:
79
+ rhos_p = torch.tensor([0.5], device=b.device)
80
+ else:
81
+ rhos_p = torch.linalg.solve(R[:-1, :-1], b[:-1])
82
+ else:
83
+ D1s = None
84
+ rhos_p = None
85
+
86
+ if order == 1:
87
+ rhos_c = torch.tensor([0.5], device=b.device)
88
+ else:
89
+ rhos_c = torch.linalg.solve(R, b)
90
+
91
+ x_t_ = expand_dims(t / t_prev_0, dims) * x - expand_dims(h_phi_1, dims) * model_prev_0
92
+
93
+ if use_predictor:
94
+ pred_res = torch.tensordot(D1s, rhos_p, dims=([1], [0]))
95
+ else:
96
+ pred_res = 0
97
+
98
+ x_t = x_t_ - expand_dims(B_h, dims) * pred_res
99
+ model_t = self.model_fn(x_t, t)
100
+
101
+ if D1s is not None:
102
+ corr_res = torch.tensordot(D1s, rhos_c[:-1], dims=([1], [0]))
103
+ else:
104
+ corr_res = 0
105
+
106
+ D1_t = (model_t - model_prev_0)
107
+ x_t = x_t_ - expand_dims(B_h, dims) * (corr_res + rhos_c[-1] * D1_t)
108
+
109
+ return x_t, model_t
110
+
111
+ def sample(self, x, sigmas, callback=None, disable_pbar=False):
112
+ order = min(3, len(sigmas) - 2)
113
+ model_prev_list, t_prev_list = [], []
114
+ for i in trange(len(sigmas) - 1, disable=disable_pbar):
115
+ vec_t = sigmas[i].expand(x.shape[0])
116
+
117
+ if i == 0:
118
+ model_prev_list = [self.model_fn(x, vec_t)]
119
+ t_prev_list = [vec_t]
120
+ elif i < order:
121
+ init_order = i
122
+ x, model_x = self.update_fn(x, model_prev_list, t_prev_list, vec_t, init_order)
123
+ model_prev_list.append(model_x)
124
+ t_prev_list.append(vec_t)
125
+ else:
126
+ x, model_x = self.update_fn(x, model_prev_list, t_prev_list, vec_t, order)
127
+ model_prev_list.append(model_x)
128
+ t_prev_list.append(vec_t)
129
+
130
+ model_prev_list = model_prev_list[-order:]
131
+ t_prev_list = t_prev_list[-order:]
132
+
133
+ if callback is not None:
134
+ callback({'x': x, 'i': i, 'denoised': model_prev_list[-1]})
135
+
136
+ return model_prev_list[-1]
137
+
138
+
139
+ def sample_unipc(model, noise, sigmas, extra_args=None, callback=None, disable=False, variant='bh1'):
140
+ assert variant in ['bh1', 'bh2']
141
+ return FlowMatchUniPC(model, extra_args=extra_args, variant=variant).sample(noise, sigmas=sigmas, callback=callback, disable_pbar=disable)
diffusers_helper/k_diffusion/wrapper.py ADDED
@@ -0,0 +1,51 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import torch
2
+
3
+
4
+ def append_dims(x, target_dims):
5
+ return x[(...,) + (None,) * (target_dims - x.ndim)]
6
+
7
+
8
+ def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=1.0):
9
+ if guidance_rescale == 0:
10
+ return noise_cfg
11
+
12
+ std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True)
13
+ std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True)
14
+ noise_pred_rescaled = noise_cfg * (std_text / std_cfg)
15
+ noise_cfg = guidance_rescale * noise_pred_rescaled + (1.0 - guidance_rescale) * noise_cfg
16
+ return noise_cfg
17
+
18
+
19
+ def fm_wrapper(transformer, t_scale=1000.0):
20
+ def k_model(x, sigma, **extra_args):
21
+ dtype = extra_args['dtype']
22
+ cfg_scale = extra_args['cfg_scale']
23
+ cfg_rescale = extra_args['cfg_rescale']
24
+ concat_latent = extra_args['concat_latent']
25
+
26
+ original_dtype = x.dtype
27
+ sigma = sigma.float()
28
+
29
+ x = x.to(dtype)
30
+ timestep = (sigma * t_scale).to(dtype)
31
+
32
+ if concat_latent is None:
33
+ hidden_states = x
34
+ else:
35
+ hidden_states = torch.cat([x, concat_latent.to(x)], dim=1)
36
+
37
+ pred_positive = transformer(hidden_states=hidden_states, timestep=timestep, return_dict=False, **extra_args['positive'])[0].float()
38
+
39
+ if cfg_scale == 1.0:
40
+ pred_negative = torch.zeros_like(pred_positive)
41
+ else:
42
+ pred_negative = transformer(hidden_states=hidden_states, timestep=timestep, return_dict=False, **extra_args['negative'])[0].float()
43
+
44
+ pred_cfg = pred_negative + cfg_scale * (pred_positive - pred_negative)
45
+ pred = rescale_noise_cfg(pred_cfg, pred_positive, guidance_rescale=cfg_rescale)
46
+
47
+ x0 = x.float() - pred.float() * append_dims(sigma, x.ndim)
48
+
49
+ return x0.to(dtype=original_dtype)
50
+
51
+ return k_model
diffusers_helper/memory.py ADDED
@@ -0,0 +1,134 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # By lllyasviel
2
+
3
+
4
+ import torch
5
+
6
+
7
+ cpu = torch.device('cpu')
8
+ gpu = torch.device(f'cuda:{torch.cuda.current_device()}')
9
+ gpu_complete_modules = []
10
+
11
+
12
+ class DynamicSwapInstaller:
13
+ @staticmethod
14
+ def _install_module(module: torch.nn.Module, **kwargs):
15
+ original_class = module.__class__
16
+ module.__dict__['forge_backup_original_class'] = original_class
17
+
18
+ def hacked_get_attr(self, name: str):
19
+ if '_parameters' in self.__dict__:
20
+ _parameters = self.__dict__['_parameters']
21
+ if name in _parameters:
22
+ p = _parameters[name]
23
+ if p is None:
24
+ return None
25
+ if p.__class__ == torch.nn.Parameter:
26
+ return torch.nn.Parameter(p.to(**kwargs), requires_grad=p.requires_grad)
27
+ else:
28
+ return p.to(**kwargs)
29
+ if '_buffers' in self.__dict__:
30
+ _buffers = self.__dict__['_buffers']
31
+ if name in _buffers:
32
+ return _buffers[name].to(**kwargs)
33
+ return super(original_class, self).__getattr__(name)
34
+
35
+ module.__class__ = type('DynamicSwap_' + original_class.__name__, (original_class,), {
36
+ '__getattr__': hacked_get_attr,
37
+ })
38
+
39
+ return
40
+
41
+ @staticmethod
42
+ def _uninstall_module(module: torch.nn.Module):
43
+ if 'forge_backup_original_class' in module.__dict__:
44
+ module.__class__ = module.__dict__.pop('forge_backup_original_class')
45
+ return
46
+
47
+ @staticmethod
48
+ def install_model(model: torch.nn.Module, **kwargs):
49
+ for m in model.modules():
50
+ DynamicSwapInstaller._install_module(m, **kwargs)
51
+ return
52
+
53
+ @staticmethod
54
+ def uninstall_model(model: torch.nn.Module):
55
+ for m in model.modules():
56
+ DynamicSwapInstaller._uninstall_module(m)
57
+ return
58
+
59
+
60
+ def fake_diffusers_current_device(model: torch.nn.Module, target_device: torch.device):
61
+ if hasattr(model, 'scale_shift_table'):
62
+ model.scale_shift_table.data = model.scale_shift_table.data.to(target_device)
63
+ return
64
+
65
+ for k, p in model.named_modules():
66
+ if hasattr(p, 'weight'):
67
+ p.to(target_device)
68
+ return
69
+
70
+
71
+ def get_cuda_free_memory_gb(device=None):
72
+ if device is None:
73
+ device = gpu
74
+
75
+ memory_stats = torch.cuda.memory_stats(device)
76
+ bytes_active = memory_stats['active_bytes.all.current']
77
+ bytes_reserved = memory_stats['reserved_bytes.all.current']
78
+ bytes_free_cuda, _ = torch.cuda.mem_get_info(device)
79
+ bytes_inactive_reserved = bytes_reserved - bytes_active
80
+ bytes_total_available = bytes_free_cuda + bytes_inactive_reserved
81
+ return bytes_total_available / (1024 ** 3)
82
+
83
+
84
+ def move_model_to_device_with_memory_preservation(model, target_device, preserved_memory_gb=0):
85
+ print(f'Moving {model.__class__.__name__} to {target_device} with preserved memory: {preserved_memory_gb} GB')
86
+
87
+ for m in model.modules():
88
+ if get_cuda_free_memory_gb(target_device) <= preserved_memory_gb:
89
+ torch.cuda.empty_cache()
90
+ return
91
+
92
+ if hasattr(m, 'weight'):
93
+ m.to(device=target_device)
94
+
95
+ model.to(device=target_device)
96
+ torch.cuda.empty_cache()
97
+ return
98
+
99
+
100
+ def offload_model_from_device_for_memory_preservation(model, target_device, preserved_memory_gb=0):
101
+ print(f'Offloading {model.__class__.__name__} from {target_device} to preserve memory: {preserved_memory_gb} GB')
102
+
103
+ for m in model.modules():
104
+ if get_cuda_free_memory_gb(target_device) >= preserved_memory_gb:
105
+ torch.cuda.empty_cache()
106
+ return
107
+
108
+ if hasattr(m, 'weight'):
109
+ m.to(device=cpu)
110
+
111
+ model.to(device=cpu)
112
+ torch.cuda.empty_cache()
113
+ return
114
+
115
+
116
+ def unload_complete_models(*args):
117
+ for m in gpu_complete_modules + list(args):
118
+ m.to(device=cpu)
119
+ print(f'Unloaded {m.__class__.__name__} as complete.')
120
+
121
+ gpu_complete_modules.clear()
122
+ torch.cuda.empty_cache()
123
+ return
124
+
125
+
126
+ def load_model_as_complete(model, target_device, unload=True):
127
+ if unload:
128
+ unload_complete_models()
129
+
130
+ model.to(device=target_device)
131
+ print(f'Loaded {model.__class__.__name__} to {target_device} as complete.')
132
+
133
+ gpu_complete_modules.append(model)
134
+ return
diffusers_helper/models/hunyuan_video_packed.py ADDED
@@ -0,0 +1,1035 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ from typing import Any, Dict, List, Optional, Tuple, Union
2
+
3
+ import torch
4
+ import einops
5
+ import torch.nn as nn
6
+ import numpy as np
7
+
8
+ from diffusers.loaders import FromOriginalModelMixin
9
+ from diffusers.configuration_utils import ConfigMixin, register_to_config
10
+ from diffusers.loaders import PeftAdapterMixin
11
+ from diffusers.utils import logging
12
+ from diffusers.models.attention import FeedForward
13
+ from diffusers.models.attention_processor import Attention
14
+ from diffusers.models.embeddings import TimestepEmbedding, Timesteps, PixArtAlphaTextProjection
15
+ from diffusers.models.modeling_outputs import Transformer2DModelOutput
16
+ from diffusers.models.modeling_utils import ModelMixin
17
+ from diffusers_helper.dit_common import LayerNorm
18
+ from diffusers_helper.utils import zero_module
19
+
20
+
21
+ enabled_backends = []
22
+
23
+ if torch.backends.cuda.flash_sdp_enabled():
24
+ enabled_backends.append("flash")
25
+ if torch.backends.cuda.math_sdp_enabled():
26
+ enabled_backends.append("math")
27
+ if torch.backends.cuda.mem_efficient_sdp_enabled():
28
+ enabled_backends.append("mem_efficient")
29
+ if torch.backends.cuda.cudnn_sdp_enabled():
30
+ enabled_backends.append("cudnn")
31
+
32
+ print("Currently enabled native sdp backends:", enabled_backends)
33
+
34
+ try:
35
+ # raise NotImplementedError
36
+ from xformers.ops import memory_efficient_attention as xformers_attn_func
37
+ print('Xformers is installed!')
38
+ except:
39
+ print('Xformers is not installed!')
40
+ xformers_attn_func = None
41
+
42
+ try:
43
+ # raise NotImplementedError
44
+ from flash_attn import flash_attn_varlen_func, flash_attn_func
45
+ print('Flash Attn is installed!')
46
+ except:
47
+ print('Flash Attn is not installed!')
48
+ flash_attn_varlen_func = None
49
+ flash_attn_func = None
50
+
51
+ try:
52
+ # raise NotImplementedError
53
+ from sageattention import sageattn_varlen, sageattn
54
+ print('Sage Attn is installed!')
55
+ except:
56
+ print('Sage Attn is not installed!')
57
+ sageattn_varlen = None
58
+ sageattn = None
59
+
60
+
61
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
62
+
63
+
64
+ def pad_for_3d_conv(x, kernel_size):
65
+ b, c, t, h, w = x.shape
66
+ pt, ph, pw = kernel_size
67
+ pad_t = (pt - (t % pt)) % pt
68
+ pad_h = (ph - (h % ph)) % ph
69
+ pad_w = (pw - (w % pw)) % pw
70
+ return torch.nn.functional.pad(x, (0, pad_w, 0, pad_h, 0, pad_t), mode='replicate')
71
+
72
+
73
+ def center_down_sample_3d(x, kernel_size):
74
+ # pt, ph, pw = kernel_size
75
+ # cp = (pt * ph * pw) // 2
76
+ # xp = einops.rearrange(x, 'b c (t pt) (h ph) (w pw) -> (pt ph pw) b c t h w', pt=pt, ph=ph, pw=pw)
77
+ # xc = xp[cp]
78
+ # return xc
79
+ return torch.nn.functional.avg_pool3d(x, kernel_size, stride=kernel_size)
80
+
81
+
82
+ def get_cu_seqlens(text_mask, img_len):
83
+ batch_size = text_mask.shape[0]
84
+ text_len = text_mask.sum(dim=1)
85
+ max_len = text_mask.shape[1] + img_len
86
+
87
+ cu_seqlens = torch.zeros([2 * batch_size + 1], dtype=torch.int32, device="cuda")
88
+
89
+ for i in range(batch_size):
90
+ s = text_len[i] + img_len
91
+ s1 = i * max_len + s
92
+ s2 = (i + 1) * max_len
93
+ cu_seqlens[2 * i + 1] = s1
94
+ cu_seqlens[2 * i + 2] = s2
95
+
96
+ return cu_seqlens
97
+
98
+
99
+ def apply_rotary_emb_transposed(x, freqs_cis):
100
+ cos, sin = freqs_cis.unsqueeze(-2).chunk(2, dim=-1)
101
+ x_real, x_imag = x.unflatten(-1, (-1, 2)).unbind(-1)
102
+ x_rotated = torch.stack([-x_imag, x_real], dim=-1).flatten(3)
103
+ out = x.float() * cos + x_rotated.float() * sin
104
+ out = out.to(x)
105
+ return out
106
+
107
+
108
+ def attn_varlen_func(q, k, v, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv):
109
+ if cu_seqlens_q is None and cu_seqlens_kv is None and max_seqlen_q is None and max_seqlen_kv is None:
110
+ if sageattn is not None:
111
+ x = sageattn(q, k, v, tensor_layout='NHD')
112
+ return x
113
+
114
+ if flash_attn_func is not None:
115
+ x = flash_attn_func(q, k, v)
116
+ return x
117
+
118
+ if xformers_attn_func is not None:
119
+ x = xformers_attn_func(q, k, v)
120
+ return x
121
+
122
+ x = torch.nn.functional.scaled_dot_product_attention(q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)).transpose(1, 2)
123
+ return x
124
+
125
+ B, L, H, C = q.shape
126
+
127
+ q = q.flatten(0, 1)
128
+ k = k.flatten(0, 1)
129
+ v = v.flatten(0, 1)
130
+
131
+ if sageattn_varlen is not None:
132
+ x = sageattn_varlen(q, k, v, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv)
133
+ elif flash_attn_varlen_func is not None:
134
+ x = flash_attn_varlen_func(q, k, v, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv)
135
+ else:
136
+ raise NotImplementedError('No Attn Installed!')
137
+
138
+ x = x.unflatten(0, (B, L))
139
+
140
+ return x
141
+
142
+
143
+ class HunyuanAttnProcessorFlashAttnDouble:
144
+ def __call__(self, attn, hidden_states, encoder_hidden_states, attention_mask, image_rotary_emb):
145
+ cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv = attention_mask
146
+
147
+ query = attn.to_q(hidden_states)
148
+ key = attn.to_k(hidden_states)
149
+ value = attn.to_v(hidden_states)
150
+
151
+ query = query.unflatten(2, (attn.heads, -1))
152
+ key = key.unflatten(2, (attn.heads, -1))
153
+ value = value.unflatten(2, (attn.heads, -1))
154
+
155
+ query = attn.norm_q(query)
156
+ key = attn.norm_k(key)
157
+
158
+ query = apply_rotary_emb_transposed(query, image_rotary_emb)
159
+ key = apply_rotary_emb_transposed(key, image_rotary_emb)
160
+
161
+ encoder_query = attn.add_q_proj(encoder_hidden_states)
162
+ encoder_key = attn.add_k_proj(encoder_hidden_states)
163
+ encoder_value = attn.add_v_proj(encoder_hidden_states)
164
+
165
+ encoder_query = encoder_query.unflatten(2, (attn.heads, -1))
166
+ encoder_key = encoder_key.unflatten(2, (attn.heads, -1))
167
+ encoder_value = encoder_value.unflatten(2, (attn.heads, -1))
168
+
169
+ encoder_query = attn.norm_added_q(encoder_query)
170
+ encoder_key = attn.norm_added_k(encoder_key)
171
+
172
+ query = torch.cat([query, encoder_query], dim=1)
173
+ key = torch.cat([key, encoder_key], dim=1)
174
+ value = torch.cat([value, encoder_value], dim=1)
175
+
176
+ hidden_states = attn_varlen_func(query, key, value, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv)
177
+ hidden_states = hidden_states.flatten(-2)
178
+
179
+ txt_length = encoder_hidden_states.shape[1]
180
+ hidden_states, encoder_hidden_states = hidden_states[:, :-txt_length], hidden_states[:, -txt_length:]
181
+
182
+ hidden_states = attn.to_out[0](hidden_states)
183
+ hidden_states = attn.to_out[1](hidden_states)
184
+ encoder_hidden_states = attn.to_add_out(encoder_hidden_states)
185
+
186
+ return hidden_states, encoder_hidden_states
187
+
188
+
189
+ class HunyuanAttnProcessorFlashAttnSingle:
190
+ def __call__(self, attn, hidden_states, encoder_hidden_states, attention_mask, image_rotary_emb):
191
+ cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv = attention_mask
192
+
193
+ hidden_states = torch.cat([hidden_states, encoder_hidden_states], dim=1)
194
+
195
+ query = attn.to_q(hidden_states)
196
+ key = attn.to_k(hidden_states)
197
+ value = attn.to_v(hidden_states)
198
+
199
+ query = query.unflatten(2, (attn.heads, -1))
200
+ key = key.unflatten(2, (attn.heads, -1))
201
+ value = value.unflatten(2, (attn.heads, -1))
202
+
203
+ query = attn.norm_q(query)
204
+ key = attn.norm_k(key)
205
+
206
+ txt_length = encoder_hidden_states.shape[1]
207
+
208
+ query = torch.cat([apply_rotary_emb_transposed(query[:, :-txt_length], image_rotary_emb), query[:, -txt_length:]], dim=1)
209
+ key = torch.cat([apply_rotary_emb_transposed(key[:, :-txt_length], image_rotary_emb), key[:, -txt_length:]], dim=1)
210
+
211
+ hidden_states = attn_varlen_func(query, key, value, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv)
212
+ hidden_states = hidden_states.flatten(-2)
213
+
214
+ hidden_states, encoder_hidden_states = hidden_states[:, :-txt_length], hidden_states[:, -txt_length:]
215
+
216
+ return hidden_states, encoder_hidden_states
217
+
218
+
219
+ class CombinedTimestepGuidanceTextProjEmbeddings(nn.Module):
220
+ def __init__(self, embedding_dim, pooled_projection_dim):
221
+ super().__init__()
222
+
223
+ self.time_proj = Timesteps(num_channels=256, flip_sin_to_cos=True, downscale_freq_shift=0)
224
+ self.timestep_embedder = TimestepEmbedding(in_channels=256, time_embed_dim=embedding_dim)
225
+ self.guidance_embedder = TimestepEmbedding(in_channels=256, time_embed_dim=embedding_dim)
226
+ self.text_embedder = PixArtAlphaTextProjection(pooled_projection_dim, embedding_dim, act_fn="silu")
227
+
228
+ def forward(self, timestep, guidance, pooled_projection):
229
+ timesteps_proj = self.time_proj(timestep)
230
+ timesteps_emb = self.timestep_embedder(timesteps_proj.to(dtype=pooled_projection.dtype))
231
+
232
+ guidance_proj = self.time_proj(guidance)
233
+ guidance_emb = self.guidance_embedder(guidance_proj.to(dtype=pooled_projection.dtype))
234
+
235
+ time_guidance_emb = timesteps_emb + guidance_emb
236
+
237
+ pooled_projections = self.text_embedder(pooled_projection)
238
+ conditioning = time_guidance_emb + pooled_projections
239
+
240
+ return conditioning
241
+
242
+
243
+ class CombinedTimestepTextProjEmbeddings(nn.Module):
244
+ def __init__(self, embedding_dim, pooled_projection_dim):
245
+ super().__init__()
246
+
247
+ self.time_proj = Timesteps(num_channels=256, flip_sin_to_cos=True, downscale_freq_shift=0)
248
+ self.timestep_embedder = TimestepEmbedding(in_channels=256, time_embed_dim=embedding_dim)
249
+ self.text_embedder = PixArtAlphaTextProjection(pooled_projection_dim, embedding_dim, act_fn="silu")
250
+
251
+ def forward(self, timestep, pooled_projection):
252
+ timesteps_proj = self.time_proj(timestep)
253
+ timesteps_emb = self.timestep_embedder(timesteps_proj.to(dtype=pooled_projection.dtype))
254
+
255
+ pooled_projections = self.text_embedder(pooled_projection)
256
+
257
+ conditioning = timesteps_emb + pooled_projections
258
+
259
+ return conditioning
260
+
261
+
262
+ class HunyuanVideoAdaNorm(nn.Module):
263
+ def __init__(self, in_features: int, out_features: Optional[int] = None) -> None:
264
+ super().__init__()
265
+
266
+ out_features = out_features or 2 * in_features
267
+ self.linear = nn.Linear(in_features, out_features)
268
+ self.nonlinearity = nn.SiLU()
269
+
270
+ def forward(
271
+ self, temb: torch.Tensor
272
+ ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
273
+ temb = self.linear(self.nonlinearity(temb))
274
+ gate_msa, gate_mlp = temb.chunk(2, dim=-1)
275
+ gate_msa, gate_mlp = gate_msa.unsqueeze(1), gate_mlp.unsqueeze(1)
276
+ return gate_msa, gate_mlp
277
+
278
+
279
+ class HunyuanVideoIndividualTokenRefinerBlock(nn.Module):
280
+ def __init__(
281
+ self,
282
+ num_attention_heads: int,
283
+ attention_head_dim: int,
284
+ mlp_width_ratio: str = 4.0,
285
+ mlp_drop_rate: float = 0.0,
286
+ attention_bias: bool = True,
287
+ ) -> None:
288
+ super().__init__()
289
+
290
+ hidden_size = num_attention_heads * attention_head_dim
291
+
292
+ self.norm1 = LayerNorm(hidden_size, elementwise_affine=True, eps=1e-6)
293
+ self.attn = Attention(
294
+ query_dim=hidden_size,
295
+ cross_attention_dim=None,
296
+ heads=num_attention_heads,
297
+ dim_head=attention_head_dim,
298
+ bias=attention_bias,
299
+ )
300
+
301
+ self.norm2 = LayerNorm(hidden_size, elementwise_affine=True, eps=1e-6)
302
+ self.ff = FeedForward(hidden_size, mult=mlp_width_ratio, activation_fn="linear-silu", dropout=mlp_drop_rate)
303
+
304
+ self.norm_out = HunyuanVideoAdaNorm(hidden_size, 2 * hidden_size)
305
+
306
+ def forward(
307
+ self,
308
+ hidden_states: torch.Tensor,
309
+ temb: torch.Tensor,
310
+ attention_mask: Optional[torch.Tensor] = None,
311
+ ) -> torch.Tensor:
312
+ norm_hidden_states = self.norm1(hidden_states)
313
+
314
+ attn_output = self.attn(
315
+ hidden_states=norm_hidden_states,
316
+ encoder_hidden_states=None,
317
+ attention_mask=attention_mask,
318
+ )
319
+
320
+ gate_msa, gate_mlp = self.norm_out(temb)
321
+ hidden_states = hidden_states + attn_output * gate_msa
322
+
323
+ ff_output = self.ff(self.norm2(hidden_states))
324
+ hidden_states = hidden_states + ff_output * gate_mlp
325
+
326
+ return hidden_states
327
+
328
+
329
+ class HunyuanVideoIndividualTokenRefiner(nn.Module):
330
+ def __init__(
331
+ self,
332
+ num_attention_heads: int,
333
+ attention_head_dim: int,
334
+ num_layers: int,
335
+ mlp_width_ratio: float = 4.0,
336
+ mlp_drop_rate: float = 0.0,
337
+ attention_bias: bool = True,
338
+ ) -> None:
339
+ super().__init__()
340
+
341
+ self.refiner_blocks = nn.ModuleList(
342
+ [
343
+ HunyuanVideoIndividualTokenRefinerBlock(
344
+ num_attention_heads=num_attention_heads,
345
+ attention_head_dim=attention_head_dim,
346
+ mlp_width_ratio=mlp_width_ratio,
347
+ mlp_drop_rate=mlp_drop_rate,
348
+ attention_bias=attention_bias,
349
+ )
350
+ for _ in range(num_layers)
351
+ ]
352
+ )
353
+
354
+ def forward(
355
+ self,
356
+ hidden_states: torch.Tensor,
357
+ temb: torch.Tensor,
358
+ attention_mask: Optional[torch.Tensor] = None,
359
+ ) -> None:
360
+ self_attn_mask = None
361
+ if attention_mask is not None:
362
+ batch_size = attention_mask.shape[0]
363
+ seq_len = attention_mask.shape[1]
364
+ attention_mask = attention_mask.to(hidden_states.device).bool()
365
+ self_attn_mask_1 = attention_mask.view(batch_size, 1, 1, seq_len).repeat(1, 1, seq_len, 1)
366
+ self_attn_mask_2 = self_attn_mask_1.transpose(2, 3)
367
+ self_attn_mask = (self_attn_mask_1 & self_attn_mask_2).bool()
368
+ self_attn_mask[:, :, :, 0] = True
369
+
370
+ for block in self.refiner_blocks:
371
+ hidden_states = block(hidden_states, temb, self_attn_mask)
372
+
373
+ return hidden_states
374
+
375
+
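+ # --- Editor's sketch (illustrative, not part of the original file) ---
+ # The refiner turns a 1D validity mask into a pairwise self-attention mask:
+ # token i may attend to token j only when both are valid, and column 0 is
+ # forced on so every row keeps at least one attendable key.
+ def _sketch_pairwise_mask():
+     valid = torch.tensor([[1, 1, 0]]).bool()    # (batch, seq)
+     m1 = valid.view(1, 1, 1, 3).repeat(1, 1, 3, 1)
+     mask = m1 & m1.transpose(2, 3)
+     mask[:, :, :, 0] = True
+     assert mask[0, 0].tolist() == [[True, True, False], [True, True, False], [True, False, False]]
+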
376
+ class HunyuanVideoTokenRefiner(nn.Module):
377
+ def __init__(
378
+ self,
379
+ in_channels: int,
380
+ num_attention_heads: int,
381
+ attention_head_dim: int,
382
+ num_layers: int,
383
+ mlp_ratio: float = 4.0,
384
+ mlp_drop_rate: float = 0.0,
385
+ attention_bias: bool = True,
386
+ ) -> None:
387
+ super().__init__()
388
+
389
+ hidden_size = num_attention_heads * attention_head_dim
390
+
391
+ self.time_text_embed = CombinedTimestepTextProjEmbeddings(
392
+ embedding_dim=hidden_size, pooled_projection_dim=in_channels
393
+ )
394
+ self.proj_in = nn.Linear(in_channels, hidden_size, bias=True)
395
+ self.token_refiner = HunyuanVideoIndividualTokenRefiner(
396
+ num_attention_heads=num_attention_heads,
397
+ attention_head_dim=attention_head_dim,
398
+ num_layers=num_layers,
399
+ mlp_width_ratio=mlp_ratio,
400
+ mlp_drop_rate=mlp_drop_rate,
401
+ attention_bias=attention_bias,
402
+ )
403
+
404
+ def forward(
405
+ self,
406
+ hidden_states: torch.Tensor,
407
+ timestep: torch.LongTensor,
408
+ attention_mask: Optional[torch.LongTensor] = None,
409
+ ) -> torch.Tensor:
410
+ if attention_mask is None:
411
+ pooled_projections = hidden_states.mean(dim=1)
412
+ else:
413
+ original_dtype = hidden_states.dtype
414
+ mask_float = attention_mask.float().unsqueeze(-1)
415
+ pooled_projections = (hidden_states * mask_float).sum(dim=1) / mask_float.sum(dim=1)
416
+ pooled_projections = pooled_projections.to(original_dtype)
417
+
418
+ temb = self.time_text_embed(timestep, pooled_projections)
419
+ hidden_states = self.proj_in(hidden_states)
420
+ hidden_states = self.token_refiner(hidden_states, temb, attention_mask)
421
+
422
+ return hidden_states
423
+
424
+
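+ # --- Editor's sketch (illustrative, not part of the original file) ---
+ # When an attention mask is given, the pooled text feature is a masked mean,
+ # so padding tokens do not dilute the timestep/text conditioning vector.
+ def _sketch_masked_mean_pool():
+     hidden_states = torch.arange(6.0).reshape(1, 3, 2)   # (batch, seq, dim)
+     mask_float = torch.tensor([[1, 1, 0]]).float().unsqueeze(-1)
+     pooled = (hidden_states * mask_float).sum(dim=1) / mask_float.sum(dim=1)
+     assert torch.allclose(pooled, torch.tensor([[1.0, 2.0]]))   # mean of first two tokens
+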
425
+ class HunyuanVideoRotaryPosEmbed(nn.Module):
426
+ def __init__(self, rope_dim, theta):
427
+ super().__init__()
428
+ self.DT, self.DY, self.DX = rope_dim
429
+ self.theta = theta
430
+
431
+ @torch.no_grad()
432
+ def get_frequency(self, dim, pos):
433
+ T, H, W = pos.shape
434
+ freqs = 1.0 / (self.theta ** (torch.arange(0, dim, 2, dtype=torch.float32, device=pos.device)[: (dim // 2)] / dim))
435
+ freqs = torch.outer(freqs, pos.reshape(-1)).unflatten(-1, (T, H, W)).repeat_interleave(2, dim=0)
436
+ return freqs.cos(), freqs.sin()
437
+
438
+ @torch.no_grad()
439
+ def forward_inner(self, frame_indices, height, width, device):
440
+ GT, GY, GX = torch.meshgrid(
441
+ frame_indices.to(device=device, dtype=torch.float32),
442
+ torch.arange(0, height, device=device, dtype=torch.float32),
443
+ torch.arange(0, width, device=device, dtype=torch.float32),
444
+ indexing="ij"
445
+ )
446
+
447
+ FCT, FST = self.get_frequency(self.DT, GT)
448
+ FCY, FSY = self.get_frequency(self.DY, GY)
449
+ FCX, FSX = self.get_frequency(self.DX, GX)
450
+
451
+ result = torch.cat([FCT, FCY, FCX, FST, FSY, FSX], dim=0)
452
+
453
+ return result.to(device)
454
+
455
+ @torch.no_grad()
456
+ def forward(self, frame_indices, height, width, device):
457
+ frame_indices = frame_indices.unbind(0)
458
+ results = [self.forward_inner(f, height, width, device) for f in frame_indices]
459
+ results = torch.stack(results, dim=0)
460
+ return results
461
+
462
+
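+ # --- Editor's sketch (illustrative, not part of the original file) ---
+ # The three RoPE axes split the head dimension across time/height/width; with
+ # the defaults used later, (16, 56, 56) sums to attention_head_dim = 128, and
+ # the concatenated cos/sin halves double that to 256 rows per (t, h, w) position.
+ def _sketch_rope_shapes():
+     rope = HunyuanVideoRotaryPosEmbed(rope_dim=(16, 56, 56), theta=256.0)
+     freqs = rope(torch.arange(3).unsqueeze(0), height=4, width=4, device='cpu')
+     assert freqs.shape == (1, 256, 3, 4, 4)     # (batch, 2 * head_dim, T, H, W)
+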
463
+ class AdaLayerNormZero(nn.Module):
464
+ def __init__(self, embedding_dim: int, norm_type="layer_norm", bias=True):
465
+ super().__init__()
466
+ self.silu = nn.SiLU()
467
+ self.linear = nn.Linear(embedding_dim, 6 * embedding_dim, bias=bias)
468
+ if norm_type == "layer_norm":
469
+ self.norm = LayerNorm(embedding_dim, elementwise_affine=False, eps=1e-6)
470
+ else:
471
+ raise ValueError(f"unknown norm_type {norm_type}")
472
+
473
+ def forward(
474
+ self,
475
+ x: torch.Tensor,
476
+ emb: Optional[torch.Tensor] = None,
477
+ ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
478
+ emb = emb.unsqueeze(-2)
479
+ emb = self.linear(self.silu(emb))
480
+ shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = emb.chunk(6, dim=-1)
481
+ x = self.norm(x) * (1 + scale_msa) + shift_msa
482
+ return x, gate_msa, shift_mlp, scale_mlp, gate_mlp
483
+
484
+
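+ # --- Editor's sketch (illustrative, not part of the original file) ---
+ # AdaLN-Zero: a single SiLU + Linear over the conditioning embedding yields
+ # six modulation tensors; shift/scale modulate the normalized activations as
+ # norm(x) * (1 + scale) + shift, and the gates scale the residual branches.
+ def _sketch_adaln_zero():
+     norm = AdaLayerNormZero(embedding_dim=16)
+     x, emb = torch.randn(2, 7, 16), torch.randn(2, 16)
+     x_mod, gate_msa, shift_mlp, scale_mlp, gate_mlp = norm(x, emb)
+     assert x_mod.shape == (2, 7, 16) and gate_msa.shape == (2, 1, 16)
+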
485
+ class AdaLayerNormZeroSingle(nn.Module):
486
+ def __init__(self, embedding_dim: int, norm_type="layer_norm", bias=True):
487
+ super().__init__()
488
+
489
+ self.silu = nn.SiLU()
490
+ self.linear = nn.Linear(embedding_dim, 3 * embedding_dim, bias=bias)
491
+ if norm_type == "layer_norm":
492
+ self.norm = LayerNorm(embedding_dim, elementwise_affine=False, eps=1e-6)
493
+ else:
494
+ raise ValueError(f"unknown norm_type {norm_type}")
495
+
496
+ def forward(
497
+ self,
498
+ x: torch.Tensor,
499
+ emb: Optional[torch.Tensor] = None,
500
+ ) -> Tuple[torch.Tensor, torch.Tensor]:
501
+ emb = emb.unsqueeze(-2)
502
+ emb = self.linear(self.silu(emb))
503
+ shift_msa, scale_msa, gate_msa = emb.chunk(3, dim=-1)
504
+ x = self.norm(x) * (1 + scale_msa) + shift_msa
505
+ return x, gate_msa
506
+
507
+
508
+ class AdaLayerNormContinuous(nn.Module):
509
+ def __init__(
510
+ self,
511
+ embedding_dim: int,
512
+ conditioning_embedding_dim: int,
513
+ elementwise_affine=True,
514
+ eps=1e-5,
515
+ bias=True,
516
+ norm_type="layer_norm",
517
+ ):
518
+ super().__init__()
519
+ self.silu = nn.SiLU()
520
+ self.linear = nn.Linear(conditioning_embedding_dim, embedding_dim * 2, bias=bias)
521
+ if norm_type == "layer_norm":
522
+ self.norm = LayerNorm(embedding_dim, eps, elementwise_affine, bias)
523
+ else:
524
+ raise ValueError(f"unknown norm_type {norm_type}")
525
+
526
+ def forward(self, x: torch.Tensor, emb: torch.Tensor) -> torch.Tensor:
527
+ emb = emb.unsqueeze(-2)
528
+ emb = self.linear(self.silu(emb))
529
+ scale, shift = emb.chunk(2, dim=-1)
530
+ x = self.norm(x) * (1 + scale) + shift
531
+ return x
532
+
533
+
534
+ class HunyuanVideoSingleTransformerBlock(nn.Module):
535
+ def __init__(
536
+ self,
537
+ num_attention_heads: int,
538
+ attention_head_dim: int,
539
+ mlp_ratio: float = 4.0,
540
+ qk_norm: str = "rms_norm",
541
+ ) -> None:
542
+ super().__init__()
543
+
544
+ hidden_size = num_attention_heads * attention_head_dim
545
+ mlp_dim = int(hidden_size * mlp_ratio)
546
+
547
+ self.attn = Attention(
548
+ query_dim=hidden_size,
549
+ cross_attention_dim=None,
550
+ dim_head=attention_head_dim,
551
+ heads=num_attention_heads,
552
+ out_dim=hidden_size,
553
+ bias=True,
554
+ processor=HunyuanAttnProcessorFlashAttnSingle(),
555
+ qk_norm=qk_norm,
556
+ eps=1e-6,
557
+ pre_only=True,
558
+ )
559
+
560
+ self.norm = AdaLayerNormZeroSingle(hidden_size, norm_type="layer_norm")
561
+ self.proj_mlp = nn.Linear(hidden_size, mlp_dim)
562
+ self.act_mlp = nn.GELU(approximate="tanh")
563
+ self.proj_out = nn.Linear(hidden_size + mlp_dim, hidden_size)
564
+
565
+ def forward(
566
+ self,
567
+ hidden_states: torch.Tensor,
568
+ encoder_hidden_states: torch.Tensor,
569
+ temb: torch.Tensor,
570
+ attention_mask: Optional[torch.Tensor] = None,
571
+ image_rotary_emb: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
572
+ ) -> Tuple[torch.Tensor, torch.Tensor]:
573
+ text_seq_length = encoder_hidden_states.shape[1]
574
+ hidden_states = torch.cat([hidden_states, encoder_hidden_states], dim=1)
575
+
576
+ residual = hidden_states
577
+
578
+ # 1. Input normalization
579
+ norm_hidden_states, gate = self.norm(hidden_states, emb=temb)
580
+ mlp_hidden_states = self.act_mlp(self.proj_mlp(norm_hidden_states))
581
+
582
+ norm_hidden_states, norm_encoder_hidden_states = (
583
+ norm_hidden_states[:, :-text_seq_length, :],
584
+ norm_hidden_states[:, -text_seq_length:, :],
585
+ )
586
+
587
+ # 2. Attention
588
+ attn_output, context_attn_output = self.attn(
589
+ hidden_states=norm_hidden_states,
590
+ encoder_hidden_states=norm_encoder_hidden_states,
591
+ attention_mask=attention_mask,
592
+ image_rotary_emb=image_rotary_emb,
593
+ )
594
+ attn_output = torch.cat([attn_output, context_attn_output], dim=1)
595
+
596
+ # 3. Modulation and residual connection
597
+ hidden_states = torch.cat([attn_output, mlp_hidden_states], dim=2)
598
+ hidden_states = gate * self.proj_out(hidden_states)
599
+ hidden_states = hidden_states + residual
600
+
601
+ hidden_states, encoder_hidden_states = (
602
+ hidden_states[:, :-text_seq_length, :],
603
+ hidden_states[:, -text_seq_length:, :],
604
+ )
605
+ return hidden_states, encoder_hidden_states
606
+
607
+
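+ # --- Editor's sketch (illustrative, not part of the original file) ---
+ # Token layout in the single-stream block: video tokens come first and text
+ # tokens last, so the block splits the joint sequence back apart by slicing
+ # with -text_seq_length after the fused attention + MLP update.
+ def _sketch_joint_token_split(text_seq_length: int = 3):
+     joint = torch.randn(1, 10, 4)               # 7 video tokens + 3 text tokens
+     video, text = joint[:, :-text_seq_length, :], joint[:, -text_seq_length:, :]
+     assert video.shape[1] == 7 and text.shape[1] == 3
+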
608
+ class HunyuanVideoTransformerBlock(nn.Module):
609
+ def __init__(
610
+ self,
611
+ num_attention_heads: int,
612
+ attention_head_dim: int,
613
+ mlp_ratio: float,
614
+ qk_norm: str = "rms_norm",
615
+ ) -> None:
616
+ super().__init__()
617
+
618
+ hidden_size = num_attention_heads * attention_head_dim
619
+
620
+ self.norm1 = AdaLayerNormZero(hidden_size, norm_type="layer_norm")
621
+ self.norm1_context = AdaLayerNormZero(hidden_size, norm_type="layer_norm")
622
+
623
+ self.attn = Attention(
624
+ query_dim=hidden_size,
625
+ cross_attention_dim=None,
626
+ added_kv_proj_dim=hidden_size,
627
+ dim_head=attention_head_dim,
628
+ heads=num_attention_heads,
629
+ out_dim=hidden_size,
630
+ context_pre_only=False,
631
+ bias=True,
632
+ processor=HunyuanAttnProcessorFlashAttnDouble(),
633
+ qk_norm=qk_norm,
634
+ eps=1e-6,
635
+ )
636
+
637
+ self.norm2 = LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
638
+ self.ff = FeedForward(hidden_size, mult=mlp_ratio, activation_fn="gelu-approximate")
639
+
640
+ self.norm2_context = LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
641
+ self.ff_context = FeedForward(hidden_size, mult=mlp_ratio, activation_fn="gelu-approximate")
642
+
643
+ def forward(
644
+ self,
645
+ hidden_states: torch.Tensor,
646
+ encoder_hidden_states: torch.Tensor,
647
+ temb: torch.Tensor,
648
+ attention_mask: Optional[torch.Tensor] = None,
649
+ freqs_cis: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
650
+ ) -> Tuple[torch.Tensor, torch.Tensor]:
651
+ # 1. Input normalization
652
+ norm_hidden_states, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.norm1(hidden_states, emb=temb)
653
+ norm_encoder_hidden_states, c_gate_msa, c_shift_mlp, c_scale_mlp, c_gate_mlp = self.norm1_context(encoder_hidden_states, emb=temb)
654
+
655
+ # 2. Joint attention
656
+ attn_output, context_attn_output = self.attn(
657
+ hidden_states=norm_hidden_states,
658
+ encoder_hidden_states=norm_encoder_hidden_states,
659
+ attention_mask=attention_mask,
660
+ image_rotary_emb=freqs_cis,
661
+ )
662
+
663
+ # 3. Modulation and residual connection
664
+ hidden_states = hidden_states + attn_output * gate_msa
665
+ encoder_hidden_states = encoder_hidden_states + context_attn_output * c_gate_msa
666
+
667
+ norm_hidden_states = self.norm2(hidden_states)
668
+ norm_encoder_hidden_states = self.norm2_context(encoder_hidden_states)
669
+
670
+ norm_hidden_states = norm_hidden_states * (1 + scale_mlp) + shift_mlp
671
+ norm_encoder_hidden_states = norm_encoder_hidden_states * (1 + c_scale_mlp) + c_shift_mlp
672
+
673
+ # 4. Feed-forward
674
+ ff_output = self.ff(norm_hidden_states)
675
+ context_ff_output = self.ff_context(norm_encoder_hidden_states)
676
+
677
+ hidden_states = hidden_states + gate_mlp * ff_output
678
+ encoder_hidden_states = encoder_hidden_states + c_gate_mlp * context_ff_output
679
+
680
+ return hidden_states, encoder_hidden_states
681
+
682
+
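+ # --- Editor's sketch (illustrative, not part of the original file) ---
+ # Dual-stream block: video and text keep separate AdaLN/feed-forward weights
+ # and only interact inside the joint attention call (an MMDiT-style design).
+ # Each branch then applies the standard gated residual update:
+ def _sketch_gated_residual():
+     hidden, attn_out = torch.ones(1, 2, 4), torch.ones(1, 2, 4)
+     gate = torch.full((1, 1, 4), 0.5)
+     assert torch.allclose(hidden + attn_out * gate, torch.full((1, 2, 4), 1.5))
+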
683
+ class ClipVisionProjection(nn.Module):
684
+ def __init__(self, in_channels, out_channels):
685
+ super().__init__()
686
+ self.up = nn.Linear(in_channels, out_channels * 3)
687
+ self.down = nn.Linear(out_channels * 3, out_channels)
688
+
689
+ def forward(self, x):
690
+ projected_x = self.down(nn.functional.silu(self.up(x)))
691
+ return projected_x
692
+
693
+
694
+ class HunyuanVideoPatchEmbed(nn.Module):
695
+ def __init__(self, patch_size, in_chans, embed_dim):
696
+ super().__init__()
697
+ self.proj = nn.Conv3d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
698
+
699
+
700
+ class HunyuanVideoPatchEmbedForCleanLatents(nn.Module):
701
+ def __init__(self, inner_dim):
702
+ super().__init__()
703
+ self.proj = nn.Conv3d(16, inner_dim, kernel_size=(1, 2, 2), stride=(1, 2, 2))
704
+ self.proj_2x = nn.Conv3d(16, inner_dim, kernel_size=(2, 4, 4), stride=(2, 4, 4))
705
+ self.proj_4x = nn.Conv3d(16, inner_dim, kernel_size=(4, 8, 8), stride=(4, 8, 8))
706
+
707
+ @torch.no_grad()
708
+ def initialize_weight_from_another_conv3d(self, another_layer):
709
+ weight = another_layer.weight.detach().clone()
710
+ bias = another_layer.bias.detach().clone()
711
+
712
+ sd = {
713
+ 'proj.weight': weight.clone(),
714
+ 'proj.bias': bias.clone(),
715
+ 'proj_2x.weight': einops.repeat(weight, 'b c t h w -> b c (t tk) (h hk) (w wk)', tk=2, hk=2, wk=2) / 8.0,
716
+ 'proj_2x.bias': bias.clone(),
717
+ 'proj_4x.weight': einops.repeat(weight, 'b c t h w -> b c (t tk) (h hk) (w wk)', tk=4, hk=4, wk=4) / 64.0,
718
+ 'proj_4x.bias': bias.clone(),
719
+ }
720
+
721
+ sd = {k: v.clone() for k, v in sd.items()}
722
+
723
+ self.load_state_dict(sd)
724
+ return
725
+
726
+
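+ # --- Editor's note (not part of the original file) ---
+ # The repeat-and-divide initialization above (weight tiled over a 2x2x2 block
+ # and divided by 8, or 4x4x4 and divided by 64) makes proj_2x / proj_4x start
+ # out equivalent to average-pooling the latents before applying the base proj
+ # kernel, giving the downsampled branches a sensible warm start.
+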
727
+ class HunyuanVideoTransformer3DModelPacked(ModelMixin, ConfigMixin, PeftAdapterMixin, FromOriginalModelMixin):
728
+ @register_to_config
729
+ def __init__(
730
+ self,
731
+ in_channels: int = 16,
732
+ out_channels: int = 16,
733
+ num_attention_heads: int = 24,
734
+ attention_head_dim: int = 128,
735
+ num_layers: int = 20,
736
+ num_single_layers: int = 40,
737
+ num_refiner_layers: int = 2,
738
+ mlp_ratio: float = 4.0,
739
+ patch_size: int = 2,
740
+ patch_size_t: int = 1,
741
+ qk_norm: str = "rms_norm",
742
+ guidance_embeds: bool = True,
743
+ text_embed_dim: int = 4096,
744
+ pooled_projection_dim: int = 768,
745
+ rope_theta: float = 256.0,
746
+ rope_axes_dim: Tuple[int, int, int] = (16, 56, 56),
747
+ has_image_proj=False,
748
+ image_proj_dim=1152,
749
+ has_clean_x_embedder=False,
750
+ ) -> None:
751
+ super().__init__()
752
+
753
+ inner_dim = num_attention_heads * attention_head_dim
754
+ out_channels = out_channels or in_channels
755
+
756
+ # 1. Latent and condition embedders
757
+ self.x_embedder = HunyuanVideoPatchEmbed((patch_size_t, patch_size, patch_size), in_channels, inner_dim)
758
+ self.context_embedder = HunyuanVideoTokenRefiner(
759
+ text_embed_dim, num_attention_heads, attention_head_dim, num_layers=num_refiner_layers
760
+ )
761
+ self.time_text_embed = CombinedTimestepGuidanceTextProjEmbeddings(inner_dim, pooled_projection_dim)
762
+
763
+ self.clean_x_embedder = None
764
+ self.image_projection = None
765
+
766
+ # 2. RoPE
767
+ self.rope = HunyuanVideoRotaryPosEmbed(rope_axes_dim, rope_theta)
768
+
769
+ # 3. Dual stream transformer blocks
770
+ self.transformer_blocks = nn.ModuleList(
771
+ [
772
+ HunyuanVideoTransformerBlock(
773
+ num_attention_heads, attention_head_dim, mlp_ratio=mlp_ratio, qk_norm=qk_norm
774
+ )
775
+ for _ in range(num_layers)
776
+ ]
777
+ )
778
+
779
+ # 4. Single stream transformer blocks
780
+ self.single_transformer_blocks = nn.ModuleList(
781
+ [
782
+ HunyuanVideoSingleTransformerBlock(
783
+ num_attention_heads, attention_head_dim, mlp_ratio=mlp_ratio, qk_norm=qk_norm
784
+ )
785
+ for _ in range(num_single_layers)
786
+ ]
787
+ )
788
+
789
+ # 5. Output projection
790
+ self.norm_out = AdaLayerNormContinuous(inner_dim, inner_dim, elementwise_affine=False, eps=1e-6)
791
+ self.proj_out = nn.Linear(inner_dim, patch_size_t * patch_size * patch_size * out_channels)
792
+
793
+ self.inner_dim = inner_dim
794
+ self.use_gradient_checkpointing = False
795
+ self.enable_teacache = False
796
+
797
+ if has_image_proj:
798
+ self.install_image_projection(image_proj_dim)
799
+
800
+ if has_clean_x_embedder:
801
+ self.install_clean_x_embedder()
802
+
803
+ self.high_quality_fp32_output_for_inference = False
804
+
805
+ def install_image_projection(self, in_channels):
806
+ self.image_projection = ClipVisionProjection(in_channels=in_channels, out_channels=self.inner_dim)
807
+ self.config['has_image_proj'] = True
808
+ self.config['image_proj_dim'] = in_channels
809
+
810
+ def install_clean_x_embedder(self):
811
+ self.clean_x_embedder = HunyuanVideoPatchEmbedForCleanLatents(self.inner_dim)
812
+ self.config['has_clean_x_embedder'] = True
813
+
814
+ def enable_gradient_checkpointing(self):
815
+ self.use_gradient_checkpointing = True
816
+ print('self.use_gradient_checkpointing = True')
817
+
818
+ def disable_gradient_checkpointing(self):
819
+ self.use_gradient_checkpointing = False
820
+ print('self.use_gradient_checkpointing = False')
821
+
822
+ def initialize_teacache(self, enable_teacache=True, num_steps=25, rel_l1_thresh=0.15):
823
+ self.enable_teacache = enable_teacache
824
+ self.cnt = 0
825
+ self.num_steps = num_steps
826
+ self.rel_l1_thresh = rel_l1_thresh # 0.1 for 1.6x speedup, 0.15 for 2.1x speedup
827
+ self.accumulated_rel_l1_distance = 0
828
+ self.previous_modulated_input = None
829
+ self.previous_residual = None
830
+ self.teacache_rescale_func = np.poly1d([7.33226126e+02, -4.01131952e+02, 6.75869174e+01, -3.14987800e+00, 9.61237896e-02])
831
+
832
+ def gradient_checkpointing_method(self, block, *args):
833
+ if self.use_gradient_checkpointing:
834
+ result = torch.utils.checkpoint.checkpoint(block, *args, use_reentrant=False)
835
+ else:
836
+ result = block(*args)
837
+ return result
838
+
839
+ def process_input_hidden_states(
840
+ self,
841
+ latents, latent_indices=None,
842
+ clean_latents=None, clean_latent_indices=None,
843
+ clean_latents_2x=None, clean_latent_2x_indices=None,
844
+ clean_latents_4x=None, clean_latent_4x_indices=None
845
+ ):
846
+ hidden_states = self.gradient_checkpointing_method(self.x_embedder.proj, latents)
847
+ B, C, T, H, W = hidden_states.shape
848
+
849
+ if latent_indices is None:
850
+ latent_indices = torch.arange(0, T).unsqueeze(0).expand(B, -1)
851
+
852
+ hidden_states = hidden_states.flatten(2).transpose(1, 2)
853
+
854
+ rope_freqs = self.rope(frame_indices=latent_indices, height=H, width=W, device=hidden_states.device)
855
+ rope_freqs = rope_freqs.flatten(2).transpose(1, 2)
856
+
857
+ if clean_latents is not None and clean_latent_indices is not None:
858
+ clean_latents = clean_latents.to(hidden_states)
859
+ clean_latents = self.gradient_checkpointing_method(self.clean_x_embedder.proj, clean_latents)
860
+ clean_latents = clean_latents.flatten(2).transpose(1, 2)
861
+
862
+ clean_latent_rope_freqs = self.rope(frame_indices=clean_latent_indices, height=H, width=W, device=clean_latents.device)
863
+ clean_latent_rope_freqs = clean_latent_rope_freqs.flatten(2).transpose(1, 2)
864
+
865
+ hidden_states = torch.cat([clean_latents, hidden_states], dim=1)
866
+ rope_freqs = torch.cat([clean_latent_rope_freqs, rope_freqs], dim=1)
867
+
868
+ if clean_latents_2x is not None and clean_latent_2x_indices is not None:
869
+ clean_latents_2x = clean_latents_2x.to(hidden_states)
870
+ clean_latents_2x = pad_for_3d_conv(clean_latents_2x, (2, 4, 4))
871
+ clean_latents_2x = self.gradient_checkpointing_method(self.clean_x_embedder.proj_2x, clean_latents_2x)
872
+ clean_latents_2x = clean_latents_2x.flatten(2).transpose(1, 2)
873
+
874
+ clean_latent_2x_rope_freqs = self.rope(frame_indices=clean_latent_2x_indices, height=H, width=W, device=clean_latents_2x.device)
875
+ clean_latent_2x_rope_freqs = pad_for_3d_conv(clean_latent_2x_rope_freqs, (2, 2, 2))
876
+ clean_latent_2x_rope_freqs = center_down_sample_3d(clean_latent_2x_rope_freqs, (2, 2, 2))
877
+ clean_latent_2x_rope_freqs = clean_latent_2x_rope_freqs.flatten(2).transpose(1, 2)
878
+
879
+ hidden_states = torch.cat([clean_latents_2x, hidden_states], dim=1)
880
+ rope_freqs = torch.cat([clean_latent_2x_rope_freqs, rope_freqs], dim=1)
881
+
882
+ if clean_latents_4x is not None and clean_latent_4x_indices is not None:
883
+ clean_latents_4x = clean_latents_4x.to(hidden_states)
884
+ clean_latents_4x = pad_for_3d_conv(clean_latents_4x, (4, 8, 8))
885
+ clean_latents_4x = self.gradient_checkpointing_method(self.clean_x_embedder.proj_4x, clean_latents_4x)
886
+ clean_latents_4x = clean_latents_4x.flatten(2).transpose(1, 2)
887
+
888
+ clean_latent_4x_rope_freqs = self.rope(frame_indices=clean_latent_4x_indices, height=H, width=W, device=clean_latents_4x.device)
889
+ clean_latent_4x_rope_freqs = pad_for_3d_conv(clean_latent_4x_rope_freqs, (4, 4, 4))
890
+ clean_latent_4x_rope_freqs = center_down_sample_3d(clean_latent_4x_rope_freqs, (4, 4, 4))
891
+ clean_latent_4x_rope_freqs = clean_latent_4x_rope_freqs.flatten(2).transpose(1, 2)
892
+
893
+ hidden_states = torch.cat([clean_latents_4x, hidden_states], dim=1)
894
+ rope_freqs = torch.cat([clean_latent_4x_rope_freqs, rope_freqs], dim=1)
895
+
896
+ return hidden_states, rope_freqs
897
+
898
+ def forward(
899
+ self,
900
+ hidden_states, timestep, encoder_hidden_states, encoder_attention_mask, pooled_projections, guidance,
901
+ latent_indices=None,
902
+ clean_latents=None, clean_latent_indices=None,
903
+ clean_latents_2x=None, clean_latent_2x_indices=None,
904
+ clean_latents_4x=None, clean_latent_4x_indices=None,
905
+ image_embeddings=None,
906
+ attention_kwargs=None, return_dict=True
907
+ ):
908
+
909
+ if attention_kwargs is None:
910
+ attention_kwargs = {}
911
+
912
+ batch_size, num_channels, num_frames, height, width = hidden_states.shape
913
+ p, p_t = self.config['patch_size'], self.config['patch_size_t']
914
+ post_patch_num_frames = num_frames // p_t
915
+ post_patch_height = height // p
916
+ post_patch_width = width // p
917
+ original_context_length = post_patch_num_frames * post_patch_height * post_patch_width
918
+
919
+ hidden_states, rope_freqs = self.process_input_hidden_states(hidden_states, latent_indices, clean_latents, clean_latent_indices, clean_latents_2x, clean_latent_2x_indices, clean_latents_4x, clean_latent_4x_indices)
920
+
921
+ temb = self.gradient_checkpointing_method(self.time_text_embed, timestep, guidance, pooled_projections)
922
+ encoder_hidden_states = self.gradient_checkpointing_method(self.context_embedder, encoder_hidden_states, timestep, encoder_attention_mask)
923
+
924
+ if self.image_projection is not None:
925
+ assert image_embeddings is not None, 'You must use image embeddings!'
926
+ extra_encoder_hidden_states = self.gradient_checkpointing_method(self.image_projection, image_embeddings)
927
+ extra_attention_mask = torch.ones((batch_size, extra_encoder_hidden_states.shape[1]), dtype=encoder_attention_mask.dtype, device=encoder_attention_mask.device)
928
+
929
+ # must cat before (not after) encoder_hidden_states, due to attn masking
930
+ encoder_hidden_states = torch.cat([extra_encoder_hidden_states, encoder_hidden_states], dim=1)
931
+ encoder_attention_mask = torch.cat([extra_attention_mask, encoder_attention_mask], dim=1)
932
+
933
+ if batch_size == 1:
934
+ # When batch size is 1, we do not need any masks or var-len attention functions, since cropping is mathematically the same as what we want
935
+ # If they are not the same, then their implementations are wrong. Ours is always the correct one.
936
+ text_len = encoder_attention_mask.sum().item()
937
+ encoder_hidden_states = encoder_hidden_states[:, :text_len]
938
+ attention_mask = None, None, None, None  # 4-tuple matching (cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv) below
939
+ else:
940
+ img_seq_len = hidden_states.shape[1]
941
+ txt_seq_len = encoder_hidden_states.shape[1]
942
+
943
+ cu_seqlens_q = get_cu_seqlens(encoder_attention_mask, img_seq_len)
944
+ cu_seqlens_kv = cu_seqlens_q
945
+ max_seqlen_q = img_seq_len + txt_seq_len
946
+ max_seqlen_kv = max_seqlen_q
947
+
948
+ attention_mask = cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv
949
+
950
+ if self.enable_teacache:
951
+ modulated_inp = self.transformer_blocks[0].norm1(hidden_states, emb=temb)[0]
952
+
953
+ if self.cnt == 0 or self.cnt == self.num_steps-1:
954
+ should_calc = True
955
+ self.accumulated_rel_l1_distance = 0
956
+ else:
957
+ curr_rel_l1 = ((modulated_inp - self.previous_modulated_input).abs().mean() / self.previous_modulated_input.abs().mean()).cpu().item()
958
+ self.accumulated_rel_l1_distance += self.teacache_rescale_func(curr_rel_l1)
959
+ should_calc = self.accumulated_rel_l1_distance >= self.rel_l1_thresh
960
+
961
+ if should_calc:
962
+ self.accumulated_rel_l1_distance = 0
963
+
964
+ self.previous_modulated_input = modulated_inp
965
+ self.cnt += 1
966
+
967
+ if self.cnt == self.num_steps:
968
+ self.cnt = 0
969
+
970
+ if not should_calc:
971
+ hidden_states = hidden_states + self.previous_residual
972
+ else:
973
+ ori_hidden_states = hidden_states.clone()
974
+
975
+ for block_id, block in enumerate(self.transformer_blocks):
976
+ hidden_states, encoder_hidden_states = self.gradient_checkpointing_method(
977
+ block,
978
+ hidden_states,
979
+ encoder_hidden_states,
980
+ temb,
981
+ attention_mask,
982
+ rope_freqs
983
+ )
984
+
985
+ for block_id, block in enumerate(self.single_transformer_blocks):
986
+ hidden_states, encoder_hidden_states = self.gradient_checkpointing_method(
987
+ block,
988
+ hidden_states,
989
+ encoder_hidden_states,
990
+ temb,
991
+ attention_mask,
992
+ rope_freqs
993
+ )
994
+
995
+ self.previous_residual = hidden_states - ori_hidden_states
996
+ else:
997
+ for block_id, block in enumerate(self.transformer_blocks):
998
+ hidden_states, encoder_hidden_states = self.gradient_checkpointing_method(
999
+ block,
1000
+ hidden_states,
1001
+ encoder_hidden_states,
1002
+ temb,
1003
+ attention_mask,
1004
+ rope_freqs
1005
+ )
1006
+
1007
+ for block_id, block in enumerate(self.single_transformer_blocks):
1008
+ hidden_states, encoder_hidden_states = self.gradient_checkpointing_method(
1009
+ block,
1010
+ hidden_states,
1011
+ encoder_hidden_states,
1012
+ temb,
1013
+ attention_mask,
1014
+ rope_freqs
1015
+ )
1016
+
1017
+ hidden_states = self.gradient_checkpointing_method(self.norm_out, hidden_states, temb)
1018
+
1019
+ hidden_states = hidden_states[:, -original_context_length:, :]
1020
+
1021
+ if self.high_quality_fp32_output_for_inference:
1022
+ hidden_states = hidden_states.to(dtype=torch.float32)
1023
+ if self.proj_out.weight.dtype != torch.float32:
1024
+ self.proj_out.to(dtype=torch.float32)
1025
+
1026
+ hidden_states = self.gradient_checkpointing_method(self.proj_out, hidden_states)
1027
+
1028
+ hidden_states = einops.rearrange(hidden_states, 'b (t h w) (c pt ph pw) -> b c (t pt) (h ph) (w pw)',
1029
+ t=post_patch_num_frames, h=post_patch_height, w=post_patch_width,
1030
+ pt=p_t, ph=p, pw=p)
1031
+
1032
+ if return_dict:
1033
+ return Transformer2DModelOutput(sample=hidden_states)
1034
+
1035
+ return hidden_states,
diffusers_helper/pipelines/k_diffusion_hunyuan.py ADDED
@@ -0,0 +1,120 @@
1
+ import torch
2
+ import math
3
+
4
+ from diffusers_helper.k_diffusion.uni_pc_fm import sample_unipc
5
+ from diffusers_helper.k_diffusion.wrapper import fm_wrapper
6
+ from diffusers_helper.utils import repeat_to_batch_size
7
+
8
+
9
+ def flux_time_shift(t, mu=1.15, sigma=1.0):
10
+ return math.exp(mu) / (math.exp(mu) + (1 / t - 1) ** sigma)
11
+
12
+
13
+ def calculate_flux_mu(context_length, x1=256, y1=0.5, x2=4096, y2=1.15, exp_max=7.0):
14
+ k = (y2 - y1) / (x2 - x1)
15
+ b = y1 - k * x1
16
+ mu = k * context_length + b
17
+ mu = min(mu, math.log(exp_max))
18
+ return mu
19
+
20
+
21
+ def get_flux_sigmas_from_mu(n, mu):
22
+ sigmas = torch.linspace(1, 0, steps=n + 1)
23
+ sigmas = flux_time_shift(sigmas, mu=mu)
24
+ return sigmas
25
+
26
+
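+ # --- Editor's sketch (illustrative, not part of the original file) ---
+ # flux_time_shift warps a uniform [1, 0] sigma grid; larger mu keeps more of
+ # the schedule at high noise, and mu grows (clamped) with context length.
+ def _sketch_flux_schedule():
+     mu = calculate_flux_mu(context_length=4096)       # reaches the y2 = 1.15 endpoint
+     sigmas = get_flux_sigmas_from_mu(n=4, mu=mu)      # 5 values from 1.0 down to 0.0
+     assert abs(mu - 1.15) < 1e-6
+     assert sigmas[0].item() == 1.0 and sigmas[-1].item() == 0.0
+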
27
+ @torch.inference_mode()
28
+ def sample_hunyuan(
29
+ transformer,
30
+ sampler='unipc',
31
+ initial_latent=None,
32
+ concat_latent=None,
33
+ strength=1.0,
34
+ width=512,
35
+ height=512,
36
+ frames=16,
37
+ real_guidance_scale=1.0,
38
+ distilled_guidance_scale=6.0,
39
+ guidance_rescale=0.0,
40
+ shift=None,
41
+ num_inference_steps=25,
42
+ batch_size=None,
43
+ generator=None,
44
+ prompt_embeds=None,
45
+ prompt_embeds_mask=None,
46
+ prompt_poolers=None,
47
+ negative_prompt_embeds=None,
48
+ negative_prompt_embeds_mask=None,
49
+ negative_prompt_poolers=None,
50
+ dtype=torch.bfloat16,
51
+ device=None,
52
+ negative_kwargs=None,
53
+ callback=None,
54
+ **kwargs,
55
+ ):
56
+ device = device or transformer.device
57
+
58
+ if batch_size is None:
59
+ batch_size = int(prompt_embeds.shape[0])
60
+
61
+ latents = torch.randn((batch_size, 16, (frames + 3) // 4, height // 8, width // 8), generator=generator, device=generator.device if generator is not None else device).to(device=device, dtype=torch.float32)
62
+
63
+ B, C, T, H, W = latents.shape
64
+ seq_length = T * H * W // 4
65
+
66
+ if shift is None:
67
+ mu = calculate_flux_mu(seq_length, exp_max=7.0)
68
+ else:
69
+ mu = math.log(shift)
70
+
71
+ sigmas = get_flux_sigmas_from_mu(num_inference_steps, mu).to(device)
72
+
73
+ k_model = fm_wrapper(transformer)
74
+
75
+ if initial_latent is not None:
76
+ sigmas = sigmas * strength
77
+ first_sigma = sigmas[0].to(device=device, dtype=torch.float32)
78
+ initial_latent = initial_latent.to(device=device, dtype=torch.float32)
79
+ latents = initial_latent.float() * (1.0 - first_sigma) + latents.float() * first_sigma
80
+
81
+ if concat_latent is not None:
82
+ concat_latent = concat_latent.to(latents)
83
+
84
+ distilled_guidance = torch.tensor([distilled_guidance_scale * 1000.0] * batch_size).to(device=device, dtype=dtype)
85
+
86
+ prompt_embeds = repeat_to_batch_size(prompt_embeds, batch_size)
87
+ prompt_embeds_mask = repeat_to_batch_size(prompt_embeds_mask, batch_size)
88
+ prompt_poolers = repeat_to_batch_size(prompt_poolers, batch_size)
89
+ negative_prompt_embeds = repeat_to_batch_size(negative_prompt_embeds, batch_size)
90
+ negative_prompt_embeds_mask = repeat_to_batch_size(negative_prompt_embeds_mask, batch_size)
91
+ negative_prompt_poolers = repeat_to_batch_size(negative_prompt_poolers, batch_size)
92
+ concat_latent = repeat_to_batch_size(concat_latent, batch_size)
93
+
94
+ sampler_kwargs = dict(
95
+ dtype=dtype,
96
+ cfg_scale=real_guidance_scale,
97
+ cfg_rescale=guidance_rescale,
98
+ concat_latent=concat_latent,
99
+ positive=dict(
100
+ pooled_projections=prompt_poolers,
101
+ encoder_hidden_states=prompt_embeds,
102
+ encoder_attention_mask=prompt_embeds_mask,
103
+ guidance=distilled_guidance,
104
+ **kwargs,
105
+ ),
106
+ negative=dict(
107
+ pooled_projections=negative_prompt_poolers,
108
+ encoder_hidden_states=negative_prompt_embeds,
109
+ encoder_attention_mask=negative_prompt_embeds_mask,
110
+ guidance=distilled_guidance,
111
+ **(kwargs if negative_kwargs is None else {**kwargs, **negative_kwargs}),
112
+ )
113
+ )
114
+
115
+ if sampler == 'unipc':
116
+ results = sample_unipc(k_model, latents, sigmas, extra_args=sampler_kwargs, disable=False, callback=callback)
117
+ else:
118
+ raise NotImplementedError(f'Sampler {sampler} is not supported.')
119
+
120
+ return results
diffusers_helper/thread_utils.py ADDED
@@ -0,0 +1,76 @@
1
+ import time
2
+
3
+ from threading import Thread, Lock
4
+
5
+
6
+ class Listener:
7
+ task_queue = []
8
+ lock = Lock()
9
+ thread = None
10
+
11
+ @classmethod
12
+ def _process_tasks(cls):
13
+ while True:
14
+ task = None
15
+ with cls.lock:
16
+ if cls.task_queue:
17
+ task = cls.task_queue.pop(0)
18
+
19
+ if task is None:
20
+ time.sleep(0.001)
21
+ continue
22
+
23
+ func, args, kwargs = task
24
+ try:
25
+ func(*args, **kwargs)
26
+ except Exception as e:
27
+ print(f"Error in listener thread: {e}")
28
+
29
+ @classmethod
30
+ def add_task(cls, func, *args, **kwargs):
31
+ with cls.lock:
32
+ cls.task_queue.append((func, args, kwargs))
33
+
34
+ if cls.thread is None:
35
+ cls.thread = Thread(target=cls._process_tasks, daemon=True)
36
+ cls.thread.start()
37
+
38
+
39
+ def async_run(func, *args, **kwargs):
40
+ Listener.add_task(func, *args, **kwargs)
41
+
42
+
43
+ class FIFOQueue:
44
+ def __init__(self):
45
+ self.queue = []
46
+ self.lock = Lock()
47
+
48
+ def push(self, item):
49
+ with self.lock:
50
+ self.queue.append(item)
51
+
52
+ def pop(self):
53
+ with self.lock:
54
+ if self.queue:
55
+ return self.queue.pop(0)
56
+ return None
57
+
58
+ def top(self):
59
+ with self.lock:
60
+ if self.queue:
61
+ return self.queue[0]
62
+ return None
63
+
64
+ def next(self):
65
+ while True:
66
+ with self.lock:
67
+ if self.queue:
68
+ return self.queue.pop(0)
69
+
70
+ time.sleep(0.001)
71
+
72
+
73
+ class AsyncStream:
74
+ def __init__(self):
75
+ self.input_queue = FIFOQueue()
76
+ self.output_queue = FIFOQueue()
diffusers_helper/utils.py ADDED
@@ -0,0 +1,613 @@
1
+ import os
2
+ import cv2
3
+ import json
4
+ import random
5
+ import glob
6
+ import torch
7
+ import einops
8
+ import numpy as np
9
+ import datetime
10
+ import torchvision
11
+
12
+ import safetensors.torch as sf
13
+ from PIL import Image
14
+
15
+
16
+ def min_resize(x, m):
17
+ if x.shape[0] < x.shape[1]:
18
+ s0 = m
19
+ s1 = int(float(m) / float(x.shape[0]) * float(x.shape[1]))
20
+ else:
21
+ s0 = int(float(m) / float(x.shape[1]) * float(x.shape[0]))
22
+ s1 = m
23
+ new_max = max(s1, s0)
24
+ raw_max = max(x.shape[0], x.shape[1])
25
+ if new_max < raw_max:
26
+ interpolation = cv2.INTER_AREA
27
+ else:
28
+ interpolation = cv2.INTER_LANCZOS4
29
+ y = cv2.resize(x, (s1, s0), interpolation=interpolation)
30
+ return y
31
+
32
+
33
+ def d_resize(x, y):
34
+ H, W, C = y.shape
35
+ new_min = min(H, W)
36
+ raw_min = min(x.shape[0], x.shape[1])
37
+ if new_min < raw_min:
38
+ interpolation = cv2.INTER_AREA
39
+ else:
40
+ interpolation = cv2.INTER_LANCZOS4
41
+ y = cv2.resize(x, (W, H), interpolation=interpolation)
42
+ return y
43
+
44
+
45
+ def resize_and_center_crop(image, target_width, target_height):
46
+ if target_height == image.shape[0] and target_width == image.shape[1]:
47
+ return image
48
+
49
+ pil_image = Image.fromarray(image)
50
+ original_width, original_height = pil_image.size
51
+ scale_factor = max(target_width / original_width, target_height / original_height)
52
+ resized_width = int(round(original_width * scale_factor))
53
+ resized_height = int(round(original_height * scale_factor))
54
+ resized_image = pil_image.resize((resized_width, resized_height), Image.LANCZOS)
55
+ left = (resized_width - target_width) / 2
56
+ top = (resized_height - target_height) / 2
57
+ right = (resized_width + target_width) / 2
58
+ bottom = (resized_height + target_height) / 2
59
+ cropped_image = resized_image.crop((left, top, right, bottom))
60
+ return np.array(cropped_image)
61
+
62
+
63
+ def resize_and_center_crop_pytorch(image, target_width, target_height):
64
+ B, C, H, W = image.shape
65
+
66
+ if H == target_height and W == target_width:
67
+ return image
68
+
69
+ scale_factor = max(target_width / W, target_height / H)
70
+ resized_width = int(round(W * scale_factor))
71
+ resized_height = int(round(H * scale_factor))
72
+
73
+ resized = torch.nn.functional.interpolate(image, size=(resized_height, resized_width), mode='bilinear', align_corners=False)
74
+
75
+ top = (resized_height - target_height) // 2
76
+ left = (resized_width - target_width) // 2
77
+ cropped = resized[:, :, top:top + target_height, left:left + target_width]
78
+
79
+ return cropped
80
+
81
+
82
+ def resize_without_crop(image, target_width, target_height):
83
+ if target_height == image.shape[0] and target_width == image.shape[1]:
84
+ return image
85
+
86
+ pil_image = Image.fromarray(image)
87
+ resized_image = pil_image.resize((target_width, target_height), Image.LANCZOS)
88
+ return np.array(resized_image)
89
+
90
+
91
+ def just_crop(image, w, h):
92
+ if h == image.shape[0] and w == image.shape[1]:
93
+ return image
94
+
95
+ original_height, original_width = image.shape[:2]
96
+ k = min(original_height / h, original_width / w)
97
+ new_width = int(round(w * k))
98
+ new_height = int(round(h * k))
99
+ x_start = (original_width - new_width) // 2
100
+ y_start = (original_height - new_height) // 2
101
+ cropped_image = image[y_start:y_start + new_height, x_start:x_start + new_width]
102
+ return cropped_image
103
+
104
+
105
+ def write_to_json(data, file_path):
106
+ temp_file_path = file_path + ".tmp"
107
+ with open(temp_file_path, 'wt', encoding='utf-8') as temp_file:
108
+ json.dump(data, temp_file, indent=4)
109
+ os.replace(temp_file_path, file_path)
110
+ return
111
+
112
+
113
+ def read_from_json(file_path):
114
+ with open(file_path, 'rt', encoding='utf-8') as file:
115
+ data = json.load(file)
116
+ return data
117
+
118
+
119
+ def get_active_parameters(m):
120
+ return {k: v for k, v in m.named_parameters() if v.requires_grad}
121
+
122
+
123
+ def cast_training_params(m, dtype=torch.float32):
124
+ result = {}
125
+ for n, param in m.named_parameters():
126
+ if param.requires_grad:
127
+ param.data = param.to(dtype)
128
+ result[n] = param
129
+ return result
130
+
131
+
132
+ def separate_lora_AB(parameters, B_patterns=None):
133
+ parameters_normal = {}
134
+ parameters_B = {}
135
+
136
+ if B_patterns is None:
137
+ B_patterns = ['.lora_B.', '__zero__']
138
+
139
+ for k, v in parameters.items():
140
+ if any(B_pattern in k for B_pattern in B_patterns):
141
+ parameters_B[k] = v
142
+ else:
143
+ parameters_normal[k] = v
144
+
145
+ return parameters_normal, parameters_B
146
+
147
+
148
+ def set_attr_recursive(obj, attr, value):
149
+ attrs = attr.split(".")
150
+ for name in attrs[:-1]:
151
+ obj = getattr(obj, name)
152
+ setattr(obj, attrs[-1], value)
153
+ return
154
+
155
+
156
+ def print_tensor_list_size(tensors):
157
+ total_size = 0
158
+ total_elements = 0
159
+
160
+ if isinstance(tensors, dict):
161
+ tensors = tensors.values()
162
+
163
+ for tensor in tensors:
164
+ total_size += tensor.nelement() * tensor.element_size()
165
+ total_elements += tensor.nelement()
166
+
167
+ total_size_MB = total_size / (1024 ** 2)
168
+ total_elements_B = total_elements / 1e9
169
+
170
+ print(f"Total number of tensors: {len(tensors)}")
171
+ print(f"Total size of tensors: {total_size_MB:.2f} MB")
172
+ print(f"Total number of parameters: {total_elements_B:.3f} billion")
173
+ return
174
+
175
+
176
+ @torch.no_grad()
177
+ def batch_mixture(a, b=None, probability_a=0.5, mask_a=None):
178
+ batch_size = a.size(0)
179
+
180
+ if b is None:
181
+ b = torch.zeros_like(a)
182
+
183
+ if mask_a is None:
184
+ mask_a = torch.rand(batch_size) < probability_a
185
+
186
+ mask_a = mask_a.to(a.device)
187
+ mask_a = mask_a.reshape((batch_size,) + (1,) * (a.dim() - 1))
188
+ result = torch.where(mask_a, a, b)
189
+ return result
190
+
191
+
192
+ @torch.no_grad()
193
+ def zero_module(module):
194
+ for p in module.parameters():
195
+ p.detach().zero_()
196
+ return module
197
+
198
+
199
+ @torch.no_grad()
200
+ def supress_lower_channels(m, k, alpha=0.01):
201
+ data = m.weight.data.clone()
202
+
203
+ assert int(data.shape[1]) >= k
204
+
205
+ data[:, :k] = data[:, :k] * alpha
206
+ m.weight.data = data.contiguous().clone()
207
+ return m
208
+
209
+
210
+ def freeze_module(m):
211
+ if not hasattr(m, '_forward_inside_frozen_module'):
212
+ m._forward_inside_frozen_module = m.forward
213
+ m.requires_grad_(False)
214
+ m.forward = torch.no_grad()(m.forward)
215
+ return m
216
+
217
+
218
+ def get_latest_safetensors(folder_path):
219
+ safetensors_files = glob.glob(os.path.join(folder_path, '*.safetensors'))
220
+
221
+ if not safetensors_files:
222
+ raise ValueError('No file to resume!')
223
+
224
+ latest_file = max(safetensors_files, key=os.path.getmtime)
225
+ latest_file = os.path.abspath(os.path.realpath(latest_file))
226
+ return latest_file
227
+
228
+
229
+ def generate_random_prompt_from_tags(tags_str, min_length=3, max_length=32):
230
+ tags = tags_str.split(', ')
231
+ tags = random.sample(tags, k=min(random.randint(min_length, max_length), len(tags)))
232
+ prompt = ', '.join(tags)
233
+ return prompt
234
+
235
+
236
+ def interpolate_numbers(a, b, n, round_to_int=False, gamma=1.0):
237
+ numbers = a + (b - a) * (np.linspace(0, 1, n) ** gamma)
238
+ if round_to_int:
239
+ numbers = np.round(numbers).astype(int)
240
+ return numbers.tolist()
241
+
242
+
243
+ def uniform_random_by_intervals(inclusive, exclusive, n, round_to_int=False):
244
+ edges = np.linspace(0, 1, n + 1)
245
+ points = np.random.uniform(edges[:-1], edges[1:])
246
+ numbers = inclusive + (exclusive - inclusive) * points
247
+ if round_to_int:
248
+ numbers = np.round(numbers).astype(int)
249
+ return numbers.tolist()
250
+
251
+
252
+ def soft_append_bcthw(history, current, overlap=0):
253
+ if overlap <= 0:
254
+ return torch.cat([history, current], dim=2)
255
+
256
+ assert history.shape[2] >= overlap, f"History length ({history.shape[2]}) must be >= overlap ({overlap})"
257
+ assert current.shape[2] >= overlap, f"Current length ({current.shape[2]}) must be >= overlap ({overlap})"
258
+
259
+ weights = torch.linspace(1, 0, overlap, dtype=history.dtype, device=history.device).view(1, 1, -1, 1, 1)
260
+ blended = weights * history[:, :, -overlap:] + (1 - weights) * current[:, :, :overlap]
261
+ output = torch.cat([history[:, :, :-overlap], blended, current[:, :, overlap:]], dim=2)
262
+
263
+ return output.to(history)
264
+
265
+
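+ # --- Editor's sketch (illustrative, not part of the original file) ---
+ # soft_append_bcthw linearly crossfades `overlap` frames from history into
+ # current along the time axis, so segment seams blend instead of cutting.
+ def _sketch_soft_append():
+     history = torch.ones(1, 1, 4, 1, 1)
+     current = torch.zeros(1, 1, 4, 1, 1)
+     out = soft_append_bcthw(history, current, overlap=2)
+     assert out.shape[2] == 6                    # 4 + 4 - 2 frames
+     assert out[0, 0, :, 0, 0].tolist() == [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
+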
266
+ def save_bcthw_as_mp4(x, output_filename, fps=10, crf=0):
267
+ b, c, t, h, w = x.shape
268
+
269
+ per_row = b
270
+ for p in [6, 5, 4, 3, 2]:
271
+ if b % p == 0:
272
+ per_row = p
273
+ break
274
+
275
+ os.makedirs(os.path.dirname(os.path.abspath(os.path.realpath(output_filename))), exist_ok=True)
276
+ x = torch.clamp(x.float(), -1., 1.) * 127.5 + 127.5
277
+ x = x.detach().cpu().to(torch.uint8)
278
+ x = einops.rearrange(x, '(m n) c t h w -> t (m h) (n w) c', n=per_row)
279
+ torchvision.io.write_video(output_filename, x, fps=fps, video_codec='libx264', options={'crf': str(int(crf))})
280
+ return x
281
+
282
+
283
+ def save_bcthw_as_png(x, output_filename):
284
+ os.makedirs(os.path.dirname(os.path.abspath(os.path.realpath(output_filename))), exist_ok=True)
285
+ x = torch.clamp(x.float(), -1., 1.) * 127.5 + 127.5
286
+ x = x.detach().cpu().to(torch.uint8)
287
+ x = einops.rearrange(x, 'b c t h w -> c (b h) (t w)')
288
+ torchvision.io.write_png(x, output_filename)
289
+ return output_filename
290
+
291
+
292
+ def save_bchw_as_png(x, output_filename):
293
+ os.makedirs(os.path.dirname(os.path.abspath(os.path.realpath(output_filename))), exist_ok=True)
294
+ x = torch.clamp(x.float(), -1., 1.) * 127.5 + 127.5
295
+ x = x.detach().cpu().to(torch.uint8)
296
+ x = einops.rearrange(x, 'b c h w -> c h (b w)')
297
+ torchvision.io.write_png(x, output_filename)
298
+ return output_filename
299
+
300
+
301
+ def add_tensors_with_padding(tensor1, tensor2):
302
+ if tensor1.shape == tensor2.shape:
303
+ return tensor1 + tensor2
304
+
305
+ shape1 = tensor1.shape
306
+ shape2 = tensor2.shape
307
+
308
+ new_shape = tuple(max(s1, s2) for s1, s2 in zip(shape1, shape2))
309
+
310
+ padded_tensor1 = torch.zeros(new_shape)
311
+ padded_tensor2 = torch.zeros(new_shape)
312
+
313
+ padded_tensor1[tuple(slice(0, s) for s in shape1)] = tensor1
314
+ padded_tensor2[tuple(slice(0, s) for s in shape2)] = tensor2
315
+
316
+ result = padded_tensor1 + padded_tensor2
317
+ return result
318
+
319
+
320
+ def print_free_mem():
321
+ torch.cuda.empty_cache()
322
+ free_mem, total_mem = torch.cuda.mem_get_info(0)
323
+ free_mem_mb = free_mem / (1024 ** 2)
324
+ total_mem_mb = total_mem / (1024 ** 2)
325
+ print(f"Free memory: {free_mem_mb:.2f} MB")
326
+ print(f"Total memory: {total_mem_mb:.2f} MB")
327
+ return
328
+
329
+
330
+ def print_gpu_parameters(device, state_dict, log_count=1):
331
+ summary = {"device": device, "keys_count": len(state_dict)}
332
+
333
+ logged_params = {}
334
+ for i, (key, tensor) in enumerate(state_dict.items()):
335
+ if i >= log_count:
336
+ break
337
+ logged_params[key] = tensor.flatten()[:3].tolist()
338
+
339
+ summary["params"] = logged_params
340
+
341
+ print(str(summary))
342
+ return
343
+
344
+
345
+ def visualize_txt_as_img(width, height, text, font_path='font/DejaVuSans.ttf', size=18):
346
+ from PIL import Image, ImageDraw, ImageFont
347
+
348
+ txt = Image.new("RGB", (width, height), color="white")
349
+ draw = ImageDraw.Draw(txt)
350
+ font = ImageFont.truetype(font_path, size=size)
351
+
352
+ if text == '':
353
+ return np.array(txt)
354
+
355
+ # Split text into lines that fit within the image width
356
+ lines = []
357
+ words = text.split()
358
+ current_line = words[0]
359
+
360
+ for word in words[1:]:
361
+ line_with_word = f"{current_line} {word}"
362
+ if draw.textbbox((0, 0), line_with_word, font=font)[2] <= width:
363
+ current_line = line_with_word
364
+ else:
365
+ lines.append(current_line)
366
+ current_line = word
367
+
368
+ lines.append(current_line)
369
+
370
+ # Draw the text line by line
371
+ y = 0
372
+ line_height = draw.textbbox((0, 0), "A", font=font)[3]
373
+
374
+ for line in lines:
375
+ if y + line_height > height:
376
+ break # stop drawing if the next line will be outside the image
377
+ draw.text((0, y), line, fill="black", font=font)
378
+ y += line_height
379
+
380
+ return np.array(txt)
381
+
382
+
383
+ def blue_mark(x):
384
+ x = x.copy()
385
+ c = x[:, :, 2]
386
+ b = cv2.blur(c, (9, 9))
387
+ x[:, :, 2] = ((c - b) * 16.0 + b).clip(-1, 1)
388
+ return x
389
+
390
+
391
+ def green_mark(x):
392
+ x = x.copy()
393
+ x[:, :, 2] = -1
394
+ x[:, :, 0] = -1
395
+ return x
396
+
397
+
398
+ def frame_mark(x):
399
+ x = x.copy()
400
+ x[:64] = -1
401
+ x[-64:] = -1
402
+ x[:, :8] = 1
403
+ x[:, -8:] = 1
404
+ return x
405
+
406
+
407
+ @torch.inference_mode()
408
+ def pytorch2numpy(imgs):
409
+ results = []
410
+ for x in imgs:
411
+ y = x.movedim(0, -1)
412
+ y = y * 127.5 + 127.5
413
+ y = y.detach().float().cpu().numpy().clip(0, 255).astype(np.uint8)
414
+ results.append(y)
415
+ return results
416
+
417
+
418
+ @torch.inference_mode()
419
+ def numpy2pytorch(imgs):
420
+ h = torch.from_numpy(np.stack(imgs, axis=0)).float() / 127.5 - 1.0
421
+ h = h.movedim(-1, 1)
422
+ return h
423
+
424
+
425
+ @torch.no_grad()
426
+ def duplicate_prefix_to_suffix(x, count, zero_out=False):
427
+ if zero_out:
428
+ return torch.cat([x, torch.zeros_like(x[:count])], dim=0)
429
+ else:
430
+ return torch.cat([x, x[:count]], dim=0)
431
+
432
+
433
+ def weighted_mse(a, b, weight):
434
+ return torch.mean(weight.float() * (a.float() - b.float()) ** 2)
435
+
436
+
437
+ def clamped_linear_interpolation(x, x_min, y_min, x_max, y_max, sigma=1.0):
438
+ x = (x - x_min) / (x_max - x_min)
439
+ x = max(0.0, min(x, 1.0))
440
+ x = x ** sigma
441
+ return y_min + x * (y_max - y_min)
442
+
443
+
444
+ def expand_to_dims(x, target_dims):
445
+ return x.view(*x.shape, *([1] * max(0, target_dims - x.dim())))
446
+
447
+
448
+ def repeat_to_batch_size(tensor: torch.Tensor, batch_size: int):
449
+ if tensor is None:
450
+ return None
451
+
452
+ first_dim = tensor.shape[0]
453
+
454
+ if first_dim == batch_size:
455
+ return tensor
456
+
457
+ if batch_size % first_dim != 0:
458
+ raise ValueError(f"Cannot evenly repeat first dim {first_dim} to match batch_size {batch_size}.")
459
+
460
+ repeat_times = batch_size // first_dim
461
+
462
+ return tensor.repeat(repeat_times, *[1] * (tensor.dim() - 1))
463
+
464
+
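+ # --- Editor's sketch (illustrative, not part of the original file) ---
+ # repeat_to_batch_size tiles along dim 0 when batch_size is a clean multiple
+ # of the current first dim; otherwise it raises instead of silently padding.
+ def _sketch_repeat_to_batch_size():
+     assert repeat_to_batch_size(torch.zeros(2, 3), 4).shape == (4, 3)
+     assert repeat_to_batch_size(None, 4) is None
+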
465
+ def dim5(x):
466
+ return expand_to_dims(x, 5)
467
+
468
+
469
+ def dim4(x):
470
+ return expand_to_dims(x, 4)
471
+
472
+
473
+ def dim3(x):
474
+ return expand_to_dims(x, 3)
475
+
476
+
477
+ def crop_or_pad_yield_mask(x, length):
478
+ B, F, C = x.shape
479
+ device = x.device
480
+ dtype = x.dtype
481
+
482
+ if F < length:
483
+ y = torch.zeros((B, length, C), dtype=dtype, device=device)
484
+ mask = torch.zeros((B, length), dtype=torch.bool, device=device)
485
+ y[:, :F, :] = x
486
+ mask[:, :F] = True
487
+ return y, mask
488
+
489
+ return x[:, :length, :], torch.ones((B, length), dtype=torch.bool, device=device)
490
+
491
+
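+ # --- Editor's sketch (illustrative, not part of the original file) ---
+ # crop_or_pad_yield_mask normalizes (B, F, C) sequences to a fixed length and
+ # returns a boolean validity mask alongside the padded or cropped tensor.
+ def _sketch_crop_or_pad():
+     y, mask = crop_or_pad_yield_mask(torch.randn(1, 3, 2), length=5)
+     assert y.shape == (1, 5, 2)
+     assert mask.tolist() == [[True, True, True, False, False]]
+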
492
+ def extend_dim(x, dim, minimal_length, zero_pad=False):
493
+ original_length = int(x.shape[dim])
494
+
495
+ if original_length >= minimal_length:
496
+ return x
497
+
498
+ if zero_pad:
499
+ padding_shape = list(x.shape)
500
+ padding_shape[dim] = minimal_length - original_length
501
+ padding = torch.zeros(padding_shape, dtype=x.dtype, device=x.device)
502
+ else:
503
+ idx = (slice(None),) * dim + (slice(-1, None),) + (slice(None),) * (len(x.shape) - dim - 1)
504
+ last_element = x[idx]
505
+ padding = last_element.repeat_interleave(minimal_length - original_length, dim=dim)
506
+
507
+ return torch.cat([x, padding], dim=dim)
508
+
509
+
510
+ def lazy_positional_encoding(t, repeats=None):
511
+ if not isinstance(t, list):
512
+ t = [t]
513
+
514
+ from diffusers.models.embeddings import get_timestep_embedding
515
+
516
+ te = torch.tensor(t)
517
+ te = get_timestep_embedding(timesteps=te, embedding_dim=256, flip_sin_to_cos=True, downscale_freq_shift=0.0, scale=1.0)
518
+
519
+ if repeats is None:
520
+ return te
521
+
522
+ te = te[:, None, :].expand(-1, repeats, -1)
523
+
524
+ return te
525
+
526
+
527
+ def state_dict_offset_merge(A, B, C=None):
528
+ result = {}
529
+ keys = A.keys()
530
+
531
+ for key in keys:
532
+ A_value = A[key]
533
+ B_value = B[key].to(A_value)
534
+
535
+ if C is None:
536
+ result[key] = A_value + B_value
537
+ else:
538
+ C_value = C[key].to(A_value)
539
+ result[key] = A_value + B_value - C_value
540
+
541
+ return result
542
+
543
+
544
+ def state_dict_weighted_merge(state_dicts, weights):
545
+ if len(state_dicts) != len(weights):
546
+ raise ValueError("Number of state dictionaries must match number of weights")
547
+
548
+ if not state_dicts:
549
+ return {}
550
+
551
+ total_weight = sum(weights)
552
+
553
+ if total_weight == 0:
554
+ raise ValueError("Sum of weights cannot be zero")
555
+
556
+ normalized_weights = [w / total_weight for w in weights]
557
+
558
+ keys = state_dicts[0].keys()
559
+ result = {}
560
+
561
+ for key in keys:
562
+ result[key] = state_dicts[0][key] * normalized_weights[0]
563
+
564
+ for i in range(1, len(state_dicts)):
565
+ state_dict_value = state_dicts[i][key].to(result[key])
566
+ result[key] += state_dict_value * normalized_weights[i]
567
+
568
+ return result
569
+
570
+
571
+ def group_files_by_folder(all_files):
572
+ grouped_files = {}
573
+
574
+ for file in all_files:
575
+ folder_name = os.path.basename(os.path.dirname(file))
576
+ if folder_name not in grouped_files:
577
+ grouped_files[folder_name] = []
578
+ grouped_files[folder_name].append(file)
579
+
580
+ list_of_lists = list(grouped_files.values())
581
+ return list_of_lists
582
+
583
+
584
+ def generate_timestamp():
585
+ now = datetime.datetime.now()
586
+ timestamp = now.strftime('%y%m%d_%H%M%S')
587
+ milliseconds = f"{int(now.microsecond / 1000):03d}"
588
+ random_number = random.randint(0, 9999)
589
+ return f"{timestamp}_{milliseconds}_{random_number}"
590
+
591
+
592
+ def write_PIL_image_with_png_info(image, metadata, path):
593
+ from PIL.PngImagePlugin import PngInfo
594
+
595
+ png_info = PngInfo()
596
+ for key, value in metadata.items():
597
+ png_info.add_text(key, value)
598
+
599
+ image.save(path, "PNG", pnginfo=png_info)
600
+ return image
601
+
602
+
603
+ def torch_safe_save(content, path):
604
+ torch.save(content, path + '_tmp')
605
+ os.replace(path + '_tmp', path)
606
+ return path
607
+
608
+
609
+ def move_optimizer_to_device(optimizer, device):
610
+ for state in optimizer.state.values():
611
+ for k, v in state.items():
612
+ if isinstance(v, torch.Tensor):
613
+ state[k] = v.to(device)
generation_core.py ADDED
@@ -0,0 +1,615 @@
1
+ import torch
2
+ import traceback
3
+ import einops
4
+ import numpy as np
5
+ import os
6
+ import threading
7
+ import json
8
+ from PIL import Image
9
+ from PIL.PngImagePlugin import PngInfo
10
+
11
+ from diffusers_helper.hunyuan import (
12
+ encode_prompt_conds,
13
+ vae_decode,
14
+ vae_encode,
15
+ vae_decode_fake,
16
+ )
17
+ from diffusers_helper.utils import (
18
+ save_bcthw_as_mp4,
19
+ crop_or_pad_yield_mask,
20
+ soft_append_bcthw,
21
+ resize_and_center_crop,
22
+ generate_timestamp,
23
+ )
24
+ from diffusers_helper.pipelines.k_diffusion_hunyuan import sample_hunyuan
25
+ from diffusers_helper.memory import (
26
+ unload_complete_models,
27
+ load_model_as_complete,
28
+ move_model_to_device_with_memory_preservation,
29
+ offload_model_from_device_for_memory_preservation,
30
+ fake_diffusers_current_device,
31
+ gpu,
32
+ )
33
+ from diffusers_helper.clip_vision import hf_clip_vision_encode
34
+ from diffusers_helper.bucket_tools import find_nearest_bucket
35
+ from diffusers_helper.gradio.progress_bar import make_progress_bar_html
36
+ from ui import metadata as metadata_manager
37
+
38
+
39
+ @torch.no_grad()
40
+ def worker(
41
+ # --- Task I/O & Identity ---
42
+ task_id,
43
+ input_image,
44
+ output_folder,
45
+ output_queue_ref,
46
+ # --- Creative Parameters (The "Recipe") ---
47
+ prompt,
48
+ n_prompt,
49
+ seed,
50
+ total_second_length,
51
+ steps,
52
+ cfg,
53
+ gs,
54
+ gs_final,
55
+ gs_schedule_active,
56
+ rs,
57
+ preview_frequency,
58
+ segments_to_decode_csv,
59
+ # --- Environment & Debug Parameters ---
60
+ latent_window_size,
61
+ gpu_memory_preservation,
62
+ use_teacache,
63
+ use_fp32_transformer_output,
64
+ mp4_crf,
65
+ # --- Model & System Objects (Passed from main app) ---
66
+ text_encoder,
67
+ text_encoder_2,
68
+ tokenizer,
69
+ tokenizer_2,
70
+ vae,
71
+ feature_extractor,
72
+ image_encoder,
73
+ transformer,
74
+ high_vram,
75
+ # --- Control Flow ---
76
+ abort_event: threading.Event = None,
77
+ ):
78
+ outputs_folder = (
79
+ os.path.expanduser(output_folder) if output_folder else "./outputs/"
80
+ )
81
+ os.makedirs(outputs_folder, exist_ok=True)
82
+
83
+ # --- Gemini: do not touch - "secret sauce"
84
+ total_latent_sections = (total_second_length * 30) / (latent_window_size * 4)
85
+ total_latent_sections = int(max(round(total_latent_sections), 1))
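+ # (editor's note, not in the original source) With 30 fps output and the 4x
+ # temporal compression of the latents, each latent window of size W covers
+ # W * 4 frames, e.g. 5 s * 30 fps / (9 * 4) = 4.17 -> 4 segments.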
+
+     job_id = f"{generate_timestamp()}_task{task_id}"
+     output_queue_ref.push(
+         (
+             "progress",
+             (
+                 task_id,
+                 None,
+                 f"Total Segments: {total_latent_sections}",
+                 make_progress_bar_html(0, "Starting ..."),
+             ),
+         )
+     )
+     # ---
+     parsed_segments_to_decode_set = set()
+     if segments_to_decode_csv:
+         try:
+             parsed_segments_to_decode_set = {
+                 int(s.strip()) for s in segments_to_decode_csv.split(",") if s.strip()
+             }
+         except ValueError:
+             print(
+                 f"Task {task_id}: Warning - Could not parse 'Segments to Decode CSV': \"{segments_to_decode_csv}\"."
+             )
+     final_output_filename = None
+     success = False
+     initial_gs_from_ui = gs
+     gs_final_value_for_schedule = (
+         gs_final if gs_final is not None else initial_gs_from_ui
+     )
+     original_fp32_setting = transformer.high_quality_fp32_output_for_inference
+     transformer.high_quality_fp32_output_for_inference = use_fp32_transformer_output
+     print(
+         f"Task {task_id}: transformer.high_quality_fp32_output_for_inference set to {use_fp32_transformer_output}"
+     )
+
+     try:
+         if not isinstance(input_image, np.ndarray):
+             raise ValueError(f"Task {task_id}: input_image is not a NumPy array.")
+
+         output_queue_ref.push(
+             (
+                 "progress",
+                 (
+                     task_id,
+                     None,
+                     f"Total Segments: {total_latent_sections}",
+                     make_progress_bar_html(0, "Image processing ..."),
+                 ),
+             )
+         )
+         if input_image.shape[-1] == 4:
+             pil_img = Image.fromarray(input_image)
+             input_image = np.array(pil_img.convert("RGB"))
+         H, W, C = input_image.shape
+         if C != 3:
+             raise ValueError(
+                 f"Task {task_id}: Input image must be RGB, found {C} channels."
+             )
+         height, width = find_nearest_bucket(H, W, resolution=640)
+         input_image_np = resize_and_center_crop(
+             input_image, target_width=width, target_height=height
+         )
+
+         metadata_obj = PngInfo()
+         params_to_save_in_metadata = {
+             "prompt": prompt,
+             "n_prompt": n_prompt,
+             "seed": seed,
+             "total_second_length": total_second_length,
+             "steps": steps,
+             "cfg": cfg,
+             "gs": gs,
+             "gs_final": gs_final,
+             "gs_schedule_active": gs_schedule_active,
+             "rs": rs,
+             "preview_frequency": preview_frequency,
+             "segments_to_decode_csv": segments_to_decode_csv,
+         }
+         metadata_obj.add_text("parameters", json.dumps(params_to_save_in_metadata))
+         initial_image_with_params_path = os.path.join(
+             outputs_folder, f"{job_id}_initial_image_with_params.png"
+         )
+         try:
+             Image.fromarray(input_image_np).save(
+                 initial_image_with_params_path, pnginfo=metadata_obj
+             )
+         except Exception as e_png:
+             print(
+                 f"Task {task_id}: WARNING - Failed to save initial image with parameters: {e_png}"
+             )
+
+         # --- Gemini: do not touch - "secret sauce"
+         if not high_vram:
+             unload_complete_models(
+                 text_encoder, text_encoder_2, image_encoder, vae, transformer
+             )
+         output_queue_ref.push(
+             (
+                 "progress",
+                 (
+                     task_id,
+                     None,
+                     f"Total Segments: {total_latent_sections}",
+                     make_progress_bar_html(0, "Text encoding ..."),
+                 ),
+             )
+         )
+         if not high_vram:
+             fake_diffusers_current_device(text_encoder, gpu)
+             load_model_as_complete(text_encoder_2, target_device=gpu)
+         llama_vec, clip_l_pooler = encode_prompt_conds(
+             prompt, text_encoder, text_encoder_2, tokenizer, tokenizer_2
+         )
+         if cfg == 1:
+             llama_vec_n, clip_l_pooler_n = torch.zeros_like(
+                 llama_vec
+             ), torch.zeros_like(clip_l_pooler)
+         else:
+             llama_vec_n, clip_l_pooler_n = encode_prompt_conds(
+                 n_prompt, text_encoder, text_encoder_2, tokenizer, tokenizer_2
+             )
+         llama_vec, llama_attention_mask = crop_or_pad_yield_mask(llama_vec, length=512)
+         llama_vec_n, llama_attention_mask_n = crop_or_pad_yield_mask(
+             llama_vec_n, length=512
+         )
+         input_image_pt = (
+             torch.from_numpy(input_image_np).float().permute(2, 0, 1).unsqueeze(0)
+             / 127.5
+             - 1.0
+         )
+         input_image_pt = input_image_pt[:, :, None, :, :]
+         output_queue_ref.push(
+             (
+                 "progress",
+                 (
+                     task_id,
+                     None,
+                     f"Total Segments: {total_latent_sections}",
+                     make_progress_bar_html(0, "VAE encoding ..."),
+                 ),
+             )
+         )
+         if not high_vram:
+             load_model_as_complete(vae, target_device=gpu)
+         start_latent = vae_encode(input_image_pt, vae)
+         output_queue_ref.push(
+             (
+                 "progress",
+                 (
+                     task_id,
+                     None,
+                     f"Total Segments: {total_latent_sections}",
+                     make_progress_bar_html(0, "CLIP Vision encoding ..."),
+                 ),
+             )
+         )
+         if not high_vram:
+             load_model_as_complete(image_encoder, target_device=gpu)
+         image_encoder_output = hf_clip_vision_encode(
+             input_image_np, feature_extractor, image_encoder
+         )
+         image_encoder_last_hidden_state = image_encoder_output.last_hidden_state
+         (
+             llama_vec,
+             llama_vec_n,
+             clip_l_pooler,
+             clip_l_pooler_n,
+             image_encoder_last_hidden_state,
+         ) = [
+             t.to(transformer.dtype)
+             for t in [
+                 llama_vec,
+                 llama_vec_n,
+                 clip_l_pooler,
+                 clip_l_pooler_n,
+                 image_encoder_last_hidden_state,
+             ]
+         ]
+
+         output_queue_ref.push(
+             (
+                 "progress",
+                 (
+                     task_id,
+                     None,
+                     f"Total Segments: {total_latent_sections}",
+                     make_progress_bar_html(0, "Start sampling ..."),
+                 ),
+             )
+         )
+         rnd = torch.Generator(device="cpu").manual_seed(int(seed))
+         num_frames = latent_window_size * 4 - 3
+         # overlapped_frames = num_frames
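+         # e.g. latent_window_size=9 -> 9 * 4 - 3 = 33 pixel frames sampled per segment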
+
+         history_latents = torch.zeros(
+             size=(1, 16, 1 + 2 + 16, height // 8, width // 8),
+             dtype=torch.float32,
+             device="cpu",
+         )
+         history_pixels = None
+         total_generated_latent_frames = 0
+         latent_paddings = list(reversed(range(total_latent_sections)))
+         if total_latent_sections > 4:
+             # For many sections, flatten the padding schedule; empirically this
+             # looks better than the full reversed range.
+             latent_paddings = [3] + [2] * (total_latent_sections - 3) + [1, 0]
+
+         for latent_padding_iteration, latent_padding in enumerate(latent_paddings):
+             if abort_event and abort_event.is_set():
+                 raise KeyboardInterrupt("Abort signal received.")
+             is_last_section = latent_padding == 0
+             latent_padding_size = latent_padding * latent_window_size
+             # Consistent 1-indexed segment number for loop segments
+             current_loop_segment_number = latent_padding_iteration + 1
+             print(
+                 f"Task {task_id}: Seg {current_loop_segment_number}/{total_latent_sections} (lp_val={latent_padding}), last_loop_seg={is_last_section}"
+             )
+
+             indices = torch.arange(
+                 0,
+                 sum([1, latent_padding_size, latent_window_size, 1, 2, 16]),
+                 device="cpu",
+             ).unsqueeze(0)
+             (
+                 clean_latent_indices_pre,
+                 _,
+                 latent_indices,
+                 clean_latent_indices_post,
+                 clean_latent_2x_indices,
+                 clean_latent_4x_indices,
+             ) = indices.split(
+                 [1, latent_padding_size, latent_window_size, 1, 2, 16], dim=1
+             )
+             clean_latents_pre = start_latent.to(
+                 history_latents.device, dtype=history_latents.dtype
+             )
+             clean_latent_indices = torch.cat(
+                 [clean_latent_indices_pre, clean_latent_indices_post], dim=1
+             )
+             clean_latents_post, clean_latents_2x, clean_latents_4x = history_latents[
+                 :, :, : 1 + 2 + 16, :, :
+             ].split([1, 2, 16], dim=2)
+             clean_latents = torch.cat([clean_latents_pre, clean_latents_post], dim=2)
+
+             if not high_vram:
+                 unload_complete_models()
+                 move_model_to_device_with_memory_preservation(
+                     transformer,
+                     target_device=gpu,
+                     preserved_memory_gb=gpu_memory_preservation,
+                 )
+             transformer.initialize_teacache(
+                 enable_teacache=use_teacache, num_steps=steps
+             )
+
+             def callback_diffusion_step(d):
+                 if abort_event and abort_event.is_set():
+                     raise KeyboardInterrupt("Abort signal received during sampling.")
+                 current_diffusion_step = d["i"] + 1
+                 is_first_step = current_diffusion_step == 1
+                 is_last_step = current_diffusion_step == steps
+                 is_preview_step = preview_frequency > 0 and (
+                     current_diffusion_step % preview_frequency == 0
+                 )
+                 if not (is_first_step or is_last_step or is_preview_step):
+                     return
+                 preview_latent = d["denoised"]
+                 preview_img_np = vae_decode_fake(preview_latent)
+                 preview_img_np = (
+                     (preview_img_np * 255.0)
+                     .detach()
+                     .cpu()
+                     .numpy()
+                     .clip(0, 255)
+                     .astype(np.uint8)
+                 )
+                 preview_img_np = einops.rearrange(
+                     preview_img_np, "b c t h w -> (b h) (t w) c"
+                 )
+
+                 percentage = int(100.0 * current_diffusion_step / steps)
+                 hint = f"Segment {current_loop_segment_number}, Sampling {current_diffusion_step}/{steps}"
+                 current_video_frames_count = (
+                     history_pixels.shape[2] if history_pixels is not None else 0
+                 )
+                 desc = f"Task {task_id}: Vid Frames: {current_video_frames_count}, Len: {current_video_frames_count / 30 :.2f}s. Seg {current_loop_segment_number}/{total_latent_sections}. Extending..."
+                 output_queue_ref.push(
+                     (
+                         "progress",
+                         (
+                             task_id,
+                             preview_img_np,
+                             desc,
+                             make_progress_bar_html(percentage, hint),
+                         ),
+                     )
+                 )
+
+             current_segment_gs_to_use = initial_gs_from_ui
+             if gs_schedule_active and total_latent_sections > 1:
+                 progress_for_gs = (
+                     latent_padding_iteration / (total_latent_sections - 1)
+                     if total_latent_sections > 1
+                     else 0
+                 )
+                 current_segment_gs_to_use = (
+                     initial_gs_from_ui
+                     + (gs_final_value_for_schedule - initial_gs_from_ui)
+                     * progress_for_gs
+                 )
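+                 # Linear schedule example (illustrative values): gs=10.0, gs_final=4.0
+                 # over 5 segments gives per-segment gs of 10.0, 8.5, 7.0, 5.5, 4.0.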
+
+             generated_latents = sample_hunyuan(
+                 transformer=transformer,
+                 sampler="unipc",
+                 width=width,
+                 height=height,
+                 frames=num_frames,
+                 real_guidance_scale=cfg,
+                 distilled_guidance_scale=current_segment_gs_to_use,
+                 guidance_rescale=rs,
+                 num_inference_steps=steps,
+                 generator=rnd,
+                 prompt_embeds=llama_vec.to(transformer.device),
+                 prompt_embeds_mask=llama_attention_mask.to(transformer.device),
+                 prompt_poolers=clip_l_pooler.to(transformer.device),
+                 negative_prompt_embeds=llama_vec_n.to(transformer.device),
+                 negative_prompt_embeds_mask=llama_attention_mask_n.to(
+                     transformer.device
+                 ),
+                 negative_prompt_poolers=clip_l_pooler_n.to(transformer.device),
+                 device=transformer.device,
+                 dtype=transformer.dtype,
+                 image_embeddings=image_encoder_last_hidden_state.to(transformer.device),
+                 latent_indices=latent_indices.to(transformer.device),
+                 clean_latents=clean_latents.to(
+                     transformer.device, dtype=transformer.dtype
+                 ),
+                 clean_latent_indices=clean_latent_indices.to(transformer.device),
+                 clean_latents_2x=clean_latents_2x.to(
+                     transformer.device, dtype=transformer.dtype
+                 ),
+                 clean_latent_2x_indices=clean_latent_2x_indices.to(transformer.device),
+                 clean_latents_4x=clean_latents_4x.to(
+                     transformer.device, dtype=transformer.dtype
+                 ),
+                 clean_latent_4x_indices=clean_latent_4x_indices.to(transformer.device),
+                 callback=callback_diffusion_step,
+             )
+
+             if is_last_section:
+                 generated_latents = torch.cat(
+                     [start_latent.to(generated_latents), generated_latents], dim=2
+                 )
+
+             total_generated_latent_frames += int(generated_latents.shape[2])
+             history_latents = torch.cat(
+                 [generated_latents.to(history_latents), history_latents], dim=2
+             )
+
+             if not high_vram:
+                 offload_model_from_device_for_memory_preservation(
+                     transformer, target_device=gpu, preserved_memory_gb=8
+                 )
+                 load_model_as_complete(vae, target_device=gpu)
+
+             real_history_latents = history_latents[
+                 :, :, :total_generated_latent_frames, :, :
+             ]
+
+             if history_pixels is None:
+                 history_pixels = vae_decode(real_history_latents, vae).cpu()
+             else:
+                 section_latent_frames = (
+                     (latent_window_size * 2 + 1)
+                     if is_last_section
+                     else (latent_window_size * 2)
+                 )
+                 overlapped_frames = latent_window_size * 4 - 3
+                 current_pixels = vae_decode(
+                     real_history_latents[:, :, :section_latent_frames], vae
+                 ).cpu()
+                 history_pixels = soft_append_bcthw(
+                     current_pixels, history_pixels, overlapped_frames
+                 )
+
+             if not high_vram:
+                 unload_complete_models()
+
+             current_video_frame_count = history_pixels.shape[2]
+
+             # Determine if we should save an intermediate MP4 for this loop segment
+             should_save_mp4_this_iteration = False
+
+             # Condition 1: Always save the last segment of the loop
+             if is_last_section:
+                 should_save_mp4_this_iteration = True
+             # Condition 2: Save if the current loop segment number is explicitly in the parsed set
+             elif (
+                 parsed_segments_to_decode_set
+                 and current_loop_segment_number in parsed_segments_to_decode_set
+             ):
+                 should_save_mp4_this_iteration = True
+             # Condition 3: Save based on preview_frequency, if enabled (preview_frequency > 0)
+             elif preview_frequency > 0 and (
+                 current_loop_segment_number % preview_frequency == 0
+             ):
+                 should_save_mp4_this_iteration = True
+
+             if should_save_mp4_this_iteration:
+                 segment_mp4_filename = os.path.join(
+                     outputs_folder,
+                     f"{job_id}_segment_{current_loop_segment_number}_frames_{current_video_frame_count}.mp4",
+                 )
+                 save_bcthw_as_mp4(
+                     history_pixels, segment_mp4_filename, fps=30, crf=mp4_crf
+                 )
+                 final_output_filename = segment_mp4_filename
+                 print(
+                     f"Task {task_id}: SAVED MP4 for segment {current_loop_segment_number} to {segment_mp4_filename}. Total video frames: {current_video_frame_count}"
+                 )
+                 output_queue_ref.push(
+                     (
+                         "file",
+                         (
+                             task_id,
+                             segment_mp4_filename,
+                             f"Segment {current_loop_segment_number} MP4 saved ({current_video_frame_count} frames)",
+                         ),
+                     )
+                 )
+             else:
+                 print(
+                     f"Task {task_id}: SKIPPED MP4 save for intermediate segment {current_loop_segment_number}."
+                 )
+
+             # Mark the task successful once the final segment has been decoded and saved.
+             if is_last_section:
+                 success = True
+                 break
+
+     except KeyboardInterrupt:
+         print(f"Worker task {task_id} caught KeyboardInterrupt (likely abort signal).")
+         output_queue_ref.push(("aborted", task_id))
+         success = False
+     except Exception as e:
+         print(f"Error in worker task {task_id}: {e}")
+         traceback.print_exc()
+         output_queue_ref.push(("error", (task_id, str(e))))
+         success = False
+     finally:
+         transformer.high_quality_fp32_output_for_inference = original_fp32_setting
+         print(
+             f"Task {task_id}: Restored transformer.high_quality_fp32_output_for_inference to {original_fp32_setting}"
+         )
+         if not high_vram:
+             unload_complete_models(
+                 text_encoder, text_encoder_2, image_encoder, vae, transformer
+             )
+         # Normalize the reported path so it always points inside the outputs folder.
+         if final_output_filename and os.path.abspath(
+             os.path.dirname(final_output_filename)
+         ) != os.path.abspath(outputs_folder):
+             final_output_filename = os.path.join(
+                 outputs_folder, os.path.basename(final_output_filename)
+             )
+         output_queue_ref.push(("end", (task_id, success, final_output_filename)))
goan.py ADDED
@@ -0,0 +1,195 @@
+ # goan.py
+ # Main application entry point for the goan video generation UI.
+
+ # --- Python Standard Library Imports ---
+ import os
+ import gradio as gr
+ import torch
+ import argparse
+ import atexit
+
+ # --- Local Application Imports ---
+ # Import managers for different UI sections and shared state
+ from ui import layout as layout_manager
+ from ui import metadata as metadata_manager
+ from ui import queue as queue_manager  # renamed for clarity
+ from ui import workspace as workspace_manager
+ from ui import shared_state
+
+ # --- Diffusers and Helper Imports ---
+ from diffusers import AutoencoderKLHunyuanVideo
+ from transformers import LlamaModel, CLIPTextModel, LlamaTokenizerFast, CLIPTokenizer, SiglipImageProcessor, SiglipVisionModel
+ from diffusers_helper.models.hunyuan_video_packed import HunyuanVideoTransformer3DModelPacked
+ from diffusers_helper.memory import cpu, gpu, get_cuda_free_memory_gb, DynamicSwapInstaller
+ from diffusers_helper.gradio.progress_bar import make_progress_bar_css
+
+ # --- Environment Setup ---
+ os.environ['HF_HOME'] = os.path.abspath(os.path.realpath(os.path.join(os.path.dirname(__file__), './hf_download')))
+
+ # --- Argument Parsing ---
+ parser = argparse.ArgumentParser(description="goan: FramePack-based Video Generation UI")
+ parser.add_argument('--share', action='store_true', default=False, help="Enable Gradio sharing link.")
+ parser.add_argument("--server", type=str, default='127.0.0.1', help="Server name to bind to.")
+ parser.add_argument("--port", type=int, required=False, help="Port to run the server on.")
+ parser.add_argument("--inbrowser", action='store_true', default=False, help="Launch in browser automatically.")
+ parser.add_argument("--allowed_output_paths", type=str, default="", help="Comma-separated list of additional output folders Gradio is allowed to access. E.g., '~/my_outputs, /mnt/external_drive/vids'")
+ args = parser.parse_args()
+ print(f"goan launching with args: {args}")
+
+ # --- Model Loading ---
+ print("Initializing models...")
+ free_mem_gb = get_cuda_free_memory_gb(gpu)
+ high_vram = free_mem_gb > 60
+ print(f'Free VRAM {free_mem_gb} GB, High-VRAM Mode: {high_vram}')
+
+ # Populate shared_state.models with loaded model instances
+ shared_state.models = {
+     'text_encoder': LlamaModel.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='text_encoder', torch_dtype=torch.float16).cpu(),
+     'text_encoder_2': CLIPTextModel.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='text_encoder_2', torch_dtype=torch.float16).cpu(),
+     'tokenizer': LlamaTokenizerFast.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='tokenizer'),
+     'tokenizer_2': CLIPTokenizer.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='tokenizer_2'),
+     'vae': AutoencoderKLHunyuanVideo.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='vae', torch_dtype=torch.float16).cpu(),
+     'feature_extractor': SiglipImageProcessor.from_pretrained("lllyasviel/flux_redux_bfl", subfolder='feature_extractor'),
+     'image_encoder': SiglipVisionModel.from_pretrained("lllyasviel/flux_redux_bfl", subfolder='image_encoder', torch_dtype=torch.float16).cpu(),
+     'transformer': HunyuanVideoTransformer3DModelPacked.from_pretrained('lllyasviel/FramePackI2V_HY', torch_dtype=torch.bfloat16).cpu(),
+     'high_vram': high_vram,  # key matches the worker's expected parameter name
+ }
+ print("Models loaded to CPU. Configuring...")
+ for model_name in ['vae', 'text_encoder', 'text_encoder_2', 'image_encoder', 'transformer']:
+     shared_state.models[model_name].eval()
+ if not high_vram:
+     shared_state.models['vae'].enable_slicing()
+     shared_state.models['vae'].enable_tiling()
+ shared_state.models['transformer'].high_quality_fp32_output_for_inference = False
+ for model_name, dtype in [('transformer', torch.bfloat16), ('vae', torch.float16), ('image_encoder', torch.float16), ('text_encoder', torch.float16), ('text_encoder_2', torch.float16)]:
+     shared_state.models[model_name].to(dtype=dtype)
+ for model_obj in shared_state.models.values():  # iterate over values, not keys
+     if isinstance(model_obj, torch.nn.Module):
+         model_obj.requires_grad_(False)
+ if not high_vram:
+     print("Low VRAM mode: Installing DynamicSwap.")
+     DynamicSwapInstaller.install_model(shared_state.models['transformer'], device=gpu)
+     DynamicSwapInstaller.install_model(shared_state.models['text_encoder'], device=gpu)
+ else:
+     print("High VRAM mode: Moving all models to GPU.")
+     for model_name in ['text_encoder', 'text_encoder_2', 'image_encoder', 'vae', 'transformer']:
+         shared_state.models[model_name].to(gpu)
+ print("Model configuration and placement complete.")
+
+
+ # --- UI Helper Functions ---
+ def patched_video_is_playable(video_filepath):
+     # Gradio's playability probe can reject valid files; treat all outputs as playable.
+     return True
+ gr.processing_utils.video_is_playable = patched_video_is_playable
+
+ def ui_update_total_segments(total_seconds_ui, latent_window_size_ui):
+     """Calculates and formats the total segment count for display in the UI."""
+     try:
+         total_segments = int(max(round((total_seconds_ui * 30) / (latent_window_size_ui * 4)), 1))
+         return f"Calculated Total Segments: {total_segments}"
+     except (TypeError, ValueError, ZeroDivisionError):
+         return "Segments: Invalid input"
+
+
+ # --- UI Creation and Event Wiring ---
+ print("Creating UI layout...")
+ # Create the UI by calling the layout manager. This returns the block and a dictionary of components.
+ ui_components = layout_manager.create_ui()
+ block = ui_components['block']
+
+ # Define lists of components for easier wiring of events
+ creative_ui_keys = ['prompt_ui', 'n_prompt_ui', 'total_second_length_ui', 'seed_ui', 'preview_frequency_ui', 'segments_to_decode_csv_ui', 'gs_ui', 'gs_schedule_shape_ui', 'gs_final_ui', 'steps_ui', 'cfg_ui', 'rs_ui']
+ environment_ui_keys = ['use_teacache_ui', 'use_fp32_transformer_output_checkbox_ui', 'gpu_memory_preservation_ui', 'mp4_crf_ui', 'output_folder_ui_ctrl', 'latent_window_size_ui']
+ full_workspace_ui_keys = creative_ui_keys + environment_ui_keys
+
+ creative_ui_components = [ui_components[key] for key in creative_ui_keys]
+ full_workspace_ui_components = [ui_components[key] for key in full_workspace_ui_keys]
+ task_defining_ui_inputs = [ui_components['input_image_gallery_ui']] + full_workspace_ui_components
+
+ # Define output lists for complex Gradio calls
+ process_queue_outputs_list = [ui_components[key] for key in ['app_state', 'queue_df_display_ui', 'last_finished_video_ui', 'current_task_preview_image_ui', 'current_task_progress_desc_ui', 'current_task_progress_bar_ui', 'process_queue_button', 'abort_task_button', 'reset_ui_button']]
+ queue_df_select_outputs_list = [ui_components[key] for key in ['app_state', 'queue_df_display_ui', 'input_image_gallery_ui'] + full_workspace_ui_keys + ['add_task_button', 'cancel_edit_task_button', 'last_finished_video_ui']]
+
+ # Wire up all the UI events to their handler functions in the respective managers
+ with block:
+     # Workspace Manager Events
+     ui_components['save_workspace_button'].click(fn=workspace_manager.save_workspace, inputs=full_workspace_ui_components, outputs=None)
+     ui_components['load_workspace_button'].click(fn=workspace_manager.load_workspace, inputs=None, outputs=full_workspace_ui_components)
+     ui_components['save_as_default_button'].click(fn=workspace_manager.save_as_default_workspace, inputs=full_workspace_ui_components, outputs=None)
+
+     # Metadata Manager Events
+     ui_components['input_image_gallery_ui'].upload(fn=metadata_manager.handle_image_upload_for_metadata, inputs=[ui_components['input_image_gallery_ui']], outputs=[ui_components['metadata_modal']])
+     ui_components['confirm_metadata_btn'].click(fn=metadata_manager.apply_and_hide_modal, inputs=[ui_components['input_image_gallery_ui']], outputs=[ui_components['metadata_modal']] + creative_ui_components)
+     ui_components['cancel_metadata_btn'].click(fn=lambda: gr.update(visible=False), inputs=None, outputs=ui_components['metadata_modal'])
+
+     # Queue Manager Events
+     ui_components['add_task_button'].click(fn=queue_manager.add_or_update_task_in_queue, inputs=[ui_components['app_state']] + task_defining_ui_inputs, outputs=[ui_components['app_state'], ui_components['queue_df_display_ui'], ui_components['add_task_button'], ui_components['cancel_edit_task_button']])
+     ui_components['process_queue_button'].click(fn=queue_manager.process_task_queue_main_loop, inputs=[ui_components['app_state']], outputs=process_queue_outputs_list)
+     ui_components['cancel_edit_task_button'].click(fn=queue_manager.cancel_edit_mode_action, inputs=[ui_components['app_state']], outputs=[ui_components['app_state'], ui_components['queue_df_display_ui'], ui_components['add_task_button'], ui_components['cancel_edit_task_button']])
+     ui_components['abort_task_button'].click(fn=queue_manager.abort_current_task_processing_action, inputs=[ui_components['app_state']], outputs=[ui_components['app_state'], ui_components['abort_task_button']])
+     ui_components['clear_queue_button_ui'].click(fn=queue_manager.clear_task_queue_action, inputs=[ui_components['app_state']], outputs=[ui_components['app_state'], ui_components['queue_df_display_ui']])
+     ui_components['save_queue_button_ui'].click(fn=queue_manager.save_queue_to_zip, inputs=[ui_components['app_state']], outputs=[ui_components['app_state'], ui_components['save_queue_zip_b64_output']]).then(fn=None, inputs=[ui_components['save_queue_zip_b64_output']], outputs=None, js="""(b64) => { if(!b64) return; const blob = new Blob([Uint8Array.from(atob(b64), c => c.charCodeAt(0))], {type: 'application/zip'}); const url = URL.createObjectURL(blob); const a = document.createElement('a'); a.href=url; a.download='goan_queue.zip'; a.click(); URL.revokeObjectURL(url); }""")
+     ui_components['load_queue_button_ui'].upload(fn=queue_manager.load_queue_from_zip, inputs=[ui_components['app_state'], ui_components['load_queue_button_ui']], outputs=[ui_components['app_state'], ui_components['queue_df_display_ui']])
+     ui_components['queue_df_display_ui'].select(fn=queue_manager.handle_queue_action_on_select, inputs=[ui_components['app_state']] + task_defining_ui_inputs, outputs=queue_df_select_outputs_list)
+
+     # Other UI Event Handlers
+     ui_components['gs_schedule_shape_ui'].change(fn=lambda choice: gr.update(interactive=(choice != "Off")), inputs=[ui_components['gs_schedule_shape_ui']], outputs=[ui_components['gs_final_ui']])
+     for ctrl_key in ['total_second_length_ui', 'latent_window_size_ui']:
+         ui_components[ctrl_key].change(fn=ui_update_total_segments, inputs=[ui_components['total_second_length_ui'], ui_components['latent_window_size_ui']], outputs=[ui_components['total_segments_display_ui']])
+
+     refresh_image_path_state = gr.State(None)
+     # The reset_ui_button saves state, then reloads the page
+     ui_components['reset_ui_button'].click(fn=workspace_manager.save_ui_and_image_for_refresh, inputs=task_defining_ui_inputs, outputs=None).then(fn=None, inputs=None, outputs=None, js="() => { window.location.reload(); }")
+
+     # --- Application Startup and Shutdown ---
+     autoload_outputs = [ui_components[k] for k in ['app_state', 'queue_df_display_ui', 'process_queue_button', 'abort_task_button', 'last_finished_video_ui']]
+
+     # This block.load chain ensures the progress UI re-attaches after a page refresh
+     (block.load(fn=workspace_manager.load_workspace_on_start, inputs=[], outputs=[refresh_image_path_state] + full_workspace_ui_components)
+         .then(fn=workspace_manager.load_image_from_path, inputs=[refresh_image_path_state], outputs=[ui_components['input_image_gallery_ui']])
+         .then(fn=queue_manager.autoload_queue_on_start_action, inputs=[ui_components['app_state']], outputs=autoload_outputs)
+         .then(lambda s_val: shared_state.global_state_for_autosave.update(s_val), inputs=[ui_components['app_state']], outputs=None)
+         .then(fn=ui_update_total_segments, inputs=[ui_components['total_second_length_ui'], ui_components['latent_window_size_ui']], outputs=[ui_components['total_segments_display_ui']])
+         # Automatic re-attachment of the progress UI on load if a task is still processing
+         .then(
+             fn=queue_manager.process_task_queue_main_loop,
+             inputs=[ui_components['app_state']],
+             outputs=process_queue_outputs_list,
+             js="""
+             (app_state_val) => {
+                 // This JS runs after autoload_queue_on_start_action completes.
+                 // If a task is processing, we want to re-invoke the Python generator.
+                 if (app_state_val.queue_state && app_state_val.queue_state.processing) {
+                     console.log("Gradio: Auto-reconnecting to ongoing task output stream.");
+                     // Return a non-null, non-falsey value to trigger the Python function.
+                     return "reconnect_stream";
+                 }
+                 console.log("Gradio: No ongoing task detected for auto-reconnection.");
+                 return null; // Return null to skip calling the Python function
+             }
+             """
+         )
+     )
+
+ # Register the atexit handler with the global_state_for_autosave
+ atexit.register(queue_manager.autosave_queue_on_exit_action, shared_state.global_state_for_autosave)
+
+ # --- Application Launch ---
+ if __name__ == "__main__":
+     print("Starting goan FramePack UI...")
+     # Determine the initial output folder for allowed_paths based on saved settings or default
+     initial_output_folder_path = workspace_manager.get_initial_output_folder_from_settings()
+     expanded_outputs_folder_for_launch = os.path.abspath(initial_output_folder_path)
+
+     # Prepare the list of allowed paths for Gradio
+     final_allowed_paths = [expanded_outputs_folder_for_launch]
+
+     if args.allowed_output_paths:
+         custom_cli_paths = [
+             os.path.abspath(os.path.expanduser(p.strip()))
+             for p in args.allowed_output_paths.split(',')
+             if p.strip()
+         ]
+         final_allowed_paths.extend(custom_cli_paths)
+
+     final_allowed_paths = list(set(final_allowed_paths))  # remove duplicates
+
+     print(f"Gradio allowed paths: {final_allowed_paths}")
+     block.launch(server_name=args.server, server_port=args.port, share=args.share, inbrowser=args.inbrowser, allowed_paths=final_allowed_paths)
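
A typical launch command, given the argparse flags above; the two extra output paths below
are illustrative, not defaults:

    python goan.py --server 0.0.0.0 --port 7860 --inbrowser \
        --allowed_output_paths "~/goan_outputs, /mnt/media/vids"
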
requirements.txt ADDED
@@ -0,0 +1,20 @@
+ accelerate==1.6.0
+ diffusers==0.33.1
+ transformers==4.46.2
+ gradio==5.23.0
+ gradio-modal
+ sentencepiece==0.2.0
+ pillow==11.1.0
+ av==12.1.0
+ numpy==1.26.2
+ scipy==1.12.0
+ requests==2.31.0
+ torchsde==0.2.6
+
+ einops
+ opencv-contrib-python
+ safetensors
+
+ # --- Optional attention backends: OK if one fails to install, but not both ---
+ xformers
+ sageattention
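
Note that torch itself is not pinned here; the stack assumes it is already installed. A
typical setup might be the following, where the CUDA wheel index is an assumption and should
match your driver:

    pip install torch --index-url https://download.pytorch.org/whl/cu124
    pip install -r requirements.txt
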
ui/__init__.py ADDED
@@ -0,0 +1 @@
+
ui/layout.py ADDED
@@ -0,0 +1,126 @@
+ # ui/layout.py
+ # This file defines the Gradio UI layout for the goan application.
+
+ import gradio as gr
+ from gradio_modal import Modal
+
+ # Import workspace manager to get the default output folder path
+ from . import workspace as workspace_manager
+
+ def create_ui():
+     """
+     Creates the Gradio UI layout and returns a dictionary of all UI components.
+     This separation of layout from logic makes the main script cleaner.
+     """
+     # Define a dictionary to hold all UI components, which will be returned.
+     components = {}
+
+     css = """
+     #queue_df { font-size: 0.85rem; }
+     #queue_df th:nth-child(1), #queue_df td:nth-child(1) { width: 5%; }
+     #queue_df th:nth-child(2), #queue_df td:nth-child(2) { width: 10%; }
+     #queue_df th:nth-child(3), #queue_df td:nth-child(3) { width: 40%; overflow: hidden; text-overflow: ellipsis; white-space: nowrap;}
+     #queue_df th:nth-child(4), #queue_df td:nth-child(4) { width: 8%; }
+     #queue_df th:nth-child(5), #queue_df td:nth-child(5) { width: 8%; }
+     #queue_df th:nth-child(6), #queue_df td:nth-child(6) { width: 10%; text-align: center; }
+     #queue_df th:nth-child(7), #queue_df td:nth-child(7),
+     #queue_df th:nth-child(8), #queue_df td:nth-child(8),
+     #queue_df th:nth-child(9), #queue_df td:nth-child(9),
+     #queue_df th:nth-child(10), #queue_df td:nth-child(10) { width: 4%; cursor: pointer; text-align: center; }
+     #queue_df td:hover { background-color: #f0f0f0; }
+     .gradio-container { max-width: 95% !important; margin: auto !important; }
+     """
+
+     block = gr.Blocks(css=css, title="goan").queue()
+     components['block'] = block
+
+     with block:
+         # The app_state dictionary holds all transient state for the UI
+         app_state = gr.State({
+             "queue_state": {"queue": [], "next_id": 1, "processing": False, "editing_task_id": None},
+             "last_completed_video_path": None
+         })
+         components['app_state'] = app_state
+
+         gr.Markdown('# goan (Powered by FramePack)')
+
+         with Modal(visible=False) as metadata_modal:
+             components['metadata_modal'] = metadata_modal
+             gr.Markdown("Image has saved parameters. Overwrite current creative settings?")
+             with gr.Row():
+                 components['cancel_metadata_btn'] = gr.Button("No")
+                 components['confirm_metadata_btn'] = gr.Button("Yes, Apply", variant="primary")
+
+         # --- Start of layout structure ---
+
+         # TOP ZONE: A row with two columns.
+         # Column 1 (scale=1): Image input and primary queue actions.
+         # Column 2 (scale=2): Prompting.
+         with gr.Row():
+             # Column 1: Input Image & Primary Actions
+             with gr.Column(scale=1, min_width=300):
+                 components['input_image_gallery_ui'] = gr.Gallery(type="pil", label="Input Image(s)", height=220)
+                 components['add_task_button'] = gr.Button("Add to Queue", variant="secondary")
+                 components['process_queue_button'] = gr.Button("▶️ Process Queue", variant="primary")
+                 components['abort_task_button'] = gr.Button("⏹️ Abort", variant="stop", interactive=True)  # default to active on startup in case the service is still running
+                 components['cancel_edit_task_button'] = gr.Button("Cancel Edit", visible=False, variant="secondary")
+
+             # Column 2: Prompts
+             with gr.Column(scale=2, min_width=600):
+                 components['prompt_ui'] = gr.Textbox(label="Prompt", lines=10)
+                 components['n_prompt_ui'] = gr.Textbox(label="Negative Prompt", lines=4)
+
+         # MIDDLE ZONE: Full-width task queue display and file operations
+         with gr.Group():
+             gr.Markdown("## Task Queue")
+             components['queue_df_display_ui'] = gr.DataFrame(headers=["ID", "Status", "Prompt", "Length", "Steps", "Input", "↑", "↓", "✖", "✎"], datatype=["number","markdown","markdown","str","number","markdown","markdown","markdown","markdown","markdown"], col_count=(10,"fixed"), interactive=False, visible=False, elem_id="queue_df")
+             with gr.Row():
+                 components['save_queue_zip_b64_output'] = gr.Text(visible=False)
+                 components['save_queue_button_ui'] = gr.DownloadButton("Save Queue", size="sm")
+                 components['load_queue_button_ui'] = gr.UploadButton("Load Queue", file_types=[".zip"], size="sm")
+                 components['clear_queue_button_ui'] = gr.Button("Clear Pending", size="sm", variant="stop")
+
+         # BOTTOM ZONE: A row split into two columns for Settings and Output
+         with gr.Row(equal_height=False):
+             # Column 1: All settings and parameters
+             with gr.Column(scale=1):
+                 with gr.Row():
+                     components['total_second_length_ui'] = gr.Slider(label="Video Length (s)", minimum=0.1, maximum=120, value=5.0, step=0.1)
+                     components['seed_ui'] = gr.Number(label="Seed", value=-1, precision=0)
+                 with gr.Accordion("Advanced Settings", open=False):  # keep default closed on startup
+                     components['total_segments_display_ui'] = gr.Markdown("Calculated Total Segments: N/A")
+                     components['preview_frequency_ui'] = gr.Slider(label="Preview Freq.", minimum=0, maximum=100, value=5, step=1)
+                     components['segments_to_decode_csv_ui'] = gr.Textbox(label="Preview Segments CSV", value="")
+                     with gr.Row():
+                         components['gs_ui'] = gr.Slider(label="Distilled CFG Start", minimum=1.0, maximum=32.0, value=10.0, step=0.01)
+                         components['gs_schedule_shape_ui'] = gr.Radio(["Off", "Linear"], label="Variable CFG", value="Off")
+                         components['gs_final_ui'] = gr.Slider(label="Distilled CFG End", minimum=1.0, maximum=32.0, value=10.0, step=0.01, interactive=False)
+                     components['cfg_ui'] = gr.Slider(label="CFG (Real)", minimum=1.0, maximum=32.0, value=1.0, step=0.01)
+                     components['steps_ui'] = gr.Slider(label="Steps", minimum=1, maximum=100, value=25, step=1)
+                     components['rs_ui'] = gr.Slider(label="RS", minimum=0.0, maximum=32.0, value=0.0, step=0.01, visible=False)
+                 with gr.Accordion("Debug Settings", open=False):  # keep default closed on startup
+                     components['use_teacache_ui'] = gr.Checkbox(label='Use TeaCache', value=True)
+                     components['use_fp32_transformer_output_checkbox_ui'] = gr.Checkbox(label="Use FP32 Transformer Output", value=False)
+                     components['gpu_memory_preservation_ui'] = gr.Slider(label="GPU Preserved (GB)", minimum=4, maximum=128, value=6.0, step=0.1)
+                     components['mp4_crf_ui'] = gr.Slider(label="MP4 CRF", minimum=0, maximum=51, value=18, step=1)
+                     components['latent_window_size_ui'] = gr.Slider(label="Latent Window Size", minimum=1, maximum=33, value=9, step=1, visible=False)
+                 components['output_folder_ui_ctrl'] = gr.Textbox(label="Output Folder", value=workspace_manager.outputs_folder)
+                 components['save_as_default_button'] = gr.Button("Save as Default", variant="secondary")
+                 components['reset_ui_button'] = gr.Button("Save & Refresh UI", variant="secondary")
+
+                 # UI and Workspace Management Buttons
+                 with gr.Row():
+                     components['save_workspace_button'] = gr.Button("Save Workspace", variant="secondary")
+                     components['load_workspace_button'] = gr.Button("Load Workspace", variant="secondary")
+
+             # Column 2: Live Preview and Final Output
+             with gr.Column(scale=1):
+                 gr.Markdown("## Live Preview & Output")
+                 components['current_task_preview_image_ui'] = gr.Image(interactive=False, visible=False)
+                 components['current_task_progress_desc_ui'] = gr.Markdown('')
+                 components['current_task_progress_bar_ui'] = gr.HTML('')
+                 # Set a height on the video player to help with layout stability
+                 components['last_finished_video_ui'] = gr.Video(interactive=True, autoplay=False, height=540)
+
+     return components
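
A minimal usage sketch for this module, mirroring what goan.py does; the lambda handler is a
placeholder, not part of the application:

    from ui import layout as layout_manager

    components = layout_manager.create_ui()
    with components['block']:
        # event listeners must be wired inside the Blocks context
        components['prompt_ui'].change(fn=lambda p: None, inputs=[components['prompt_ui']], outputs=None)
    components['block'].launch()
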
ui/metadata.py ADDED
@@ -0,0 +1,88 @@
+ # ui/metadata.py
+ import json
+ import gradio as gr
+ from PIL import Image
+ from PIL.PngImagePlugin import PngInfo
+
+ # Import shared state for access to parameter key lists
+ from . import shared_state
+
+ # --- Core Metadata Functions ---
+
+ def extract_metadata_from_pil_image(pil_image: Image.Image) -> dict:
+     """Extracts a 'parameters' dictionary from a PIL image's text chunk."""
+     if pil_image is None:
+         return {}
+
+     pnginfo_data = getattr(pil_image, 'text', None)
+     if not isinstance(pnginfo_data, dict):
+         return {}
+
+     params_json_str = pnginfo_data.get('parameters')
+     if not params_json_str:
+         return {}
+
+     try:
+         extracted_params = json.loads(params_json_str)
+         return extracted_params if isinstance(extracted_params, dict) else {}
+     except json.JSONDecodeError as e:
+         print(f"Error decoding metadata JSON: {e}")
+         return {}
+
+ def write_image_metadata(pil_image: Image.Image, params_dict: dict) -> Image.Image:
+     """Creates a PngInfo object with the given parameters and attaches it to the image."""
+     metadata = PngInfo()
+     metadata.add_text("parameters", json.dumps(params_dict))
+     # Note: PIL only persists text chunks supplied via save(..., pnginfo=metadata);
+     # stashing the object on .info keeps it available for the caller to pass along.
+     pil_image.info = metadata
+     return pil_image
+
+ # --- UI Handler Functions (Moved from demo_gradio_svc.py) ---
+
+ def handle_image_upload_for_metadata(gallery_pil_list):
+     """
+     Checks an uploaded image for metadata and shows a confirmation modal if found.
+     This function is triggered by the 'upload' event of the image gallery.
+     """
+     if not gallery_pil_list:
+         return gr.update(visible=False)
+
+     # The gallery component returns a list of (image, name) tuples.
+     pil_image = gallery_pil_list[0][0] if isinstance(gallery_pil_list[0], tuple) else gallery_pil_list[0]
+
+     if isinstance(pil_image, Image.Image):
+         extracted_metadata = extract_metadata_from_pil_image(pil_image)
+         # Show the modal only if metadata exists and contains relevant keys.
+         if extracted_metadata and any(key in extracted_metadata for key in shared_state.CREATIVE_PARAM_KEYS):
+             return gr.update(visible=True)
+
+     return gr.update(visible=False)
+
+ def ui_load_params_from_image_metadata(gallery_data_list):
+     """
+     Loads ONLY the creative parameters from image metadata and returns UI updates.
+     """
+     updates = [gr.update()] * len(shared_state.CREATIVE_PARAM_KEYS)
+     if not gallery_data_list:
+         return updates
+
+     pil_image = gallery_data_list[0][0] if isinstance(gallery_data_list[0], tuple) else gallery_data_list[0]
+     extracted_metadata = extract_metadata_from_pil_image(pil_image)
+
+     if not extracted_metadata:
+         gr.Info("No parameters found in image.")
+         return updates
+
+     gr.Info("Applying creative settings from image...")
+     for i, key in enumerate(shared_state.CREATIVE_PARAM_KEYS):
+         if key in extracted_metadata:
+             # Return a Gradio update object for each changed parameter.
+             updates[i] = gr.update(value=extracted_metadata[key])
+
+     return updates
+
+ def apply_and_hide_modal(gallery_data_list):
+     """
+     A wrapper function that applies the metadata and then hides the confirmation modal.
+     """
+     # The first output hides the modal, the rest are the parameter updates.
+     return [gr.update(visible=False)] + ui_load_params_from_image_metadata(gallery_data_list)
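
A round-trip sketch for these helpers; 'example.png' and the parameter values are
illustrative only. Note the pnginfo= argument at save time, which is what actually persists
the text chunk:

    import json
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    info = PngInfo()
    info.add_text("parameters", json.dumps({"prompt": "a cat", "seed": 42}))
    Image.new("RGB", (64, 64)).save("example.png", pnginfo=info)

    loaded = Image.open("example.png")
    print(extract_metadata_from_pil_image(loaded))  # -> {'prompt': 'a cat', 'seed': 42}
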
ui/queue.py ADDED
@@ -0,0 +1,397 @@
1
+ # ui/queue.py
2
+ import gradio as gr
3
+ import numpy as np
4
+ from PIL import Image
5
+ import os
6
+ import json
7
+ import base64
8
+ import io
9
+ import zipfile
10
+ import tempfile
11
+ import atexit
12
+ import traceback
13
+ from pathlib import Path
14
+
15
+ # Import shared state and constants from the dedicated module.
16
+ from . import shared_state
17
+ from generation_core import worker
18
+ from diffusers_helper.thread_utils import AsyncStream, async_run
19
+
20
+
21
+ # Configuration for the autosave feature.
22
+ AUTOSAVE_FILENAME = "goan_autosave_queue.zip"
23
+
24
+
25
+ def np_to_base64_uri(np_array_or_tuple, format="png"):
26
+ """Converts a NumPy array representing an image to a base64 data URI."""
27
+ if np_array_or_tuple is None: return None
28
+ try:
29
+ np_array = np_array_or_tuple[0] if isinstance(np_array_or_tuple, tuple) and len(np_array_or_tuple) > 0 and isinstance(np_array_or_tuple[0], np.ndarray) else np_array_or_tuple if isinstance(np_array_or_tuple, np.ndarray) else None
30
+ if np_array is None: return None
31
+ pil_image = Image.fromarray(np_array.astype(np.uint8))
32
+ if format.lower() == "jpeg" and pil_image.mode == "RGBA": pil_image = pil_image.convert("RGB")
33
+ buffer = io.BytesIO(); pil_image.save(buffer, format=format.upper()); img_bytes = buffer.getvalue()
34
+ return f"data:image/{format.lower()};base64,{base64.b64encode(img_bytes).decode('utf-8')}"
35
+ except Exception as e: print(f"Error converting NumPy to base64: {e}"); return None
36
+
37
+ def get_queue_state(state_dict_gr_state):
38
+ """Safely retrieves the queue_state dictionary from the main application state."""
39
+ if "queue_state" not in state_dict_gr_state: state_dict_gr_state["queue_state"] = {"queue": [], "next_id": 1, "processing": False, "editing_task_id": None}
40
+ return state_dict_gr_state["queue_state"]
41
+
42
+ def update_queue_df_display(queue_state):
43
+ """Generates a Gradio DataFrame update from the current queue state."""
44
+ queue = queue_state.get("queue", []); data = []; processing = queue_state.get("processing", False); editing_task_id = queue_state.get("editing_task_id", None)
45
+ for i, task in enumerate(queue):
46
+ params = task['params']; task_id = task['id']; prompt_display = (params['prompt'][:77] + '...') if len(params['prompt']) > 80 else params['prompt']; prompt_title = params['prompt'].replace('"', '"'); prompt_cell = f'<span title="{prompt_title}">{prompt_display}</span>'; img_uri = np_to_base64_uri(params.get('input_image'), format="png"); thumbnail_size = "50px"; img_md = f'<img src="{img_uri}" alt="Input" style="max-width:{thumbnail_size}; max-height:{thumbnail_size}; display:block; margin:auto; object-fit:contain;" />' if img_uri else ""; is_processing_current_task = processing and i == 0; is_editing_current_task = editing_task_id == task_id; task_status_val = task.get("status", "pending");
47
+ if is_processing_current_task: status_display = "⏳ Processing"
48
+ elif is_editing_current_task: status_display = "✏️ Editing"
49
+ elif task_status_val == "done": status_display = "✅ Done"
50
+ elif task_status_val == "error": status_display = f"❌ Error: {task.get('error_message', 'Unknown')}"
51
+ elif task_status_val == "aborted": status_display = "⏹️ Aborted"
52
+ elif task_status_val == "pending": status_display = "⏸️ Pending"
53
+ data.append([task_id, status_display, prompt_cell, f"{params.get('total_second_length', 0):.1f}s", params.get('steps', 0), img_md, "↑", "↓", "✖", "✎"])
54
+ return gr.DataFrame(value=data, visible=len(data) > 0)
55
+
56
+ def add_or_update_task_in_queue(state_dict_gr_state, *args_from_ui_controls_tuple):
57
+ """Adds a new task to the queue or updates an existing one if in edit mode."""
58
+ queue_state = get_queue_state(state_dict_gr_state); editing_task_id = queue_state.get("editing_task_id", None)
59
+
60
+ input_images_pil_list = args_from_ui_controls_tuple[0]
61
+ all_ui_values_tuple = args_from_ui_controls_tuple[1:]
62
+ if not input_images_pil_list:
63
+ gr.Warning("Input image is required!")
64
+ return state_dict_gr_state, update_queue_df_display(queue_state), gr.update(value="Add Task to Queue" if editing_task_id is None else "Update Task"), gr.update(visible=editing_task_id is not None)
65
+
66
+ temp_params_from_ui = dict(zip(shared_state.ALL_TASK_UI_KEYS, all_ui_values_tuple))
67
+ base_params_for_worker_dict = {}
68
+ for ui_key, worker_key in shared_state.UI_TO_WORKER_PARAM_MAP.items():
69
+ if ui_key == 'gs_schedule_shape_ui':
70
+ base_params_for_worker_dict[worker_key] = temp_params_from_ui.get(ui_key) != 'Off'
71
+ else:
72
+ base_params_for_worker_dict[worker_key] = temp_params_from_ui.get(ui_key)
73
+
74
+ if editing_task_id is not None:
75
+ if len(input_images_pil_list) > 1: gr.Warning("Cannot update task with multiple images. Cancel edit."); return state_dict_gr_state, update_queue_df_display(queue_state), gr.update(value="Update Task"), gr.update(visible=True)
76
+ pil_img_for_update = input_images_pil_list[0][0] if isinstance(input_images_pil_list[0], tuple) else input_images_pil_list[0]
77
+ if not isinstance(pil_img_for_update, Image.Image): gr.Warning("Invalid image format for update."); return state_dict_gr_state, update_queue_df_display(queue_state), gr.update(value="Update Task"), gr.update(visible=True)
78
+ img_np_for_update = np.array(pil_img_for_update)
79
+ with shared_state.queue_lock:
80
+ task_found = False
81
+ for task in queue_state["queue"]:
82
+ if task["id"] == editing_task_id:
83
+ task["params"] = {**base_params_for_worker_dict, 'input_image': img_np_for_update}
84
+ task["status"] = "pending"
85
+ task_found = True
86
+ break
87
+ if not task_found: gr.Warning(f"Task {editing_task_id} not found for update.")
88
+ else: gr.Info(f"Task {editing_task_id} updated.")
89
+ queue_state["editing_task_id"] = None
90
+ else:
91
+ tasks_added_count = 0; first_new_task_id = -1
92
+ with shared_state.queue_lock:
93
+ for img_obj in input_images_pil_list:
94
+ pil_image = img_obj[0] if isinstance(img_obj, tuple) else img_obj
95
+ if not isinstance(pil_image, Image.Image): gr.Warning("Skipping invalid image input."); continue
96
+ img_np_data = np.array(pil_image)
97
+ next_id = queue_state["next_id"]
98
+ if first_new_task_id == -1: first_new_task_id = next_id
99
+ task = {"id": next_id, "params": {**base_params_for_worker_dict, 'input_image': img_np_data}, "status": "pending"}
100
+ queue_state["queue"].append(task); queue_state["next_id"] += 1; tasks_added_count += 1
101
+ if tasks_added_count > 0: gr.Info(f"Added {tasks_added_count} task(s) (start ID: {first_new_task_id}).")
102
+ else: gr.Warning("No valid tasks added.")
103
+
104
+ return state_dict_gr_state, update_queue_df_display(queue_state), gr.update(value="Add Task(s) to Queue", variant="secondary"), gr.update(visible=False)
+ 
+ def cancel_edit_mode_action(state_dict_gr_state):
+     """Cancels the current task editing session."""
+     queue_state = get_queue_state(state_dict_gr_state)
+     if queue_state.get("editing_task_id") is not None: gr.Info("Edit cancelled."); queue_state["editing_task_id"] = None
+     return state_dict_gr_state, update_queue_df_display(queue_state), gr.update(value="Add Task(s) to Queue", variant="secondary"), gr.update(visible=False)
+ 
+ def move_task_in_queue(state_dict_gr_state, direction: str, selected_indices_list: list):
+     """Moves a selected task up or down in the queue."""
+     if not selected_indices_list or not selected_indices_list[0]: return state_dict_gr_state, update_queue_df_display(get_queue_state(state_dict_gr_state))
+     idx = int(selected_indices_list[0][0]); queue_state = get_queue_state(state_dict_gr_state); queue = queue_state["queue"]
+     with shared_state.queue_lock:
+         if direction == 'up' and idx > 0: queue[idx], queue[idx-1] = queue[idx-1], queue[idx]
+         elif direction == 'down' and idx < len(queue) - 1: queue[idx], queue[idx+1] = queue[idx+1], queue[idx]
+     return state_dict_gr_state, update_queue_df_display(queue_state)
+ 
+ def remove_task_from_queue(state_dict_gr_state, selected_indices_list: list):
+     """Removes a selected task from the queue."""
+     removed_task_id = None
+     if not selected_indices_list or not selected_indices_list[0]: return state_dict_gr_state, update_queue_df_display(get_queue_state(state_dict_gr_state)), removed_task_id
+     idx = int(selected_indices_list[0][0]); queue_state = get_queue_state(state_dict_gr_state); queue = queue_state["queue"]
+     with shared_state.queue_lock:
+         if 0 <= idx < len(queue): removed_task = queue.pop(idx); removed_task_id = removed_task['id']; gr.Info(f"Removed task {removed_task_id}.")
+         else: gr.Warning("Invalid index for removal.")
+     return state_dict_gr_state, update_queue_df_display(queue_state), removed_task_id
+ 
+ def handle_queue_action_on_select(evt: gr.SelectData, state_dict_gr_state, *ui_param_controls_tuple):
+     """Handles clicks on the action buttons (↑, ↓, ✖, ✎) in the queue DataFrame."""
+     if evt.index is None or evt.value not in ["↑", "↓", "✖", "✎"]:
+         return [state_dict_gr_state, update_queue_df_display(get_queue_state(state_dict_gr_state))] + [gr.update()] * (len(shared_state.ALL_TASK_UI_KEYS) + 4)
+ 
+     row_index, col_index = evt.index; button_clicked = evt.value; queue_state = get_queue_state(state_dict_gr_state); queue = queue_state["queue"]; processing = queue_state.get("processing", False)
+     outputs_list = [state_dict_gr_state, update_queue_df_display(queue_state)] + [gr.update()] * (len(shared_state.ALL_TASK_UI_KEYS) + 4)
+ 
+     if button_clicked == "↑":
+         if processing and row_index == 0: gr.Warning("Cannot move processing task."); return outputs_list
+         new_state, new_df = move_task_in_queue(state_dict_gr_state, 'up', [[row_index, col_index]]); outputs_list[0], outputs_list[1] = new_state, new_df
+     elif button_clicked == "↓":
+         if processing and row_index == 0: gr.Warning("Cannot move processing task."); return outputs_list
+         if processing and row_index == 1: gr.Warning("Cannot move below processing task."); return outputs_list
+         new_state, new_df = move_task_in_queue(state_dict_gr_state, 'down', [[row_index, col_index]]); outputs_list[0], outputs_list[1] = new_state, new_df
+     elif button_clicked == "✖":
+         if processing and row_index == 0: gr.Warning("Cannot remove processing task."); return outputs_list
+         new_state, new_df, removed_id = remove_task_from_queue(state_dict_gr_state, [[row_index, col_index]]); outputs_list[0], outputs_list[1] = new_state, new_df
+         if removed_id is not None and queue_state.get("editing_task_id", None) == removed_id:
+             queue_state["editing_task_id"] = None
+             outputs_list[2 + 1 + len(shared_state.ALL_TASK_UI_KEYS)] = gr.update(value="Add Task(s) to Queue", variant="secondary")
+             outputs_list[2 + 1 + len(shared_state.ALL_TASK_UI_KEYS) + 1] = gr.update(visible=False)
+     elif button_clicked == "✎":
+         if processing and row_index == 0: gr.Warning("Cannot edit processing task."); return outputs_list
+         if 0 <= row_index < len(queue):
+             task_to_edit = queue[row_index]; task_id_to_edit = task_to_edit['id']; params_to_load_to_ui = task_to_edit['params']
+             queue_state["editing_task_id"] = task_id_to_edit; gr.Info(f"Editing Task {task_id_to_edit}.")
+             img_np_from_task = params_to_load_to_ui.get('input_image')
+             outputs_list[2] = gr.update(value=[(Image.fromarray(img_np_from_task), "loaded_image")]) if isinstance(img_np_from_task, np.ndarray) else gr.update(value=None)
+             for i, ui_key in enumerate(shared_state.ALL_TASK_UI_KEYS):
+                 worker_key = shared_state.UI_TO_WORKER_PARAM_MAP.get(ui_key)
+                 if worker_key in params_to_load_to_ui:
+                     value_from_task = params_to_load_to_ui[worker_key]
+                     outputs_list[3 + i] = gr.update(value="Linear" if value_from_task else "Off") if ui_key == 'gs_schedule_shape_ui' else gr.update(value=value_from_task)
+             outputs_list[2 + 1 + len(shared_state.ALL_TASK_UI_KEYS)] = gr.update(value="Update Task", variant="secondary")
+             outputs_list[2 + 1 + len(shared_state.ALL_TASK_UI_KEYS) + 1] = gr.update(visible=True)
+         else: gr.Warning("Invalid index for edit.")
+     return outputs_list
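The index arithmetic in the handler above (`outputs_list[2 + 1 + len(shared_state.ALL_TASK_UI_KEYS)]` and friends) only works because `outputs_list` follows a fixed layout. A sketch of that inferred layout, for readers tracing the offsets (not an official spec, just what the indices imply):

```python
# Inferred layout of outputs_list, derived from the indices used above:
#   [0]                  application state dict
#   [1]                  queue DataFrame update
#   [2]                  input-image gallery update
#   [3 : 3 + n_keys]     one gr.update() per control in ALL_TASK_UI_KEYS
#   [3 + n_keys]         add/update button label and variant
#   [3 + n_keys + 1]     cancel-edit button visibility
n_keys = 18                        # placeholder for len(shared_state.ALL_TASK_UI_KEYS)
add_button_slot = 2 + 1 + n_keys   # the expression used in the code above
cancel_button_slot = add_button_slot + 1
```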
+ 
+ def clear_task_queue_action(state_dict_gr_state):
+     """Clears all non-processing tasks from the queue."""
+     queue_state = get_queue_state(state_dict_gr_state); queue = queue_state["queue"]; processing = queue_state["processing"]; cleared_count = 0
+     with shared_state.queue_lock:
+         if processing:
+             if len(queue) > 1: cleared_count = len(queue) - 1; queue_state["queue"] = [queue[0]]; gr.Info(f"Cleared {cleared_count} pending tasks.")
+             else: gr.Info("Only processing task in queue.")
+         elif queue: cleared_count = len(queue); queue.clear(); gr.Info(f"Cleared {cleared_count} tasks.")
+         else: gr.Info("Queue empty.")
+     if not processing and cleared_count > 0 and os.path.isfile(AUTOSAVE_FILENAME):
+         try: os.remove(AUTOSAVE_FILENAME); print(f"Cleared autosave: {AUTOSAVE_FILENAME}.")
+         except OSError as e: print(f"Error deleting autosave: {e}")
+     return state_dict_gr_state, update_queue_df_display(queue_state)
+ 
+ def save_queue_to_zip(state_dict_gr_state):
+     """Saves the current task queue to a zip file for download."""
+     queue_state = get_queue_state(state_dict_gr_state); queue = queue_state.get("queue", [])
+     if not queue: gr.Info("Queue is empty. Nothing to save."); return state_dict_gr_state, ""
+     zip_buffer = io.BytesIO(); saved_files_count = 0
+     try:
+         with tempfile.TemporaryDirectory() as tmpdir:
+             queue_manifest = []; image_paths_in_zip = {}
+             for task in queue:
+                 params_copy = task['params'].copy(); task_id_s = task['id']; input_image_np_data = params_copy.pop('input_image', None)
+                 manifest_entry = {"id": task_id_s, "params": params_copy, "status": task.get("status", "pending")}
+                 if input_image_np_data is not None:
+                     img_hash = hash(input_image_np_data.tobytes())
+                     if img_hash not in image_paths_in_zip:
+                         img_filename_in_zip = f"task_{task_id_s}_input.png"
+                         img_save_path = os.path.join(tmpdir, img_filename_in_zip)
+                         try: Image.fromarray(input_image_np_data).save(img_save_path, "PNG"); image_paths_in_zip[img_hash] = img_filename_in_zip; saved_files_count += 1
+                         except Exception as e: print(f"Error saving image for task {task_id_s} in zip: {e}")
+                     # Reference the de-duplicated filename: tasks sharing an image must point
+                     # at the one PNG that was actually written, not a never-saved per-task name.
+                     manifest_entry['image_ref'] = image_paths_in_zip.get(img_hash)
+                 queue_manifest.append(manifest_entry)
+             manifest_path = os.path.join(tmpdir, "queue_manifest.json")
+             with open(manifest_path, 'w', encoding='utf-8') as f: json.dump(queue_manifest, f, indent=4)
+             with zipfile.ZipFile(zip_buffer, 'w', zipfile.ZIP_DEFLATED) as zf:
+                 zf.write(manifest_path, arcname="queue_manifest.json")
+                 for img_hash, img_filename_rel in image_paths_in_zip.items(): zf.write(os.path.join(tmpdir, img_filename_rel), arcname=img_filename_rel)
+         zip_buffer.seek(0); zip_base64 = base64.b64encode(zip_buffer.getvalue()).decode('utf-8')
+         gr.Info(f"Queue with {len(queue)} tasks ({saved_files_count} images) prepared for download.")
+         return state_dict_gr_state, zip_base64
+     except Exception as e: print(f"Error creating zip for queue: {e}"); traceback.print_exc(); gr.Warning("Failed to create zip data."); return state_dict_gr_state, ""
+     finally: zip_buffer.close()
+ 
+ def load_queue_from_zip(state_dict_gr_state, uploaded_zip_file_obj):
+     """Loads a task queue from an uploaded zip file."""
+     if not uploaded_zip_file_obj or not hasattr(uploaded_zip_file_obj, 'name') or not Path(uploaded_zip_file_obj.name).is_file(): gr.Warning("No valid file selected."); return state_dict_gr_state, update_queue_df_display(get_queue_state(state_dict_gr_state))
+     queue_state = get_queue_state(state_dict_gr_state); newly_loaded_queue = []; max_id_in_file = 0; loaded_image_count = 0; error_messages = []
+     try:
+         with tempfile.TemporaryDirectory() as tmpdir_extract:
+             with zipfile.ZipFile(uploaded_zip_file_obj.name, 'r') as zf:
+                 if "queue_manifest.json" not in zf.namelist(): raise ValueError("queue_manifest.json not found in zip")
+                 zf.extractall(tmpdir_extract)
+             manifest_path = os.path.join(tmpdir_extract, "queue_manifest.json")
+             with open(manifest_path, 'r', encoding='utf-8') as f: loaded_manifest = json.load(f)
+ 
+             for task_data in loaded_manifest:
+                 params_from_manifest = task_data.get('params', {}); task_id_loaded = task_data.get('id', 0); max_id_in_file = max(max_id_in_file, task_id_loaded)
+                 image_ref_from_manifest = task_data.get('image_ref'); input_image_np_data = None
+                 if image_ref_from_manifest:
+                     img_path_in_extract = os.path.join(tmpdir_extract, image_ref_from_manifest)
+                     if os.path.exists(img_path_in_extract):
+                         try: input_image_np_data = np.array(Image.open(img_path_in_extract)); loaded_image_count += 1
+                         except Exception as img_e: error_messages.append(f"Err loading img for task {task_id_loaded}: {img_e}")
+                     else: error_messages.append(f"Missing img file for task {task_id_loaded}: {image_ref_from_manifest}")
+                 runtime_task = {"id": task_id_loaded, "params": {**params_from_manifest, 'input_image': input_image_np_data}, "status": "pending"}
+                 newly_loaded_queue.append(runtime_task)
+         with shared_state.queue_lock: queue_state["queue"] = newly_loaded_queue; queue_state["next_id"] = max(max_id_in_file + 1, queue_state.get("next_id", 1))
+         gr.Info(f"Loaded {len(newly_loaded_queue)} tasks ({loaded_image_count} images).")
+         if error_messages: gr.Warning(" ".join(error_messages))
+     except Exception as e: print(f"Error loading queue: {e}"); traceback.print_exc(); gr.Warning(f"Failed to load queue: {str(e)[:200]}")
+     finally:
+         if uploaded_zip_file_obj and hasattr(uploaded_zip_file_obj, 'name') and uploaded_zip_file_obj.name and tempfile.gettempdir() in os.path.abspath(uploaded_zip_file_obj.name):
+             try: os.remove(uploaded_zip_file_obj.name)
+             except OSError: pass
+     return state_dict_gr_state, update_queue_df_display(queue_state)
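Putting the two functions together: the `queue_manifest.json` inside a saved zip looks roughly like the sketch below. Values are hypothetical; `input_image` is popped from `params` before serialization and stored as a sibling PNG referenced by `image_ref`:

```python
# Hypothetical manifest contents for a one-task queue.
example_manifest = [
    {
        "id": 1,
        "status": "pending",
        "params": {"prompt": "a calm ocean at dusk", "seed": 12345},  # image removed
        "image_ref": "task_1_input.png",  # PNG stored next to the manifest in the zip
    },
]
```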
+ 
+ def autosave_queue_on_exit_action(state_dict_gr_state_ref):
+     """Saves the queue to a zip file on application exit."""
+     print("Attempting to autosave queue on exit...")
+     queue_state = get_queue_state(state_dict_gr_state_ref)
+     if not queue_state.get("queue"): print("Autosave: Queue is empty."); return
+     try:
+         _dummy_state_ignored, zip_b64_for_save = save_queue_to_zip(state_dict_gr_state_ref)
+         if zip_b64_for_save:
+             with open(AUTOSAVE_FILENAME, "wb") as f: f.write(base64.b64decode(zip_b64_for_save))
+             print(f"Autosave successful: Queue saved to {AUTOSAVE_FILENAME}")
+         else: print("Autosave failed: Could not generate zip data.")
+     except Exception as e: print(f"Error during autosave: {e}"); traceback.print_exc()
+ 
+ def autoload_queue_on_start_action(state_dict_gr_state):
+     """Loads a previously autosaved queue when the application starts."""
+     queue_state = get_queue_state(state_dict_gr_state)
+     df_update = update_queue_df_display(queue_state)
+ 
+     if not queue_state["queue"] and Path(AUTOSAVE_FILENAME).is_file():
+         print(f"Autoloading queue from {AUTOSAVE_FILENAME}...")
+         class MockFilepath:
+             def __init__(self, name): self.name = name
+ 
+         temp_state_for_load = {"queue_state": queue_state.copy()}
+         loaded_state_result, df_update_after_load = load_queue_from_zip(temp_state_for_load, MockFilepath(AUTOSAVE_FILENAME))
+ 
+         if loaded_state_result["queue_state"]["queue"]:
+             queue_state.update(loaded_state_result["queue_state"])
+             df_update = df_update_after_load
+             print(f"Autoload successful. Loaded {len(queue_state['queue'])} tasks.")
+             try:
+                 os.remove(AUTOSAVE_FILENAME)
+                 print(f"Removed autosave file: {AUTOSAVE_FILENAME}")
+             except OSError as e:
+                 print(f"Error removing autosave file '{AUTOSAVE_FILENAME}': {e}")
+         else:
+             print("Autoload: File existed but queue remains empty. Resetting queue.")
+             queue_state["queue"] = []
+             queue_state["next_id"] = 1
+             df_update = update_queue_df_display(queue_state)
+ 
+     is_processing_on_load = queue_state.get("processing", False) and bool(queue_state.get("queue"))
+     initial_video_path = state_dict_gr_state.get("last_completed_video_path")
+     if initial_video_path and not os.path.exists(initial_video_path):
+         print(f"Warning: Last completed video file not found at {initial_video_path}. Clearing reference.")
+         initial_video_path = None
+         state_dict_gr_state["last_completed_video_path"] = None
+ 
+     return (state_dict_gr_state, df_update, gr.update(interactive=not is_processing_on_load), gr.update(interactive=is_processing_on_load), gr.update(value=initial_video_path))
+ 
+ def process_task_queue_main_loop(state_dict_gr_state):
+     """The main loop that processes tasks from the queue one by one."""
+     queue_state = get_queue_state(state_dict_gr_state)
+     shared_state.abort_event.clear()
+ 
+     output_stream_for_ui = state_dict_gr_state.get("active_output_stream_queue")
+ 
+     if queue_state["processing"]:
+         gr.Info("Queue processing is already active. Attempting to re-attach UI to live updates...")
+         if output_stream_for_ui is None:
+             gr.Warning("No active stream found in state. Queue processing may have been interrupted. Please clear queue or restart."); queue_state["processing"] = False
+             yield (state_dict_gr_state, update_queue_df_display(queue_state), gr.update(), gr.update(), gr.update(), gr.update(), gr.update(interactive=True), gr.update(interactive=False), gr.update(interactive=True)); return
+ 
+         # --- MODIFIED: Initial yield for re-attachment path ---
+         # Provide placeholder updates to progress/preview UI elements
+         yield (
+             state_dict_gr_state,
+             update_queue_df_display(queue_state),
+             gr.update(value=state_dict_gr_state.get("last_completed_video_path", None)),  # Keep last video if available
+             gr.update(visible=True, value=None),  # Make preview visible but start blank until new data
+             gr.update(value=f"Re-attaching to processing Task {queue_state['queue'][0]['id']}... Awaiting next preview."),
+             gr.update(value="<div style='text-align: center;'>Re-connecting...</div>"),  # Generic HTML for progress bar
+             gr.update(interactive=False),  # Process Queue button
+             gr.update(interactive=True),  # Abort button
+             gr.update(interactive=True)  # Reset button
+         )
+     elif not queue_state["queue"]:
+         gr.Info("Queue is empty. Add tasks to process.")
+         yield (state_dict_gr_state, update_queue_df_display(queue_state), gr.update(), gr.update(), gr.update(), gr.update(), gr.update(interactive=True), gr.update(interactive=False), gr.update(interactive=True)); return
+     else:
+         queue_state["processing"] = True
+         output_stream_for_ui = AsyncStream()
+         state_dict_gr_state["active_output_stream_queue"] = output_stream_for_ui
+         yield (state_dict_gr_state, update_queue_df_display(queue_state), gr.update(), gr.update(visible=False), gr.update(value="Queue processing started..."), gr.update(value=""), gr.update(interactive=False), gr.update(interactive=True), gr.update(interactive=True))
+ 
+     actual_output_queue = output_stream_for_ui.output_queue if output_stream_for_ui else None
+     if not actual_output_queue:
+         gr.Warning("Internal error: Output queue not available. Aborting."); queue_state["processing"] = False; state_dict_gr_state["active_output_stream_queue"] = None
+         yield (state_dict_gr_state, update_queue_df_display(queue_state), gr.update(), gr.update(), gr.update(), gr.update(), gr.update(interactive=True), gr.update(interactive=False), gr.update(interactive=True)); return
+ 
+     while queue_state["queue"] and not shared_state.abort_event.is_set():
+         with shared_state.queue_lock:
+             if not queue_state["queue"]: break
+             current_task_obj = queue_state["queue"][0]
+             task_parameters_for_worker = current_task_obj["params"]
+             current_task_id = current_task_obj["id"]
+ 
+         if task_parameters_for_worker.get('input_image') is None:
+ print(f"Skipping task {current_task_id}: Missing input image data.")
345
+ gr.Warning(f"Task {current_task_id} skipped: Input image is missing.")
346
+             with shared_state.queue_lock:
+                 current_task_obj["status"] = "error"; current_task_obj["error_message"] = "Missing Image"
+             yield (state_dict_gr_state, update_queue_df_display(queue_state), gr.update(), gr.update(visible=False), gr.update(), gr.update(), gr.update(interactive=False), gr.update(interactive=True), gr.update(interactive=True)); break
+ 
+         if task_parameters_for_worker.get('seed') == -1: task_parameters_for_worker['seed'] = np.random.randint(0, 2**32 - 1)
+ 
+         print(f"Starting task {current_task_id} (Prompt: {task_parameters_for_worker.get('prompt', '')[:30]}...).")
+         current_task_obj["status"] = "processing"
+         yield (state_dict_gr_state, update_queue_df_display(queue_state), gr.update(), gr.update(visible=False), gr.update(value=f"Processing Task {current_task_id}..."), gr.update(value=""), gr.update(interactive=False), gr.update(interactive=True), gr.update(interactive=True))
+ 
+         worker_args = {
+             **task_parameters_for_worker,
+             'task_id': current_task_id, 'output_queue_ref': actual_output_queue, 'abort_event': shared_state.abort_event,
+             **shared_state.models
+         }
+         async_run(worker, **worker_args)
+ 
+         last_known_output_filename = state_dict_gr_state.get("last_completed_video_path", None)
+         task_completed_successfully = False
+         while True:
+             flag, data_from_worker = actual_output_queue.next()
+             if flag == 'progress':
+                 msg_task_id, preview_np_array, desc_str, html_str = data_from_worker
+                 if msg_task_id == current_task_id: yield (state_dict_gr_state, update_queue_df_display(queue_state), gr.update(value=last_known_output_filename), gr.update(visible=(preview_np_array is not None), value=preview_np_array), desc_str, html_str, gr.update(interactive=False), gr.update(interactive=True), gr.update(interactive=True))
+             elif flag == 'file':
+                 msg_task_id, segment_file_path, segment_info = data_from_worker
+                 if msg_task_id == current_task_id: last_known_output_filename = segment_file_path; gr.Info(f"Task {current_task_id}: {segment_info}")
+                 yield (state_dict_gr_state, update_queue_df_display(queue_state), gr.update(value=last_known_output_filename), gr.update(), gr.update(), gr.update(), gr.update(interactive=False), gr.update(interactive=True), gr.update(interactive=True))
+             elif flag == 'aborted': current_task_obj["status"] = "aborted"; task_completed_successfully = False; break
+             elif flag == 'error': _, error_message_str = data_from_worker; gr.Warning(f"Task {current_task_id} Error: {error_message_str}"); current_task_obj["status"] = "error"; current_task_obj["error_message"] = str(error_message_str)[:100]; task_completed_successfully = False; break
+             elif flag == 'end': _, success_bool, final_video_path = data_from_worker; task_completed_successfully = success_bool; last_known_output_filename = final_video_path if success_bool else last_known_output_filename; current_task_obj["status"] = "done" if success_bool else "error"; break
+ 
+         with shared_state.queue_lock:
+             if queue_state["queue"] and queue_state["queue"][0]["id"] == current_task_id: queue_state["queue"].pop(0)
+         state_dict_gr_state["last_completed_video_path"] = last_known_output_filename if task_completed_successfully else None
+         final_desc = f"Task {current_task_id} {'completed' if task_completed_successfully else 'finished with issues'}."
+         yield (state_dict_gr_state, update_queue_df_display(queue_state), gr.update(value=state_dict_gr_state["last_completed_video_path"]), gr.update(visible=False), gr.update(value=final_desc), gr.update(value=""), gr.update(interactive=False), gr.update(interactive=True), gr.update(interactive=True))
+         if shared_state.abort_event.is_set(): gr.Info("Queue processing halted by user."); break
+ 
+     queue_state["processing"] = False; state_dict_gr_state["active_output_stream_queue"] = None
+     final_status_msg = "All tasks processed." if not shared_state.abort_event.is_set() else "Queue processing aborted."
+     yield (state_dict_gr_state, update_queue_df_display(queue_state), gr.update(value=state_dict_gr_state["last_completed_video_path"]), gr.update(visible=False), gr.update(value=final_status_msg), gr.update(value=""), gr.update(interactive=True), gr.update(interactive=False), gr.update(interactive=True))
+ 
+ def abort_current_task_processing_action(state_dict_gr_state):
+     """Sends the abort signal to the currently processing task."""
+     queue_state = get_queue_state(state_dict_gr_state)
+     if queue_state["processing"]:
+         gr.Info("Abort signal sent. Current task will attempt to stop shortly.")
+         shared_state.abort_event.set()
+     else:
+         gr.Info("Nothing is currently processing.")
+     return state_dict_gr_state, gr.update(interactive=not queue_state["processing"])
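The main loop above drains `(flag, data)` tuples from the worker's output queue. Inferred from the unpacking code, the message protocol looks like this; a sketch only, and the `emit_progress` helper is hypothetical, assuming the queue exposes a `push()` counterpart to `next()`:

```python
# Message shapes the UI loop expects, as unpacked above:
#   'progress' -> (task_id, preview_np_array_or_None, desc_str, progress_html)
#   'file'     -> (task_id, segment_file_path, segment_info)
#   'error'    -> (task_id, error_message_str)
#   'end'      -> (task_id, success_bool, final_video_path)
#   'aborted'  -> payload ignored by the loop
def emit_progress(output_queue, task_id, preview, desc, html):
    """Hypothetical worker-side helper publishing one progress update."""
    output_queue.push(('progress', (task_id, preview, desc, html)))
```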
ui/shared_state.py ADDED
@@ -0,0 +1,60 @@
+ # ui/shared_state.py
+ import threading
+ 
+ # --- Application-wide Threading Controls ---
+ # Lock for thread-safe queue modifications.
+ queue_lock = threading.Lock()
+ # Event to signal the abortion of the current processing task.
+ abort_event = threading.Event()
+ 
+ # --- Model and Global State Containers ---
+ # This dictionary will be populated at runtime by the main script after the models are loaded.
+ # It allows other modules to access the models without circular imports.
+ models = {}
+ 
+ # This dictionary holds the application state, which is passed to the atexit
+ # handler to enable the autosave functionality on browser close or exit.
+ global_state_for_autosave = {}
+ 
+ 
+ # --- UI and Parameter Mapping Constants ---
+ # These constants define the structure of the UI and how UI components
+ # map to the parameters of the backend generation worker.
+ 
+ # Creative "Recipe" Parameters (for portable PNG metadata and task editing)
+ CREATIVE_PARAM_KEYS = [
+     'prompt', 'n_prompt', 'total_second_length', 'seed', 'preview_frequency_ui',
+     'segments_to_decode_csv', 'gs_ui', 'gs_schedule_shape_ui', 'gs_final_ui', 'steps', 'cfg', 'rs'
+ ]
+ 
+ # Environment/Debug Parameters (for the full workspace, machine/session-specific)
+ ENVIRONMENT_PARAM_KEYS = [
+     'use_teacache', 'use_fp32_transformer_output_ui', 'gpu_memory_preservation',
+     'mp4_crf', 'output_folder_ui', 'latent_window_size'
+ ]
+ 
+ # A comprehensive list of all UI components that define a task's parameters.
+ ALL_TASK_UI_KEYS = CREATIVE_PARAM_KEYS + ENVIRONMENT_PARAM_KEYS
+ 
+ # This maps the string keys of the Gradio UI components to the keyword argument
+ # names expected by the 'worker' function in generation_core.py.
+ UI_TO_WORKER_PARAM_MAP = {
+     'prompt': 'prompt',
+     'n_prompt': 'n_prompt',
+     'total_second_length': 'total_second_length',
+     'seed': 'seed',
+     'use_teacache': 'use_teacache',
+     'preview_frequency_ui': 'preview_frequency',
+     'segments_to_decode_csv': 'segments_to_decode_csv',
+     'gs_ui': 'gs',
+     'gs_schedule_shape_ui': 'gs_schedule_active',
+     'gs_final_ui': 'gs_final',
+     'steps': 'steps',
+     'cfg': 'cfg',
+     'latent_window_size': 'latent_window_size',
+     'gpu_memory_preservation': 'gpu_memory_preservation',
+     'use_fp32_transformer_output_ui': 'use_fp32_transformer_output',
+     'rs': 'rs',
+     'mp4_crf': 'mp4_crf',
+     'output_folder_ui': 'output_folder'
+ }
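A minimal sketch of how a handler can use this map to turn a tuple of raw UI values into worker keyword arguments. The helper name is hypothetical; in the queue handlers the same translation happens inline:

```python
from ui import shared_state

def ui_values_to_worker_params(ui_values_tuple):
    """Zip UI values with ALL_TASK_UI_KEYS, then rename each key via
    UI_TO_WORKER_PARAM_MAP so the dict matches the worker's kwargs."""
    ui_params = dict(zip(shared_state.ALL_TASK_UI_KEYS, ui_values_tuple))
    return {
        shared_state.UI_TO_WORKER_PARAM_MAP[ui_key]: value
        for ui_key, value in ui_params.items()
        if ui_key in shared_state.UI_TO_WORKER_PARAM_MAP
    }
```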
ui/workspace.py ADDED
@@ -0,0 +1,206 @@
+ # ui/workspace.py
+ # Contains functions for saving, loading, and managing workspace settings from files.
+ 
+ import gradio as gr
+ import json
+ import os
+ import traceback
+ import tkinter as tk
+ from tkinter import filedialog
+ from PIL import Image
+ 
+ # Import shared state and other managers
+ from . import shared_state
+ from . import metadata as metadata_manager
+ 
+ # --- Constants ---
+ # Define filenames and the default output folder at the module level.
+ # These were moved from demo_gradio_svc.py
+ outputs_folder = './outputs_svc/'
+ SETTINGS_FILENAME = "goan_settings.json"
+ UNLOAD_SAVE_FILENAME = "goan_unload_save.json"
+ REFRESH_IMAGE_FILENAME = "goan_refresh_image.png"
+ 
+ # --- Core Save/Load Logic ---
+ 
+ def get_default_values_map():
+     """Returns a dictionary with the default values for all UI settings."""
+     return {
+         'prompt': '', 'n_prompt': '', 'total_second_length': 5.0, 'seed': -1,
+         'use_teacache': True, 'preview_frequency_ui': 5, 'segments_to_decode_csv': '',
+         'gs_ui': 10.0, 'gs_schedule_shape_ui': 'Off', 'gs_final_ui': 10.0, 'steps': 25,
+         'cfg': 1.0, 'latent_window_size': 9, 'gpu_memory_preservation': 6.0,
+         'use_fp32_transformer_output_ui': False, 'rs': 0.0, 'mp4_crf': 18,
+         'output_folder_ui': outputs_folder,
+     }
+ 
+ def save_settings_to_file(filepath, *ui_values_tuple):
+     """Saves a tuple of UI values to a specified JSON file."""
+     settings_to_save = dict(zip(shared_state.ALL_TASK_UI_KEYS, ui_values_tuple))
+     try:
+         with open(filepath, 'w', encoding='utf-8') as f:
+             json.dump(settings_to_save, f, indent=4)
+         gr.Info(f"Workspace saved to {filepath}")
+     except Exception as e:
+         gr.Warning(f"Error saving workspace: {e}")
+         traceback.print_exc()
+ 
+ def load_settings_from_file(filepath, return_updates=True):
+     """Loads settings from a JSON file and returns Gradio updates or raw values."""
+     default_values = get_default_values_map()
+     try:
+         with open(filepath, 'r', encoding='utf-8') as f:
+             loaded_settings = json.load(f)
+         gr.Info(f"Loaded workspace from {filepath}")
+     except Exception as e:
+         gr.Warning(f"Could not load workspace from {filepath}: {e}")
+         loaded_settings = {}
+ 
+     final_settings = {**default_values, **loaded_settings}
+     output_values = [final_settings.get(key, default_values.get(key)) for key in shared_state.ALL_TASK_UI_KEYS]
+ 
+     # Type correction to prevent errors when loading from JSON
+     for i, key in enumerate(shared_state.ALL_TASK_UI_KEYS):
+         try:
+             if key in ['seed', 'latent_window_size', 'steps', 'mp4_crf', 'preview_frequency_ui']:
+                 output_values[i] = int(output_values[i])
+             elif key in ['total_second_length', 'cfg', 'gs_ui', 'rs', 'gpu_memory_preservation', 'gs_final_ui']:
+                 output_values[i] = float(output_values[i])
+             elif key in ['use_teacache', 'use_fp32_transformer_output_ui']:
+                 output_values[i] = bool(output_values[i])
+         except (ValueError, TypeError):
+             output_values[i] = default_values.get(key)
+ 
+     return [gr.update(value=v) for v in output_values] if return_updates else output_values
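A round-trip sketch of the two functions above. This assumes it runs inside a live Gradio request (gr.Info/gr.Warning need an active context), and the filename is hypothetical:

```python
# Save the hardcoded defaults, then read them back as plain values.
defaults = get_default_values_map()
values = tuple(defaults[k] for k in shared_state.ALL_TASK_UI_KEYS)
save_settings_to_file("goan_workspace_example.json", *values)
raw = load_settings_from_file("goan_workspace_example.json", return_updates=False)
assert raw[shared_state.ALL_TASK_UI_KEYS.index('steps')] == 25  # int-corrected on load
```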
+ 
+ # --- NEW FUNCTION FOR INITIAL OUTPUT FOLDER LOADING ---
+ def get_initial_output_folder_from_settings():
+     """
+     Attempts to load the 'output_folder_ui' value from UNLOAD_SAVE_FILENAME or SETTINGS_FILENAME.
+     If not found or an error occurs, returns the module's default outputs_folder.
+     Expands user home directory paths (e.g., '~') if present.
+     """
+     # Use the module-level outputs_folder as the fallback default
+     default_output_folder_path = outputs_folder
+ 
+     # Prioritize the temporary unload save file, then the default settings file
+     filename_to_check = None
+     if os.path.exists(UNLOAD_SAVE_FILENAME):
+         filename_to_check = UNLOAD_SAVE_FILENAME
+     elif os.path.exists(SETTINGS_FILENAME):
+         filename_to_check = SETTINGS_FILENAME
+ 
+     if filename_to_check:
+         try:
+             with open(filename_to_check, 'r', encoding='utf-8') as f:
+                 settings = json.load(f)
+             # If 'output_folder_ui' exists in the loaded settings, use it
+             if 'output_folder_ui' in settings:
+                 # Expand user path (e.g., '~/.goan/outputs/') to its absolute form
+                 return os.path.expanduser(settings['output_folder_ui'])
+         except Exception as e:
+             # Log a warning if loading fails, but proceed with default
+             print(f"Warning: Could not load 'output_folder_ui' from {filename_to_check} for initial path setup: {e}")
+             traceback.print_exc()  # Print full traceback for deeper debugging if needed
+ 
+     # If no file found, or loading failed, or key not present, return the module's default
+     return default_output_folder_path
+ 
+ # --- UI Handler Functions ---
+ 
+ def save_workspace(*ui_values_tuple):
+     """Opens a file dialog to save the full workspace settings."""
+     root = tk.Tk()
+     root.withdraw()
+     file_path = filedialog.asksaveasfilename(
+         defaultextension=".json",
+         initialfile="goan_workspace.json",
+         filetypes=[("JSON files", "*.json")]
+     )
+     root.destroy()
+     if file_path:
+         save_settings_to_file(file_path, *ui_values_tuple)
+     else:
+         gr.Warning("Save cancelled.")
+ 
+ def save_as_default_workspace(*ui_values_tuple):
+     """Saves the current UI settings as the default startup configuration."""
+     gr.Info(f"Saving current settings as default to {SETTINGS_FILENAME}")
+     save_settings_to_file(SETTINGS_FILENAME, *ui_values_tuple)
+ 
+ def save_ui_and_image_for_refresh(*args_from_ui_controls_tuple):
+     """Saves UI state and the current image to temporary files for session recovery."""
+     gallery_list = args_from_ui_controls_tuple[0]
+     all_ui_values_tuple = args_from_ui_controls_tuple[1:]
+     full_params_map = dict(zip(shared_state.ALL_TASK_UI_KEYS, all_ui_values_tuple))
+     settings_to_save = full_params_map.copy()
+ 
+     if gallery_list and isinstance(gallery_list[0], (tuple, Image.Image)):
+         pil_image = gallery_list[0][0] if isinstance(gallery_list[0], tuple) else gallery_list[0]
+         try:
+             creative_params = {k: full_params_map.get(k) for k in shared_state.CREATIVE_PARAM_KEYS}
+             pil_image = metadata_manager.write_image_metadata(pil_image, creative_params)
+ 
+             # Use the output folder path from the UI settings
+             output_folder_path = full_params_map.get('output_folder_ui', outputs_folder)
+             refresh_image_path = os.path.join(output_folder_path, REFRESH_IMAGE_FILENAME)
+ 
+             pil_image.save(refresh_image_path)
+             settings_to_save["refresh_image_path"] = refresh_image_path
+         except Exception as e:
+             gr.Warning(f"Could not save refresh image: {e}")
+ 
+     # Save all settings to the unload file. Write the dict directly rather than
+     # routing through save_settings_to_file(): that helper zips the values against
+     # ALL_TASK_UI_KEYS, which would silently drop the extra "refresh_image_path" entry.
+     try:
+         with open(UNLOAD_SAVE_FILENAME, 'w', encoding='utf-8') as f:
+             json.dump(settings_to_save, f, indent=4)
+     except Exception as e:
+         gr.Warning(f"Could not save session state: {e}")
+ 
+ def load_workspace():
+     """Opens a file dialog to load a workspace from a JSON file."""
+     root = tk.Tk()
+     root.withdraw()
+     file_path = filedialog.askopenfilename(filetypes=[("JSON files", "*.json")])
+     root.destroy()
+     return load_settings_from_file(file_path) if file_path else [gr.update()] * len(shared_state.ALL_TASK_UI_KEYS)
+ 
+ def load_workspace_on_start():
+     """
+     Loads settings on app startup, prioritizing a temporary session file,
+     then a default file, and finally falling back to hardcoded defaults.
+     """
+     image_path_to_load = None
+     settings_file = None
+ 
+     if os.path.exists(UNLOAD_SAVE_FILENAME):
+         settings_file = UNLOAD_SAVE_FILENAME
+         try:
+             with open(settings_file, 'r') as f:
+                 settings = json.load(f)
+             # Check if the saved image path exists
+             if "refresh_image_path" in settings and os.path.exists(settings["refresh_image_path"]):
+                 image_path_to_load = settings["refresh_image_path"]
+         except Exception:
+             pass  # Ignore errors reading the temp file
+     elif os.path.exists(SETTINGS_FILENAME):
+         settings_file = SETTINGS_FILENAME
+ 
+     if settings_file:
+         print(f"Loading workspace from {settings_file}")
+         ui_updates = load_settings_from_file(settings_file)
+         if image_path_to_load:
+             gr.Info("Restoring UI state and image from previous session.")
+         if settings_file == UNLOAD_SAVE_FILENAME:
+             os.remove(UNLOAD_SAVE_FILENAME)  # Clean up temp file
+         return [image_path_to_load] + ui_updates
+ 
+     print("No workspace file found. Using default values.")
+     default_vals = get_default_values_map()
+     return [None] + [default_vals[key] for key in shared_state.ALL_TASK_UI_KEYS]
+ 
+ def load_image_from_path(image_path):
+     """Loads an image from a given path and deletes the temporary file."""
+     if image_path and os.path.exists(image_path):
+         try:
+             # The gallery component expects a list of (image, name) tuples.
+             # Force PIL to read the file fully before it is deleted below;
+             # Image.open() is lazy and would otherwise fail once the file is gone.
+             pil_image = Image.open(image_path)
+             pil_image.load()
+             return gr.update(value=[(pil_image, "refresh_image")])
+         finally:
+             os.remove(image_path)  # Clean up the temp image
+     return gr.update(value=None)
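For orientation, a sketch of how these handlers might be wired into a Gradio Blocks app at startup. Component names are placeholders, the real wiring lives in the main script, and this assumes a Gradio version where `gr.State` supports `.change()`:

```python
import gradio as gr
from ui import shared_state, workspace

with gr.Blocks() as demo:
    image_path_state = gr.State(None)  # holds the refresh-image path between events
    ui_controls = [gr.Textbox(label=key) for key in shared_state.ALL_TASK_UI_KEYS]  # placeholders
    gallery = gr.Gallery()
    # Restore saved settings (and a session image path, if any) when the page loads...
    demo.load(workspace.load_workspace_on_start, inputs=None,
              outputs=[image_path_state] + ui_controls)
    # ...then turn the restored path into an actual gallery image and delete the temp file.
    image_path_state.change(workspace.load_image_from_path,
                            inputs=image_path_state, outputs=gallery)
```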