HunyuanVideo depth control LoRAs in diffusers format. They are experimental and do not work as expected: inference is overly sensitive to the LoRA, swinging between zero influence and too much, with no usable middle ground.
Trained with: https://github.com/jquintanilla4/HunyuanVideo-Training/blob/depth-control/train_hunyuan_lora.py
Inference/testing script: https://github.com/jquintanilla4/HunyuanVideo-Training/blob/depth-control/test_hunyuan_control_lora.py
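For orientation, a minimal sketch of loading these LoRA weights into the stock diffusers HunyuanVideo pipeline. The adapter name and scale here are illustrative assumptions, and the stock pipeline does not take a depth video as input; the actual depth conditioning is handled by the test script linked above.

```python
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel

model_id = "hunyuanvideo-community/HunyuanVideo"  # diffusers-format base weights

transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.float16
)
pipe.to("cuda")

# Load the depth-control LoRA; adapter name is illustrative.
pipe.load_lora_weights("jqlive/hyv_depth_control", adapter_name="depth")
# Given the sensitivity noted above, this scale tends to behave all-or-nothing.
pipe.set_adapters(["depth"], adapter_weights=[0.8])
```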
You will need the Depth Anything V2 model to run both the training and testing scripts.
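A minimal sketch of extracting per-frame depth maps with Depth Anything V2 through the transformers depth-estimation pipeline. The checkpoint size and file paths are assumptions; the scripts above may load the model differently.

```python
from pathlib import Path

from PIL import Image
from transformers import pipeline

# Small checkpoint chosen for speed; the scripts may expect a different size.
depth = pipeline("depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf")

out_dir = Path("depth_maps")  # assumed output layout: one depth PNG per frame
out_dir.mkdir(exist_ok=True)

for frame_path in sorted(Path("frames").glob("*.png")):
    result = depth(Image.open(frame_path))
    # result["depth"] is a PIL image of the predicted depth map.
    result["depth"].save(out_dir / frame_path.name)
```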
The last training run used a small 14K-sample dataset (10K train, 2K test, 2K val) for 10K steps, with the following hyperparameters (an example launch command follows the list):
- learning_rate 5e-5
- lora_rank 128
- lora_alpha 128
- timestep_shift 5
- assert_steps 100
- input_lr_scale 5.0
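A hypothetical launch reconstructed from the hyperparameters above. The flag names assume the script's argument parser mirrors the list; check train_hunyuan_lora.py for the real interface.

```python
import subprocess

# Assumed argparse flags matching the hyperparameter names listed above.
subprocess.run([
    "python", "train_hunyuan_lora.py",
    "--learning_rate", "5e-5",
    "--lora_rank", "128",
    "--lora_alpha", "128",
    "--timestep_shift", "5",
    "--assert_steps", "100",
    "--input_lr_scale", "5.0",
], check=True)
```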
Old versions have been deleted; they did not work at all.
Base model: tencent/HunyuanVideo