---
base_model: THUDM/CogVideoX-5b-I2V
library_name: diffusers
license: other
instance_prompt: Realistic motion, smooth, complete, high resolution
widget: []
tags:
- image-to-video
- diffusers-training
- diffusers
- lora
- cogvideox
- cogvideox-diffusers
- template:sd-lora
---

# CogVideoX LoRA - BelGio13/cogvideoX-I2V-locobot

## Model description

These are BelGio13/cogvideoX-I2V-locobot LoRA weights for THUDM/CogVideoX-5b-I2V.

The weights were trained using the [CogVideoX Diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/cogvideo/train_cogvideox_image_to_video_lora.py).

Was LoRA for the text encoder enabled? No.

## Download model

[Download the *.safetensors LoRA](https://huggingface.co/BelGio13/cogvideoX-I2V-locobot/tree/main) in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import load_image, export_to_video

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "BelGio13/cogvideoX-I2V-locobot",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="cogvideox-i2v-lora",
)

# The LoRA adapter scale depends on the values used during training.
# Here we assume `--lora_alpha` was 32 and `--rank` was 64.
# The scale can be set lower or higher than the training value to weaken or
# amplify the LoRA's effect, up to a tolerance beyond which you may see no
# effect at all, or numerical overflow.
pipe.set_adapters(["cogvideox-i2v-lora"], [32 / 64])

image = load_image("/path/to/image")
video = pipe(image=image, prompt="", guidance_scale=6, use_dynamic_cfg=True).frames[0]
export_to_video(video, "output.mp4", fps=8)
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## License

Please adhere to the licensing terms as described [here](https://huggingface.co/THUDM/CogVideoX-5b-I2V/blob/main/LICENSE).

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
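The `32 / 64` scale passed to `set_adapters` in the usage example is the standard LoRA ratio `lora_alpha / rank`. As a minimal, self-contained sketch of how that scale modulates the low-rank update (toy NumPy matrices with hypothetical dimensions, not CogVideoX's actual layers):

```python
import numpy as np

# Toy illustration of LoRA scaling: the effective weight is
#   W' = W + (lora_alpha / rank) * B @ A
# Dimensions here are hypothetical; real CogVideoX layers are far larger.
rng = np.random.default_rng(0)
d_out, d_in = 128, 128
rank, lora_alpha = 64, 32  # the values assumed in the usage example above

W = rng.standard_normal((d_out, d_in))         # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01   # trained down-projection
B = rng.standard_normal((d_out, rank)) * 0.01  # trained up-projection

scale = lora_alpha / rank  # 32 / 64 = 0.5
W_adapted = W + scale * (B @ A)

# Raising or lowering `scale` strengthens or weakens the adapter's effect,
# which is what passing a different weight to `set_adapters` does at inference.
print(scale)  # 0.5
```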