How can I use city96/LTX-Video-gguf model to generate Image to Video ?

by zakariyamansuri1

I'm new to working with LLMs and quantized models, so I have a bit of a beginner question. Could someone share reference code for generating videos from images? If possible, I'd appreciate a generalized solution that can work with any quantized or fine-tuned model. Additionally, it would be great if you could provide resources to learn more about this topic.

[attached image: OIP.jpg]

@bethovent can you tell me how I can generate a video from the above image using city96/LTX-Video-gguf?
Please share the code so I can run it on my system.

@zakariyamansuri1 This page has most of the info for inference via diffusers, including example scripts: https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video
You'd want to combine the GGUF loading example at the very top with the "LTXImageToVideoPipeline" (img2vid) section further down the page, by passing transformer=transformer to the pipeline.

For ComfyUI, you can just use the workflow from the ComfyUI examples page, replacing the UnetLoader node with the GGUF equivalent from the ComfyUI-GGUF custom node pack.
