GGUF quantized and fp8 scaled versions of LTX-Video
setup (once)
- drag ltx-video-2b-v0.9.1-q4_0.gguf (1.09GB) to > ./ComfyUI/models/diffusion_models
- drag t5xxl_fp16-q4_0.gguf (2.9GB) to > ./ComfyUI/models/text_encoders
- drag ltx-video-vae.safetensors (838MB) to > ./ComfyUI/models/vae
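The three drag-and-drop steps above can be sketched as a small helper that moves downloaded files into the expected ComfyUI subfolders (the `./ComfyUI` root and the download folder are assumptions; adjust them to your install):

```python
import shutil
from pathlib import Path

# assumed ComfyUI root; change this to your actual install location
COMFYUI = Path("./ComfyUI")

# each downloaded file mapped to its target models subfolder (per the setup steps)
PLACEMENTS = {
    "ltx-video-2b-v0.9.1-q4_0.gguf": "models/diffusion_models",
    "t5xxl_fp16-q4_0.gguf": "models/text_encoders",
    "ltx-video-vae.safetensors": "models/vae",
}

def place_models(download_dir: Path, root: Path = COMFYUI) -> None:
    """Move the downloaded model files into the ComfyUI folders listed above."""
    for name, subdir in PLACEMENTS.items():
        src = download_dir / name
        dst = root / subdir
        dst.mkdir(parents=True, exist_ok=True)  # create the folder if missing
        if src.exists():
            shutil.move(str(src), str(dst / name))
```
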
run it straight (no installation needed)
- run the .bat file in the main directory (assuming you are using the gguf-node pack below)
- drag the workflow json file (below) to > your browser
workflow
- example workflow for gguf (see demo above)
- example workflow for the original safetensors
review
- q2_k gguf is super fast but not usable; keep it for testing only
- 0.9.1_fp8_e4m3fn and 0.9.1-vae_fp8_e4m3fn are not working; but they are kept here, in case someone can figure out how to make them work
- by the way, 0.9_fp8_e4m3fn and 0.9-vae_fp8_e4m3fn work pretty well
- mix-and-match possible; you could mix up the available vae(s) with different model file(s) here; test which combination works better
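The mix-and-match suggestion above amounts to enumerating every model/vae pairing and trying each one in the workflow's loader nodes; a minimal sketch (the filenames here are illustrative, not an exact listing of this repo's files):

```python
from itertools import product

# hypothetical file lists; substitute the model and vae files you actually downloaded
models = ["ltx-video-2b-v0.9.1-q4_0.gguf", "ltx-video-0.9_fp8_e4m3fn.safetensors"]
vaes = ["ltx-video-vae.safetensors", "ltx-video-0.9-vae_fp8_e4m3fn.safetensors"]

# every model/vae pairing to test in the workflow
combos = list(product(models, vaes))
for model, vae in combos:
    print(f"model={model}  vae={vae}")
```
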
- gguf-node is available (see details here) for running the new features (the point below might not be directly related to the model)
- you are able to make your own fp8_e4m3fn scaled safetensors and/or convert them to gguf with the new node via comfyui
reference
- base model from lightricks
- comfyui from comfyanonymous
- comfyui-gguf from city96
- gguf-comfy pack
- gguf-node (pypi|repo|pack)