---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-T2V-14B
tags:
- text-to-video
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    [conor] a man fighting another man in the MMA ring, landing a right hook
    towards the starting.
  output:
    url: videos/conor_2.mp4
- text: >-
    [conor] a man sitting drinking whiskey, the bottle of whiskey says "proper
    twelve", with girls sitting around.
  output:
    url: videos/conor_1.mp4
---
# Conor McGregor LoRA for WanVideo2.1

<Gallery />
## Trigger words

You should use `conor` to trigger the video generation.
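For example (an illustrative sketch only; the scene description is made up, and the example prompts above write the trigger word in brackets as `[conor]`, while the Diffusers snippet below uses it bare):

```python
# Illustrative prompt using the trigger word; the scene itself is a placeholder.
prompt = "[conor] a man walking out to the octagon, crowd cheering, cinematic lighting"
```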
## Using with Diffusers

```bash
pip install git+https://github.com/huggingface/diffusers.git
```
```python
import torch
from diffusers.utils import export_to_video
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler

# Available models: Wan-AI/Wan2.1-T2V-14B-Diffusers, Wan-AI/Wan2.1-T2V-1.3B-Diffusers
model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"

# Keep the VAE in float32 for numerical stability; the rest of the pipeline runs in bfloat16.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)

flow_shift = 5.0  # 5.0 for 720P, 3.0 for 480P
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
pipe.to("cuda")

# Load the Conor McGregor LoRA on top of the base model.
pipe.load_lora_weights("shauray/Mcgregor_WanLora")

pipe.enable_model_cpu_offload()  # for low-VRAM environments

prompt = "conor a man walking through a bar"

output = pipe(
    prompt=prompt,
    height=480,
    width=720,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]

export_to_video(output, "output.mp4", fps=16)
```
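If the likeness comes through too strongly or too weakly, the LoRA influence can be scaled. A minimal sketch, assuming the Diffusers PEFT integration; the adapter name `conor` is arbitrary and is only needed so the adapter can be referenced in `set_adapters`:

```python
# Load the LoRA under an explicit adapter name (instead of the plain call above).
pipe.load_lora_weights("shauray/Mcgregor_WanLora", adapter_name="conor")

# Scale the LoRA contribution; 1.0 corresponds to the trained strength.
pipe.set_adapters(["conor"], adapter_weights=[0.8])
```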
## Download model

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.
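If you prefer to fetch the weights programmatically, the sketch below uses `huggingface_hub` to pull the whole repository into the local cache; the exact `.safetensors` filename is whatever is listed in the Files & versions tab:

```python
from huggingface_hub import snapshot_download

# Downloads the full repository (LoRA weights plus example videos) into the
# local Hugging Face cache and returns the snapshot path.
local_path = snapshot_download(repo_id="shauray/Mcgregor_WanLora")
print(local_path)
```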