---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-T2V-14B
tags:
- text-to-video
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    [malone] a man with a beard and mustache, wearing a dark green baseball cap,
    singing and dancing, with a few blurred lights visible.
  output:
    url: videos/1742519489362__000000750_0.webp
- text: >-
    [malone] a man singing while holding a cigarette in his hand and dancing.
  output:
    url: videos/1742520256525__000000750_1.webp
- text: >-
    [malone] a man with a beard and mustache, wearing a dark green baseball cap,
    singing and dancing, with a few blurred lights visible.
  output:
    url: videos/1742522116718__000001000_0.mp4
- text: >-
    [malone] a man singing while holding a cigarette in his hand and dancing.
  output:
    url: videos/1742522882787__000001000_1.mp4
- text: >-
    [malone] a man with a beard and mustache, wearing a dark green baseball cap,
    singing and dancing, with a few blurred lights visible.
  output:
    url: videos/1742524740417__000001250_0.webp
- text: >-
    [malone] a man singing while holding a cigarette in his hand and dancing.
  output:
    url: videos/1742525503397__000001250_1.webp
---

# Post Malone LoRA for WanVideo2.1

- The first two videos were generated at 750 training steps.
- The middle two videos were generated at 1,000 training steps; these are the weights available for download.
- The last two videos were generated at 1,250 training steps.

## Trigger words

You should use `malone` in your prompt to trigger the video generation.
## Using with Diffusers

```shell
pip install git+https://github.com/huggingface/diffusers.git
```

```py
import torch
from diffusers.utils import export_to_video
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler

# Available models: Wan-AI/Wan2.1-T2V-14B-Diffusers, Wan-AI/Wan2.1-T2V-1.3B-Diffusers
model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"

# The VAE is kept in float32 for numerical stability; the rest of the pipeline runs in bfloat16.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)

flow_shift = 5.0  # 5.0 for 720P, 3.0 for 480P
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
pipe.to("cuda")

pipe.load_lora_weights("shauray/PostMalone_WanLora")
pipe.enable_model_cpu_offload()  # for low-VRAM environments

prompt = "malone a man walking through texas"

output = pipe(
    prompt=prompt,
    height=480,
    width=720,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```

## Download model

Weights for this model are available in Safetensors format.

[Download](/shauray/PostMalone_WanLora/tree/main) them from the Files & versions tab.
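If you prefer to fetch the weights programmatically rather than through the web UI, a minimal sketch using `huggingface_hub` (assuming the library is installed; the repo id is the one above) is:

```python
from huggingface_hub import snapshot_download

# Download the whole LoRA repository (safetensors weights plus preview
# videos) into the local Hugging Face cache and return its path.
local_dir = snapshot_download(repo_id="shauray/PostMalone_WanLora")
print(local_dir)
```

The returned path can then be passed directly to `pipe.load_lora_weights(local_dir)` in place of the repo id, which is useful in offline or air-gapped setups after a one-time download.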