VLADIMIR LENIN Identity Low Rank Adaptor (LoRA)
For the Wan2.1 14B T2V & I2V Base Models
Historical Revolutionaries & Poets LoRAs Series
|||||| By SilverAgePoets.com ||||||
About this LoRA
This is a Rank 32 LoRA for the Wan2.1 14B video generation model.
It is intended to reproduce and animate the historic likeness of the immortal guide and liberator of the international proletariat:
Vladimir Ilyich Lenin!
This adaptor was fine-tuned on over 30 colorized photographs of Vladimir Lenin taken between 1918 & 1923. Some of the colorizations were done by Klimbim, others by us.
For Wan2.1 1.3B versions of our Lenin LoRA, see: VERSION 1 (Rank 64) and VERSION 2 (Rank 128)
For a Text to Image variant of this LoRA, check out our SchnelLenin model for Flux.1 Schnell.
It can be used with diffusers or ComfyUI, and can be loaded against both the text-to-video and image-to-video Wan2.1 base models. It was trained on Replicate using AI Toolkit: https://replicate.com/ostris/wan-lora-trainer/train
Trigger words
You should use the trigger phrase "movie clip of LEN Vladimir Lenin, the Bolshevik, close up, photorealistic, short patchy goatee and mustache, head bald on top, in a suit and vest", etc., to awaken and summon the face of the Revolution out of his mausoleum hideout!
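For instance, a fuller prompt can simply append scene details to the trigger phrase, as in this minimal sketch (the scene description after the trigger text is an illustrative assumption, not a trained caption):

trigger = "movie clip of LEN Vladimir Lenin, the Bolshevik, close up, photorealistic, short patchy goatee and mustache, head bald on top, in a suit and vest"
# Illustrative scene description appended to the trigger phrase (an assumption, not a trained caption)
prompt = trigger + ", addressing a crowd from a wooden tribune, grainy archival film look"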
Use this LoRA
Replicate has a collection of Wan2.1 models that are optimised for speed and cost. They can also be used with this LoRA:
Run this LoRA with an API using Replicate
import replicate

# LoRA weights hosted on Hugging Face; the trigger phrase goes in the prompt
input = {
    "prompt": "LEN Vladimir Lenin",
    "model": "14b",  # select the 14B variant of Wan2.1
    "lora_url": "https://huggingface.co/alekseycalvin/lenin_wan14b_t2v_lora/resolve/main/wan2.1-14b-len-vladimir-lenin-lora.safetensors"
}

output = replicate.run(
    "fofr/wan2.1-with-lora:f83b84064136a38415a3aff66c326f94c66859b8ad7a2cb432e2822774f07b08",
    input=input
)

# Save each returned video to disk
for index, item in enumerate(output):
    with open(f"output_{index}.mp4", "wb") as file:
        file.write(item.read())
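To run this snippet you also need the replicate Python package installed (pip install replicate) and a Replicate API token exported as the REPLICATE_API_TOKEN environment variable, which the client reads automatically.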
Using with Diffusers
pip install git+https://github.com/huggingface/diffusers.git
import torch
from diffusers.utils import export_to_video
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler

model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"

# The Wan VAE is kept in float32 for stability; the rest of the pipeline runs in bfloat16
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)

flow_shift = 3.0  # 5.0 for 720P, 3.0 for 480P
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
pipe.to("cuda")

# Load this LoRA on top of the base T2V pipeline
pipe.load_lora_weights("alekseycalvin/lenin_wan14b_t2v_lora")
pipe.enable_model_cpu_offload()  # for low-VRAM environments

prompt = "LEN Vladimir Lenin"
negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"

output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]

export_to_video(output, "output.mp4", fps=16)
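Since this LoRA can also be loaded against the image-to-video Wan2.1 model, here is a minimal Diffusers sketch for that path. It assumes the Wan-AI/Wan2.1-I2V-14B-480P-Diffusers base checkpoint and a local reference image named lenin_reference.png; both are illustrative choices, not part of this card.

import torch
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import CLIPVisionModel

# Assumed 480P I2V base checkpoint; not specified on this card
model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
image_encoder = CLIPVisionModel.from_pretrained(model_id, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(
    model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("alekseycalvin/lenin_wan14b_t2v_lora")
pipe.enable_model_cpu_offload()

# A hypothetical local still of Lenin to animate, resized to the 480P aspect used above
image = load_image("lenin_reference.png").resize((832, 480))
prompt = "movie clip of LEN Vladimir Lenin, the Bolshevik, close up, photorealistic, short patchy goatee and mustache, head bald on top, in a suit and vest"

output = pipe(
    image=image,
    prompt=prompt,
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "i2v_output.mp4", fps=16)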
Training details
- Steps: 750
- Learning rate: 0.0002
- LoRA rank: 32
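As a rough illustration of how these settings map onto the Replicate trainer linked above, a comparable run might be launched as in the sketch below; the input parameter names (steps, lora_rank, learning_rate, trigger_word, input_images), the version placeholder, and the dataset URL are assumptions based on this card's training details, not a verified schema.

import replicate

# Hypothetical sketch of a comparable training run on Replicate's wan-lora-trainer.
# Parameter names and the <version-id> placeholder are assumptions, not a verified schema.
training = replicate.trainings.create(
    version="ostris/wan-lora-trainer:<version-id>",
    destination="your-username/your-wan-lora",
    input={
        "steps": 750,             # matches the step count reported above
        "lora_rank": 32,          # matches the LoRA rank reported above
        "learning_rate": 0.0002,  # matches the learning rate reported above
        "trigger_word": "LEN",    # assumed from the trigger phrase on this card
        "input_images": "https://example.com/lenin_dataset.zip",  # hypothetical dataset archive
    },
)
print(training.status)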
Contribute your own examples
You can use the community tab to add videos that show off what you’ve made with this LoRA.
Model tree for AlekseyCalvin/VladimirLENIN_Wan2.1_14B_T2V_LoRA_bySilverAgePoets
Base model: Wan-AI/Wan2.1-T2V-14B-Diffusers