simpletuner
This is a LyCORIS adapter derived from Lightricks/LTX-Video.
The main validation prompt used during training was:
A photo-realistic image of a cat sitting in a field of lavender flowers. The cat is looking at the viewer.
Validation settings
- CFG: 4.2
- CFG Rescale: 0.0
- Steps: 5
- Sampler: FlowMatchEulerDiscreteScheduler
- Seed: 42
- Resolution: 384x256
Note: The validation settings are not necessarily the same as the training settings.
You can find some example images in the following gallery:

- Prompt: unconditional (blank prompt)
  Negative Prompt: blurry, cropped, ugly

- Prompt: A photo-realistic image of a cat sitting in a field of lavender flowers. The cat is looking at the viewer.
  Negative Prompt: blurry, cropped, ugly
The text encoder was not trained. You may reuse the base model text encoder for inference.
Training settings
- Training epochs: 0
- Training steps: 60
- Learning rate: 8e-05
- Learning rate schedule: constant
- Warmup steps: 0
- Max grad value: 2.0
- Effective batch size: 4
- Micro-batch size: 4
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow-matching (extra parameters=['training_scheduler_timestep_spacing=trailing', 'inference_scheduler_timestep_spacing=trailing']); see the scheduler sketch after this list
- Optimizer: optimi-lion
- Trainable parameter precision: Pure BF16
- Base model precision: no_change
- Caption dropout probability: 10.0%
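The trailing timestep spacing noted above can be applied at inference by rebuilding the pipeline's scheduler. A minimal sketch, assuming a `pipeline` loaded as in the Inference section below; whether your diffusers version's flow-matching scheduler honours `timestep_spacing` is an assumption (unknown config keys are ignored with a warning):

```python
from diffusers import FlowMatchEulerDiscreteScheduler

# Rebuild the validation sampler from the pipeline's own config.
# ASSUMPTION: timestep_spacing="trailing" mirrors the training flags above;
# schedulers that do not define this key will simply ignore it.
pipeline.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipeline.scheduler.config, timestep_spacing="trailing"
)
```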
LyCORIS Config:
```json
{
  "algo": "lokr",
  "multiplier": 1.0,
  "linear_dim": 10000,
  "bypass_mode": true,
  "linear_alpha": 1,
  "factor": 16,
  "apply_preset": {
    "target_module": [
      "Attention",
      "FeedForward"
    ],
    "module_algo_map": {
      "Attention": {
        "factor": 16
      },
      "FeedForward": {
        "factor": 8
      }
    }
  }
}
```
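For reference, a preset like this maps onto the LyCORIS wrapper API roughly as follows. This is a minimal sketch using the lycoris-lora package with a placeholder `transformer` model, not the exact SimpleTuner training path:

```python
from lycoris import create_lycoris, LycorisNetwork

# Restrict wrapping to the module classes named in `apply_preset`,
# with the per-class factors from `module_algo_map`.
LycorisNetwork.apply_preset({
    "target_module": ["Attention", "FeedForward"],
    "module_algo_map": {
        "Attention": {"factor": 16},
        "FeedForward": {"factor": 8},
    },
})

# `transformer` is a placeholder for the LTX-Video transformer being trained.
lycoris_net = create_lycoris(
    transformer,
    multiplier=1.0,
    linear_dim=10000,  # value from the config above
    linear_alpha=1,
    algo="lokr",
    factor=16,
)
lycoris_net.apply_to()  # inject adapters; optimise lycoris_net.parameters()
```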
Datasets
image-dataset-384
- Repeats: 4
- Total number of images: 11
- Total number of aspect buckets: 2
- Resolution: 0.147456 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
video-dataset-384
- Repeats: 4
- Total number of videos: 7
- Total number of aspect buckets: 1
- Resolution: 0.147456 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
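With datasets this small, the relationship between repeats, batch size, and optimiser steps can be sanity-checked by hand. A back-of-envelope sketch; that each repeat adds one extra full pass over the files is an assumption about SimpleTuner's sampler, not a documented guarantee:

```python
# Hypothetical epoch-size check for the two datasets above.
images, videos, repeats, effective_batch = 11, 7, 4, 4

# ASSUMPTION: "Repeats: 4" means each file is seen 1 + 4 = 5 times per epoch.
samples_per_epoch = (images + videos) * (repeats + 1)   # 90
steps_per_epoch = samples_per_epoch // effective_batch  # 22
print(samples_per_epoch, steps_per_epoch)
```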
Inference
```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights


def download_adapter(repo_id: str):
    import os
    from huggingface_hub import hf_hub_download
    adapter_filename = "pytorch_lora_weights.safetensors"
    cache_dir = os.environ.get('HF_PATH', os.path.expanduser('~/.cache/huggingface/hub/models'))
    cleaned_adapter_path = repo_id.replace("/", "_").replace("\\", "_").replace(":", "_")
    path_to_adapter = os.path.join(cache_dir, cleaned_adapter_path)
    path_to_adapter_file = os.path.join(path_to_adapter, adapter_filename)
    os.makedirs(path_to_adapter, exist_ok=True)
    hf_hub_download(
        repo_id=repo_id, filename=adapter_filename, local_dir=path_to_adapter
    )
    return path_to_adapter_file


model_id = 'Lightricks/LTX-Video'
adapter_repo_id = 'bghira/simpletuner'
adapter_file_path = download_adapter(repo_id=adapter_repo_id)
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)  # loading directly in bf16

# Merge the LyCORIS adapter into the transformer at full strength.
lora_scale = 1.0
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_file_path, pipeline.transformer)
wrapper.merge_to()

prompt = "A photo-realistic image of a cat sitting in a field of lavender flowers. The cat is looking at the viewer."
negative_prompt = 'blurry, cropped, ugly'

## Optional: quantise the model to save on VRAM.
## Note: the model was not quantised during training, so quantisation is not required at inference time.
# from optimum.quanto import quantize, freeze, qint8
# quantize(pipeline.transformer, weights=qint8)
# freeze(pipeline.transformer)

device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)  # the pipeline is already in its target precision level

# Reproduce the validation settings from above.
model_output = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=5,
    generator=torch.Generator(device=device).manual_seed(42),
    width=384,
    height=256,
    guidance_scale=4.2,
).frames[0]

from diffusers.utils import export_to_gif
export_to_gif(model_output, "output.gif", fps=25)
```
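If you prefer an mp4 to a gif, diffusers also ships an `export_to_video` helper that accepts the same list of frames (it relies on imageio or OpenCV depending on your diffusers version):

```python
from diffusers.utils import export_to_video

# Same frames as above, written as an mp4 instead of a gif.
export_to_video(model_output, "output.mp4", fps=25)
```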