Transfusion - VAE

How to use with 🧨 diffusers

```python
from diffusers.models import AutoencoderKL

# Load the pretrained Transfusion VAE from the Hugging Face Hub
vae = AutoencoderKL.from_pretrained("lavinal712/transfusion-vae")
```
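
Below is a minimal reconstruction example building on the snippet above. The image path, 256x256 center crop, and scaling to [-1, 1] are illustrative assumptions, not requirements stated in this card.

```python
import torch
from PIL import Image
from torchvision import transforms
from diffusers.models import AutoencoderKL

vae = AutoencoderKL.from_pretrained("lavinal712/transfusion-vae").eval()

# Assumed preprocessing: 256x256 center crop, pixels scaled to [-1, 1]
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    # Encode to a latent distribution, sample a latent, decode back to pixels
    latents = vae.encode(image).latent_dist.sample()
    reconstruction = vae.decode(latents).sample  # (1, 3, 256, 256), values in [-1, 1]
```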

Model

This model was trained for 50 epochs on ImageNet, COCO, and FFHQ (the legacy checkpoint: 7 epochs on ImageNet only), with training parameters following the original Transfusion paper. The VAE is optimized with the following loss:

$$\mathcal{L}_{\mathrm{VAE}} = \mathcal{L}_1 + \mathcal{L}_{\mathrm{LPIPS}} + 0.5\,\mathcal{L}_{\mathrm{GAN}} + 0.2\,\mathcal{L}_{\mathrm{ID}} + 0.000001\,\mathcal{L}_{\mathrm{KL}}$$
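
As a rough sketch only (not the actual training code, which is linked below), the weighted objective above could be assembled along these lines; `lpips_fn`, `discriminator`, and `id_fn` are hypothetical stand-ins for the perceptual, adversarial, and identity-loss modules.

```python
import torch.nn.functional as F

def vae_loss(x, x_rec, posterior, lpips_fn, discriminator, id_fn):
    """Weighted sum from the formula above: L1 + LPIPS + 0.5*GAN + 0.2*ID + 1e-6*KL.

    lpips_fn, discriminator, and id_fn are hypothetical callables standing in for
    the perceptual, adversarial, and identity losses used during training.
    """
    l1 = F.l1_loss(x_rec, x)
    lpips = lpips_fn(x_rec, x).mean()
    # Non-saturating generator term on the discriminator's logits for the reconstruction
    gan = -discriminator(x_rec).mean()
    ident = id_fn(x_rec, x).mean()
    # KL of the encoder posterior against the standard normal prior
    kl = posterior.kl().mean()
    return l1 + lpips + 0.5 * gan + 0.2 * ident + 1e-6 * kl
```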

Evaluation

ImageNet 2012 (256x256, val, 50000 images)

| Model | rFID | PSNR | SSIM | LPIPS |
|---|---|---|---|---|
| Transfusion-VAE | 0.408 | 28.723 | 0.845 | 0.081 |
| SD-VAE | 0.692 | 26.910 | 0.772 | 0.130 |

COCO 2017 (256x256, val, 5000 images)

| Model | rFID | PSNR | SSIM | LPIPS |
|---|---|---|---|---|
| Transfusion-VAE | 2.749 | 28.556 | 0.855 | 0.078 |
| SD-VAE | 4.246 | 26.622 | 0.784 | 0.127 |

Evaluation (legacy)

ImageNet 2012 (256x256, val, 50000 images)

| Model | rFID | PSNR | SSIM | LPIPS |
|---|---|---|---|---|
| Transfusion-VAE | 0.567 | 28.195 | 0.829 | 0.100 |
| SD-VAE | 0.692 | 26.910 | 0.772 | 0.130 |
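
Per-image PSNR, SSIM, and LPIPS on reconstructions can be computed roughly as sketched below; using torchmetrics and the input ranges shown are assumptions, not the evaluation script behind the tables above (rFID additionally requires Inception statistics over the full validation set and is omitted here).

```python
import torch
from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity

# All three metrics operate on float image tensors of shape (N, 3, H, W)
psnr = PeakSignalNoiseRatio(data_range=1.0)
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
lpips = LearnedPerceptualImagePatchSimilarity(net_type="vgg")

def reconstruction_metrics(originals, reconstructions):
    """originals / reconstructions: tensors in [0, 1] of shape (N, 3, 256, 256)."""
    return {
        "PSNR": psnr(reconstructions, originals).item(),
        "SSIM": ssim(reconstructions, originals).item(),
        # LPIPS expects inputs in [-1, 1] when normalize=False (the default)
        "LPIPS": lpips(reconstructions * 2 - 1, originals * 2 - 1).item(),
    }
```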

Paper: Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model

Dataset: ImageNet, COCO, FFHQ

Base Code: lavinal712/AutoencoderKL

Training Code: lavinal712/AutoencoderKL/tree/transfusion_vae
