
# Pix2Pix UNet Model

## Model Description

Custom UNet model for Pix2Pix image translation.

- **Image Size:** 1024
- **Model Type:** big_UNet (1024)

## Usage

```python
import torch

from small_256_model import UNet as small_UNet
from big_1024_model import UNet as big_UNet

big = True  # set to False to use the 256px small_UNet variant

# Load the checkpoint and build the matching model variant
name = 'big_model_weights.pth' if big else 'small_model_weights.pth'
checkpoint = torch.load(name, map_location='cpu')
model = big_UNet() if checkpoint['model_config']['big'] else small_UNet()
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
```
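
Once the model is loaded, inference follows the usual image-to-image pattern. The sketch below is an assumption, not part of the official card: the input size (1024×1024), the [-1, 1] normalization (chosen to match the Tanh output layer), and the file names `input.jpg` / `output.png` are illustrative only.

```python
# Hypothetical inference sketch; preprocessing choices are assumptions.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),                       # scales to [0, 1]
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # shifts to [-1, 1]
])

img = Image.open("input.jpg").convert("RGB")
x = preprocess(img).unsqueeze(0)  # shape (1, 3, 1024, 1024)

with torch.no_grad():
    y = model(x)  # output in [-1, 1] because of the final Tanh

# Undo the normalization and save the translated image
out = (y.squeeze(0) * 0.5 + 0.5).clamp(0, 1)
transforms.ToPILImage()(out).save("output.png")
```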

## Model Architecture

```
UNet(
  (encoder): Sequential(
    (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (5): ReLU(inplace=True)
    (6): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (7): ReLU(inplace=True)
    (8): Conv2d(512, 1024, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (9): ReLU(inplace=True)
  )
  (decoder): Sequential(
    (0): ConvTranspose2d(1024, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (5): ReLU(inplace=True)
    (6): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (7): ReLU(inplace=True)
    (8): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (9): Tanh()
  )
)
```
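
For readers without access to `big_1024_model.py`, the following is a minimal reconstruction of a module matching the printed repr above. It is a sketch only: the real implementation may differ, for example by adding skip connections in `forward` (the repr shows only the registered `encoder` and `decoder` submodules).

```python
# Sketch reconstructed from the printed architecture; not the official source.
import torch.nn as nn

class UNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Each stride-2 conv halves the spatial resolution: 1024 -> 32 over 5 stages.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 512, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 1024, 4, 2, 1), nn.ReLU(inplace=True),
        )
        # Each stride-2 transposed conv doubles the resolution back to 1024.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(1024, 512, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, x):
        # Assumption: a plain encode/decode pass, without skip connections.
        return self.decoder(self.encoder(x))
```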

