VQGAN-Processed ImageNet Test Latents

This dataset contains the ImageNet test-set images encoded into 3×64×64 latent representations using VQGAN.
These latents can be used for tasks such as image generation, compression, and semantic communication research.

Example

The figures below visualize a sample in the latent space (three channels):

(Figures: VQGAN latent-space visualization of one sample)

latent_imagenet (VQGAN-encoded ImageNet Test Set)

Dataset Summary

This dataset contains latent representations of the ImageNet test set, generated using a VQGAN encoder.
Each image is encoded into a tensor of shape 3 × 64 × 64 in the latent space.

This latent dataset can be used for:

  • Accelerating training and inference in generative models (e.g., diffusion models in latent space); a minimal sketch follows this list
  • Research on semantic compression and generative modeling
  • Experiments with reconstruction, super-resolution, and latent editing
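
As a minimal sketch of the first use case, the snippet below streams pre-computed latents into a training loop. The LatentFolder class, directory layout, and file pattern are hypothetical; adapt them to how you store the dataset:

import glob
import torch
from torch.utils.data import Dataset, DataLoader

class LatentFolder(Dataset):
    """Iterates over pre-computed VQGAN latents saved as .pt files."""
    def __init__(self, root="latent_imagenet"):  # hypothetical root directory
        self.paths = sorted(glob.glob(f"{root}/*/*.pt"))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        return torch.load(self.paths[i])  # [3, 64, 64] latent tensor

loader = DataLoader(LatentFolder(), batch_size=64, shuffle=True, num_workers=4)
for latents in loader:
    # latents: [64, 3, 64, 64], ready for a latent-space diffusion step
    break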

Potential Applications

This dataset can facilitate research and experimentation in areas including, but not limited to:

  • Latent-space generative modeling: Training or fine-tuning diffusion models, VAEs, or GANs directly in latent space to reduce computational cost.
  • Image compression and reconstruction: Exploring semantic compression, low-dimensional representations, and high-quality image reconstruction.
  • Semantic communication: Studying transmission of essential visual information via compressed latent representations.
  • Image editing and manipulation: Performing latent-space operations for style transfer, attribute modification, or super-resolution (a toy interpolation sketch follows this list).
  • Benchmarking and analysis: Comparing model performance using latent representations versus original images.
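
As a toy example of latent editing, the sketch below linearly interpolates between two latents and decodes the intermediate points. It reuses the VQGANProcessor defined under "Decoding Latents" below; the file paths are placeholders:

import torch

z0 = torch.load("latent_imagenet/class_a/sample_a.pt").unsqueeze(0)  # [1,3,64,64], placeholder path
z1 = torch.load("latent_imagenet/class_b/sample_b.pt").unsqueeze(0)

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    z = (1 - alpha) * z0 + alpha * z1  # straight-line path in latent space
    img = processor.decode(z)          # back to pixels, [1, 3, 256, 256] in [0, 1] for this configuration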

Dataset Details

  • Source dataset: ImageNet
  • Subset: test split
  • Preprocessing: Images were encoded into latent tensors using a pre-trained VQGAN encoder.
  • Latent shape: 3 × 64 × 64
  • File format: PyTorch tensors (.pt / .pth); NumPy arrays are also possible, depending on your processing pipeline (a quick loading check follows this list).
  • Number of samples: Equal to the ImageNet test set size.
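
Assuming the PyTorch format, a quick sanity check that a saved latent matches the shape listed above (the path is a placeholder):

import torch

latent = torch.load("latent_imagenet/class_name/sample.pt")  # placeholder path
assert latent.shape == (3, 64, 64), latent.shape
print(latent.dtype, latent.min().item(), latent.max().item())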

Original vs Reconstructed Images

The figures below compare original images with the reconstructions obtained by decoding their latent representations, demonstrating the fidelity of VQGAN latents for reconstruction and downstream experiments:

(Figure: original images vs. VQGAN reconstructions)


Decoding Latents

You can decode the latent representations back into images using a pre-trained VQGAN.
Here is an example workflow:

import torch
from PIL import Image
import matplotlib.pyplot as plt
from ldm.util import instantiate_from_config
from omegaconf import OmegaConf

# ----------------------------
# 1) Initialize VQGAN
# ----------------------------
class VQGANProcessor:
    def __init__(self, config_path, ckpt_path, device=None):
        self.device = device or (torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu'))
        config = OmegaConf.load(config_path)
        sd = torch.load(ckpt_path, map_location=self.device)['state_dict']
        model = instantiate_from_config(config.model)
        model.load_state_dict(sd, strict=False)  # LDM checkpoints carry extra keys; ignore mismatches
        model.eval().to(self.device)
        self.first_stage = model.first_stage_model

    def decode(self, latent: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            rec = self.first_stage.decode(latent.to(self.device))
            rec = torch.clamp((rec + 1.)/2., 0, 1)
        return rec

# ----------------------------
# 2) Load a latent sample
# ----------------------------
latent_path = "latent_imagenet/class_name/sample.pt"
latent = torch.load(latent_path).unsqueeze(0)  # [1,3,64,64]

# ----------------------------
# 3) Decode the latent
# ----------------------------
processor = VQGANProcessor("configs/latent-diffusion/cin256-v2.yaml",
                           "models/ldm/cin256-v2/model.ckpt")
recon = processor.decode(latent).squeeze(0).permute(1,2,0).cpu().numpy()  # [H,W,3] ∈ [0,1]

# ----------------------------
# 4) (Optional) Compare with original image
# ----------------------------
orig_path = "process_data/imagenet/val/class_name/sample.JPEG"
orig = Image.open(orig_path).convert("RGB")
orig_resized = orig.resize((256,256))

fig, axes = plt.subplots(1,3, figsize=(18,6))
axes[0].imshow(orig); axes[0].set_title("Original"); axes[0].axis("off")
axes[1].imshow(orig_resized); axes[1].set_title("Resized 256×256"); axes[1].axis("off")
axes[2].imshow(recon); axes[2].set_title("Reconstruction"); axes[2].axis("off")
plt.show()
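
To quantify the fidelity shown in the comparison above, here is a minimal PSNR computation between the resized original and the reconstruction. It reuses orig_resized and recon from the snippet above, converting both to float arrays in [0, 1]:

import numpy as np

orig_arr = np.asarray(orig_resized, dtype=np.float32) / 255.0  # [256, 256, 3] in [0, 1]
mse = np.mean((orig_arr - recon) ** 2)
psnr = 10 * np.log10(1.0 / mse)  # peak signal-to-noise ratio in dB; higher means closer to the original
print(f"PSNR: {psnr:.2f} dB")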

Example Usage

from datasets import load_dataset
import torch

# Load the dataset
dataset = load_dataset("liangzhidanta/latent_imagenet")

# Access one sample
sample = dataset["train"][0]  # or "test" depending on your split
latent = torch.tensor(sample["latent"])  # shape: [3, 64, 64]
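
For batched access, the datasets library can return PyTorch tensors directly. This sketch assumes the column is named "latent", as in the snippet above:

from torch.utils.data import DataLoader

dataset.set_format(type="torch", columns=["latent"])
loader = DataLoader(dataset["train"], batch_size=32, shuffle=True)  # or "test", matching your split
batch = next(iter(loader))["latent"]  # [32, 3, 64, 64]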