---
license: mit
task_categories:
  - text-classification
---

# latent_imagenet (VQGAN-encoded ImageNet Test Set)

## Dataset Summary

This dataset contains latent representations of the ImageNet test set, generated using a VQGAN encoder.
Each image is encoded into a tensor of shape 3 × 64 × 64 in the latent space.

This latent dataset can be used for:

- Accelerating training and inference in generative models (e.g., diffusion models operating in latent space; a minimal noising sketch follows this list)
- Research on semantic compression and generative modeling
- Experiments with reconstruction, super-resolution, and latent editing

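For the latent-diffusion use case above, a training step consumes these latents directly in place of raw images. Below is a minimal sketch of a DDPM-style forward (noising) step applied to one latent; the noise schedule and the random stand-in tensor are illustrative assumptions, not part of this dataset.

```python
import torch

# Stand-in for one latent from this dataset, shape [3, 64, 64].
latent = torch.randn(3, 64, 64)

# Illustrative DDPM noise schedule (not tied to any particular model).
T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

# Sample a timestep and noise the latent: x_t = sqrt(a_t) * x_0 + sqrt(1 - a_t) * eps
t = torch.randint(0, T, (1,))
noise = torch.randn_like(latent)
noisy_latent = alphas_cumprod[t].sqrt() * latent + (1.0 - alphas_cumprod[t]).sqrt() * noise

# A latent-space diffusion model would be trained to predict `noise` from `noisy_latent` and `t`.
```
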
## Dataset Details

- Source dataset: ImageNet
- Subset: test split
- Preprocessing: images were encoded into latent tensors using a pre-trained VQGAN encoder
- Latent shape: 3 × 64 × 64
- File format: PyTorch tensor (`.pt` / `.pth`) or NumPy array, depending on your processing pipeline (see the loading sketch below)
- Number of samples: equal to the ImageNet test set size

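If you work with the exported files directly rather than through the `datasets` loader, a single latent can be loaded and shape-checked as below. The file name is hypothetical; check the repository file listing for the actual layout.

```python
from pathlib import Path

import numpy as np
import torch

# Hypothetical file name: adjust to the actual files in this repository.
path = Path("latent_00000.pt")

if path.suffix in {".pt", ".pth"}:
    latent = torch.load(path)                 # stored as a PyTorch tensor
else:
    latent = torch.from_numpy(np.load(path))  # stored as a NumPy array (.npy)

assert tuple(latent.shape) == (3, 64, 64), latent.shape
```
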
## Example Usage

```python
from datasets import load_dataset
import torch

# Load the dataset from the Hugging Face Hub
dataset = load_dataset("liangzhidanta/latent_imagenet")

# Access one sample
sample = dataset["train"][0]  # or "test", depending on your split
latent = torch.tensor(sample["latent"])  # shape: [3, 64, 64]
```
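
Building on the snippet above, the latents can also be batched with a standard `DataLoader` for latent-space training; the split and column names follow the example and may need adjusting to this repository's actual configuration.

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("liangzhidanta/latent_imagenet")
split = dataset["train"]  # or "test", matching the snippet above

# Return the "latent" column as torch tensors instead of Python lists.
split.set_format(type="torch", columns=["latent"])

loader = DataLoader(split, batch_size=32, shuffle=True)
batch = next(iter(loader))
print(batch["latent"].shape)  # expected: torch.Size([32, 3, 64, 64])
```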