---
license: mit
task_categories:
- image-classification
---
# latent_imagenet (VQGAN-encoded ImageNet Test Set)

## Dataset Summary

This dataset contains latent representations of the **ImageNet test set**, generated using a **VQGAN encoder**.
Each image is encoded into a tensor of shape **3 × 64 × 64** in the latent space.

This latent dataset can be used for:
- Accelerating training and inference of generative models (e.g., diffusion models operating in latent space)
- Research on semantic compression and generative modeling
- Experiments with reconstruction, super-resolution, and latent editing

---

## Dataset Details

- **Source dataset**: [ImageNet](https://image-net.org/)
- **Subset**: `test` split
- **Preprocessing**: Images were encoded into latent tensors using a pre-trained VQGAN encoder.
- **Latent shape**: `3 × 64 × 64`
- **File format**: PyTorch tensor (`.pt` / `.pth`) or NumPy array (depending on your processing pipeline)
- **Number of samples**: Equal to the ImageNet test set size

---
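To give a feel for why latent-space training is faster, here is a small sketch of the size arithmetic. The `3 × 64 × 64` latent shape is taken from this card; the `3 × 256 × 256` source resolution and the downsampling factor of 4 are assumptions for illustration, since the card does not state the input resolution.

```python
# Illustrative arithmetic only. Assumption: a VQGAN with spatial
# downsampling factor f = 4 maps a 3 x 256 x 256 RGB image to the
# 3 x 64 x 64 latent stated in this dataset card.
image_shape = (3, 256, 256)   # assumed input resolution
latent_shape = (3, 64, 64)    # latent shape from this card

def num_elements(shape):
    """Total number of scalar values in a tensor of the given shape."""
    n = 1
    for d in shape:
        n *= d
    return n

compression = num_elements(image_shape) / num_elements(latent_shape)
print(compression)  # 16.0 -> each latent stores 16x fewer values
```

Under these assumptions a model operating on latents processes 16× fewer values per sample than one operating on raw pixels, which is the main source of the speed-up mentioned above.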

## Example Usage

```python
from datasets import load_dataset
import torch

# Load the dataset
dataset = load_dataset("liangzhidanta/latent_imagenet")

# Access one sample
sample = dataset["train"][0]  # or "test", depending on the available split
latent = torch.tensor(sample["latent"])  # shape: [3, 64, 64]
```
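Since the "File format" section mentions NumPy arrays as one storage option, here is a hedged sketch of persisting and reloading a single latent as a `.npy` file. The random array stands in for a real VQGAN latent, and the file name is illustrative, not part of the dataset.

```python
import os
import tempfile
import numpy as np

# Stand-in for one latent from this dataset (shape 3 x 64 x 64 per the card).
latent = np.random.randn(3, 64, 64).astype(np.float32)

# Round-trip through the .npy format: save to disk, then reload.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "latent_0000.npy")  # hypothetical file name
    np.save(path, latent)
    restored = np.load(path)

# The reloaded array matches the original exactly.
assert restored.shape == (3, 64, 64)
assert restored.dtype == np.float32
```

The same round-trip works with `torch.save` / `torch.load` if you prefer `.pt` files; `.npy` is shown here because it needs no deserialization flags.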