NextGenC committed on
Commit 941a7fe · verified · 1 Parent(s): fca734b

Update README.md

Files changed (1): README.md (+73, -3)

README.md:
---
license: mit
language:
- en
tags:
- gan
- mnist
- 7gen
- pytorch
library_name: torch
model_type: image-generator
---

![7Gen Model](https://img.shields.io/badge/7Gen-MNIST_Generator-blue?style=for-the-badge)
![Python](https://img.shields.io/badge/python-3.8+-blue.svg?style=for-the-badge&logo=python)
![PyTorch](https://img.shields.io/badge/PyTorch-1.12+-red.svg?style=for-the-badge&logo=pytorch)
![License](https://img.shields.io/badge/license-MIT-green.svg?style=for-the-badge)

# 7Gen - Advanced MNIST Digit Generation System

**State-of-the-art Conditional GAN for MNIST digit synthesis with self-attention mechanisms.**

---

## 🚀 Features

- 🎯 **Conditional Generation**: Generate specific digits (0–9) on demand.
- 🖼️ **High-Quality Output**: Sharp, realistic handwritten digit samples.
- ⚡ **Fast Inference**: Real-time generation on GPU.
- 🔌 **Easy Integration**: Minimal setup, PyTorch-native implementation.
- 🚀 **GPU Acceleration**: Full CUDA support.

---

## 🔍 Model Details

- **Architecture**: Conditional GAN with self-attention
- **Parameters**: 2.5M
- **Input**: 100-dimensional noise vector + class label
- **Output**: 28×28 grayscale images
- **Training Data**: MNIST dataset (60,000 images)
- **Training Time**: ~2 hours on an NVIDIA RTX 3050 Ti

---

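This card does not ship the model source, so as a rough sketch of what a conditional generator matching the specs above can look like in PyTorch. The `SelfAttention` and `Generator` modules, their layer widths, and the embedding-based conditioning below are illustrative assumptions, not the actual 7Gen implementation:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Minimal SAGAN-style self-attention over a 2D feature map."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)       # (B, HW, C/8)
        k = self.k(x).flatten(2)                       # (B, C/8, HW)
        attn = torch.softmax(q @ k, dim=-1)            # (B, HW, HW)
        v = self.v(x).flatten(2)                       # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class Generator(nn.Module):
    """Hypothetical conditional generator: 100-d noise + label -> 28x28 image."""
    def __init__(self, latent_dim=100, num_classes=10):
        super().__init__()
        self.embed = nn.Embedding(num_classes, latent_dim)
        self.fc = nn.Linear(latent_dim, 256 * 7 * 7)
        self.net = nn.Sequential(
            nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 7 -> 14
            SelfAttention(128),
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 1, 4, stride=2, padding=1),    # 14 -> 28
            nn.Tanh(),  # pixels in [-1, 1]
        )

    def forward(self, z, labels):
        z = z * self.embed(labels)  # condition the noise on the class
        return self.net(self.fc(z).view(-1, 256, 7, 7))

z = torch.randn(4, 100)
labels = torch.tensor([0, 3, 7, 9])
imgs = Generator()(z, labels)  # shape (4, 1, 28, 28)
```

Conditioning here multiplies the noise by a learned class embedding; concatenating the embedding to the noise vector is an equally common choice.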
## 🧪 Performance Metrics

| Metric                                  | Score |
|-----------------------------------------|-------|
| **FID** (lower is better)               | 12.3  |
| **Inception Score** (higher is better)  | 8.7   |

- **Training Epochs**: 100
- **Batch Size**: 64

---

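For reference, the Inception Score reported above is defined as `IS = exp(E_x[KL(p(y|x) || p(y))])` over classifier probabilities for generated samples. A minimal illustration of the formula, not the evaluation script used to produce the table:

```python
import torch

def inception_score(probs: torch.Tensor) -> float:
    """IS = exp(mean KL(p(y|x) || p(y))); probs is (N, num_classes), rows sum to 1."""
    p_y = probs.mean(dim=0, keepdim=True)                 # marginal p(y)
    kl = (probs * (probs.log() - p_y.log())).sum(dim=1)   # KL per sample
    return kl.mean().exp().item()

# A classifier that is uniformly unsure scores exactly 1.0 (the minimum) ...
uniform = torch.full((8, 10), 0.1)
# ... while confident, diverse predictions approach the class count (10 here).
confident = torch.full((10, 10), 0.001) + torch.eye(10) * 0.99
print(inception_score(uniform), inception_score(confident))
```

Higher scores require both confident per-sample predictions and diversity across classes, which is why mode-collapsed generators score poorly.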
## ⚙️ Training Configuration

```yaml
model:
  latent_dim: 100
  num_classes: 10
  generator_layers: [256, 512, 1024]
  discriminator_layers: [512, 256]

training:
  batch_size: 64
  learning_rate: 0.0002
  epochs: 100
  optimizer: Adam
  beta1: 0.5
  beta2: 0.999
```
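Assuming a standard non-saturating GAN setup, the YAML above maps onto PyTorch optimizers and one alternating update like this. The tiny `G`/`D` stand-ins and the random batch are placeholders, not the real networks or data pipeline:

```python
import torch

# Placeholder networks; the actual 7Gen architectures are not shipped with this card.
G = torch.nn.Sequential(torch.nn.Linear(100, 784), torch.nn.Tanh())
D = torch.nn.Sequential(torch.nn.Linear(784, 1))

# Adam settings straight from the config: lr 0.0002, betas (0.5, 0.999)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = torch.nn.BCEWithLogitsLoss()

real = torch.rand(64, 784) * 2 - 1   # stand-in for one MNIST batch (batch_size: 64)
z = torch.randn(64, 100)

# Discriminator step: push real -> 1, fake -> 0 (fakes detached so G is untouched)
d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(G(z).detach()), torch.zeros(64, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool D into predicting 1 on fakes
g_loss = bce(D(G(z)), torch.ones(64, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

A real loop would repeat these two steps over `epochs: 100` passes of a `DataLoader`, and would add the class-label conditioning described in Model Details.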