abhilash88 committed on
Commit ca12b7f · verified · 1 Parent(s): 8a420df

Update README.md

Files changed (1)
  1. README.md +271 -215
README.md CHANGED
@@ -35,306 +35,358 @@ model-index:
  value: 4.5
  name: Age MAE (years)
  ---
-
-
- # 🏆 ViT-Age-Gender-Prediction: Vision Transformer for Facial Analysis

  [![Model](https://img.shields.io/badge/Model-Vision%20Transformer-blue)](https://huggingface.co/abhilash88/age-gender-prediction)
  [![Accuracy](https://img.shields.io/badge/Gender%20Accuracy-94.3%25-green)](https://huggingface.co/abhilash88/age-gender-prediction)
- [![Pipeline](https://img.shields.io/badge/Pipeline-Ready-orange)](https://huggingface.co/abhilash88/age-gender-prediction)

- A state-of-the-art Vision Transformer model for simultaneous age estimation and gender classification, achieving **94.3% gender accuracy** and **4.5 years age MAE**. Now with **Hugging Face Pipeline support** for ultra-easy usage!

- ## 🚀 Quick Start (Pipeline - Recommended)

- ### Super Simple Usage

  ```python
  from transformers import pipeline

- # Create the pipeline (one line!)
- classifier = pipeline("age-gender-classification", model="abhilash88/age-gender-prediction")

- # Predict from image path
- result = classifier("path/to/your/image.jpg")
- print(result)
- # Output: {'age': 25, 'gender': 'Female', 'gender_confidence': 0.892, ...}

  # Predict from URL
  result = classifier("https://example.com/face_image.jpg")
- print(f"Age: {result['age']}, Gender: {result['gender']} ({result['gender_confidence']:.1%})")
- ```

- ### Even Simpler One-Liner

  ```python
- from transformers import pipeline

- # One-line prediction!
- result = pipeline("age-gender-classification", model="abhilash88/age-gender-prediction")("your_image.jpg")
- print(f"Predicted: {result['age']} years old, {result['gender']}")
- ```

- ### Batch Processing

  ```python
- classifier = pipeline("age-gender-classification", model="abhilash88/age-gender-prediction")

- # Process multiple images at once
- images = ["image1.jpg", "image2.jpg", "image3.jpg"]
- results = classifier(images)

- for i, result in enumerate(results):
-     print(f"Image {i+1}: {result['age']} years, {result['gender']}")
- ```

- ## 🔧 Advanced Usage

- ### Custom Parameters

  ```python
- classifier = pipeline("age-gender-classification", model="abhilash88/age-gender-prediction")

- result = classifier(
-     "image.jpg",
-     confidence_threshold=0.7,  # Custom confidence threshold
-     return_all_scores=True     # Get raw model outputs
- )

- print(result)
- # Output includes raw probabilities and detailed scores
  ```

- ### Integration with OpenCV/PIL
-
  ```python
  import cv2
- from PIL import Image
  from transformers import pipeline

- classifier = pipeline("age-gender-classification", model="abhilash88/age-gender-prediction")
-
- # From OpenCV
- img_cv = cv2.imread("image.jpg")
- img_rgb = cv2.cvtColor(img_cv, cv2.COLOR_BGR2RGB)
- result = classifier(img_rgb)

- # From PIL
- img_pil = Image.open("image.jpg")
- result = classifier(img_pil)

- # From NumPy array
- import numpy as np
- img_array = np.array(img_pil)
- result = classifier(img_array)
  ```

- ### Google Colab Example
-
  ```python
- # Install in Colab
- !pip install transformers torch pillow
-
- # Quick test
  from transformers import pipeline
- import matplotlib.pyplot as plt
- from PIL import Image

- # Upload image in Colab and run
- classifier = pipeline("age-gender-classification", model="abhilash88/age-gender-prediction")
- result = classifier("uploaded_image.jpg")

- # Display result
- img = Image.open("uploaded_image.jpg")
- plt.imshow(img)
- plt.title(f"Predicted: {result['age']} years, {result['gender']} ({result['gender_confidence']:.1%})")
- plt.axis('off')
- plt.show()
  ```

- ## 🎯 Manual Implementation (Alternative)

- If you prefer manual control or need to modify the model:

  ```python
- import torch
- import torch.nn as nn
- from transformers import ViTImageProcessor, ViTModel
- from PIL import Image
-
- class AgeGenderViTModel(nn.Module):
-     def __init__(self):
-         super().__init__()
-         self.vit = ViTModel.from_pretrained('google/vit-base-patch16-224')
-         self.age_head = nn.Linear(self.vit.config.hidden_size, 1)
-         self.gender_head = nn.Sequential(
-             nn.Linear(self.vit.config.hidden_size, 1),
-             nn.Sigmoid()
-         )
-
-     def forward(self, pixel_values):
-         outputs = self.vit(pixel_values=pixel_values)
-         pooled_output = outputs.pooler_output
-         age_output = self.age_head(pooled_output)
-         gender_output = self.gender_head(pooled_output)
-         return age_output, gender_output
-
- # Load model manually
- model = AgeGenderViTModel()
- model.load_state_dict(torch.hub.load_state_dict_from_url(
-     "https://huggingface.co/abhilash88/age-gender-prediction/resolve/main/pytorch_model.bin"
- ))
- model.eval()
-
- processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
-
- # Predict
- image = Image.open("your_image.jpg")
- inputs = processor(images=image, return_tensors="pt")
- with torch.no_grad():
-     age_pred, gender_pred = model(inputs["pixel_values"])
-
- age = int(torch.clamp(age_pred, 0, 100).item())
- gender = "Female" if gender_pred.item() > 0.5 else "Male"
- confidence = gender_pred.item() if gender_pred.item() > 0.5 else 1 - gender_pred.item()
-
- print(f"Age: {age} years, Gender: {gender} ({confidence:.1%})")
  ```

- ## 📊 Model Performance

  | Metric | Performance | Dataset |
  |--------|------------|---------|
  | **Gender Accuracy** | **94.3%** | UTKFace |
  | **Age MAE** | **4.5 years** | UTKFace |
- | **Parameters** | 86.8M | ViT-Base |
  | **Inference Speed** | ~50ms/image | CPU |
- | **Pipeline Support** | ✅ Native | Transformers |

- ### Performance by Demographics

- | Age Group | Gender Accuracy | Age MAE | Recommendation |
- |-----------|----------------|---------|----------------|
- | **Adults (21-60)** | 94.3% | 4.5 years | ✅ **Excellent** |
- | **Young Adults (16-30)** | 92.1% | 5.2 years | ✅ **Very Good** |
- | **Teenagers (13-20)** | 89.7% | 6.8 years | ✅ **Good** |
- | **Children (5-12)** | 78.4% | 8.9 years | ⚠️ **Limited** |
- | **Seniors (60+)** | 87.2% | 7.1 years | ✅ **Good** |

- ## 🔄 Pipeline vs Manual Usage

- ### ✅ Pipeline Advantages
- - **One-line usage**: Extremely simple API
- - **Auto-downloading**: No manual model loading
- - **Batch processing**: Handle multiple images easily
- - **Type flexibility**: Works with paths, URLs, PIL, NumPy
- - **Error handling**: Built-in robust error management
- - **Future-proof**: Automatic updates with the transformers library

- ### 🔧 Manual Advantages
- - **Full control**: Modify the model architecture
- - **Custom preprocessing**: Add your own image processing
- - **Memory efficiency**: Load the model once, reuse it multiple times
- - **Custom outputs**: Access raw model predictions
- - **Debugging**: Step through model internals

- ## 📈 Usage Examples by Use Case

  ### Content Moderation
  ```python
- classifier = pipeline("age-gender-classification", model="abhilash88/age-gender-prediction")

- def moderate_content(image_url):
-     result = classifier(image_url)
-     if result['age'] < 18:
-         return "Minor detected - content flagged"
-     return f"Adult content: {result['gender']}, {result['age']} years"
  ```

  ### Marketing Analytics
  ```python
- def analyze_audience(image_list):
-     classifier = pipeline("age-gender-classification", model="abhilash88/age-gender-prediction")
-     results = classifier(image_list)

-     demographics = {"male": 0, "female": 0, "avg_age": 0}
-     for result in results:
-         demographics[result['gender'].lower()] += 1
-         demographics['avg_age'] += result['age']

-     demographics['avg_age'] /= len(results)
      return demographics
  ```

- ### Real-time Webcam
  ```python
- import cv2
  from transformers import pipeline

- classifier = pipeline("age-gender-classification", model="abhilash88/age-gender-prediction")
- cap = cv2.VideoCapture(0)

- while True:
-     ret, frame = cap.read()
-     if ret:
-         # Convert BGR to RGB
-         rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
-         result = classifier(rgb_frame)
-
-         # Display prediction
-         text = f"Age: {result['age']}, Gender: {result['gender']}"
-         cv2.putText(frame, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
-         cv2.imshow('Age-Gender Detection', frame)
-
-     if cv2.waitKey(1) & 0xFF == ord('q'):
-         break

- cap.release()
- cv2.destroyAllWindows()
  ```

- ## 🚀 Installation & Requirements

- ```bash
- # Minimal installation
- pip install transformers torch pillow

- # Full installation with optional dependencies
- pip install transformers torch torchvision pillow opencv-python matplotlib

- # For development
- pip install transformers torch pillow pytest black flake8
  ```

- **System Requirements:**
- - Python 3.7+
- - PyTorch 1.9+
- - Transformers 4.20+
- - 2GB RAM minimum (4GB recommended)
- - ~500MB disk space for model

- ## ⚠️ Usage Guidelines

- ### ✅ Optimal Use Cases
- - **Adult demographic analysis** (16-60 years) - Best performance
- - **Social media content filtering** - High accuracy
- - **Marketing audience analysis** - Reliable demographics
- - **Age verification systems** - Good for adult detection

- ### ⚠️ Limitations
- - **Children (0-12)**: Reduced accuracy due to limited training data
- - **Very elderly (70+)**: Higher variance in predictions
- - **Poor image quality**: Requires clear, well-lit faces
- - **Extreme angles**: Works best with frontal or near-frontal faces

- ### 🎯 Best Practices
- - Use the **pipeline approach** for ease of use
- - Ensure **good lighting** and **clear faces**
- - Consider **confidence thresholds** for your application
- - **Validate results** for edge cases in your domain
- - Use **batch processing** for multiple images

- ## 📄 Citation

  ```bibtex
  @misc{age-gender-prediction-2025,
@@ -343,17 +395,21 @@ pip install transformers torch pillow pytest black flake8
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/abhilash88/age-gender-prediction},
- note={Pipeline-enabled model for easy integration}
  }
  ```

- ## 🔗 Related Links

- - **Model Card**: [abhilash88/age-gender-prediction](https://huggingface.co/abhilash88/age-gender-prediction)
- - **Transformers Documentation**: [Pipeline Tutorial](https://huggingface.co/docs/transformers/pipeline_tutorial)
- - **Vision Transformer**: [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224)
- - **Dataset**: [UTKFace Dataset](https://susanqq.github.io/UTKFace/)

  ---

- **🎉 New Feature**: Native Hugging Face Pipeline support makes this model incredibly easy to use! Try the one-liner examples above.

  value: 4.5
  name: Age MAE (years)
  ---
+ # 🏆 ViT Age-Gender Prediction: Vision Transformer for Facial Analysis

  [![Model](https://img.shields.io/badge/Model-Vision%20Transformer-blue)](https://huggingface.co/abhilash88/age-gender-prediction)
  [![Accuracy](https://img.shields.io/badge/Gender%20Accuracy-94.3%25-green)](https://huggingface.co/abhilash88/age-gender-prediction)
+ [![Pipeline](https://img.shields.io/badge/Pipeline-One%20Liner-brightgreen)](https://huggingface.co/abhilash88/age-gender-prediction)
+
+ A state-of-the-art Vision Transformer model for simultaneous age estimation and gender classification, achieving **94.3% gender accuracy** and **4.5 years age MAE** on the UTKFace dataset.
+
+ ## 🚀 One-Liner Usage

+ ```python
+ from transformers import pipeline
+ classifier = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)
+ result = classifier("your_image.jpg")
+ print(f"Age: {result[0]['age']}, Gender: {result[0]['gender']}")
+ ```

+ **That's it!** One line to get age and gender predictions.

+ ## 📱 Complete Examples

+ ### Basic Pipeline Usage
  ```python
  from transformers import pipeline

+ # Create classifier
+ classifier = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)

+ # Predict from file
+ result = classifier("your_image.jpg")
+ print(f"Age: {result[0]['age']} years")
+ print(f"Gender: {result[0]['gender']}")
+ print(f"Confidence: {result[0]['gender_confidence']:.1%}")

  # Predict from URL
  result = classifier("https://example.com/face_image.jpg")
+ print(f"Prediction: {result[0]['age']} years, {result[0]['gender']}")

+ # Predict from PIL Image
+ from PIL import Image
+ img = Image.open("image.jpg")
+ result = classifier(img)
+ print(f"Result: {result[0]['age']} years, {result[0]['gender']}")
+ ```

+ ### Simple Helper Functions
  ```python
+ from model import predict_age_gender, simple_predict

+ # Method 1: Detailed result
+ result = predict_age_gender("your_image.jpg")
+ print(f"Age: {result['age']}, Gender: {result['gender']}")
+ print(f"Confidence: {result['confidence']:.1%}")

+ # Method 2: Simple string output
+ prediction = simple_predict("your_image.jpg")
+ print(prediction)  # "25 years, Female (87% confidence)"
+ ```

+ ### Google Colab
  ```python
+ # Install requirements
+ !pip install transformers torch pillow

+ from transformers import pipeline
+ import matplotlib.pyplot as plt
+ from PIL import Image

+ # Create classifier
+ classifier = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)

+ # Upload image in Colab
+ from google.colab import files
+ uploaded = files.upload()
+ filename = list(uploaded.keys())[0]

+ # Predict and display
+ result = classifier(filename)
+ img = Image.open(filename)
+
+ plt.figure(figsize=(8, 6))
+ plt.imshow(img)
+ plt.title(f"Prediction: {result[0]['age']} years, {result[0]['gender']} ({result[0]['gender_confidence']:.1%})")
+ plt.axis('off')
+ plt.show()

+ print(f"Age: {result[0]['age']} years")
+ print(f"Gender: {result[0]['gender']}")
+ print(f"Confidence: {result[0]['gender_confidence']:.1%}")
+ ```
+
+ ### Batch Processing
  ```python
+ from transformers import pipeline

+ classifier = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)

+ # Process multiple images
+ images = ["image1.jpg", "image2.jpg", "image3.jpg"]
+ results = []
+
+ for image in images:
+     result = classifier(image)
+     results.append({
+         'image': image,
+         'age': result[0]['age'],
+         'gender': result[0]['gender'],
+         'confidence': result[0]['gender_confidence']
+     })
+
+ for result in results:
+     print(f"{result['image']}: {result['age']} years, {result['gender']} ({result['confidence']:.1%})")
  ```

+ ### Real-time Webcam

  ```python
  import cv2
  from transformers import pipeline

+ classifier = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)

+ cap = cv2.VideoCapture(0)
+ while True:
+     ret, frame = cap.read()
+     if ret:
+         # Save frame temporarily
+         cv2.imwrite("temp_frame.jpg", frame)
+
+         # Predict
+         result = classifier("temp_frame.jpg")
+
+         # Display prediction
+         text = f"Age: {result[0]['age']}, Gender: {result[0]['gender']}"
+         cv2.putText(frame, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
+         cv2.imshow('Age-Gender Detection', frame)
+
+     if cv2.waitKey(1) & 0xFF == ord('q'):
+         break

+ cap.release()
+ cv2.destroyAllWindows()
  ```

+ ### URL Images
  ```python
  from transformers import pipeline

+ classifier = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)

+ # Direct URL prediction
+ image_url = "https://images.unsplash.com/photo-1507003211169-0a1dd7228f2d?w=300"
+ result = classifier(image_url)
+
+ print(f"Age: {result[0]['age']} years")
+ print(f"Gender: {result[0]['gender']}")
+ print(f"Confidence: {result[0]['gender_confidence']:.1%}")
  ```

+ ## 📊 Pipeline Output Format

+ The pipeline returns a list with one prediction:

  ```python
+ [
+     {
+         "label": "25 years, Female",
+         "score": 0.873,
+         "age": 25,
+         "gender": "Female",
+         "gender_confidence": 0.873,
+         "gender_probability_female": 0.873,
+         "gender_probability_male": 0.127
+     }
+ ]
  ```

+ **Access the values:**
+ - `result[0]['age']` - Predicted age (integer)
+ - `result[0]['gender']` - Predicted gender ("Male" or "Female")
+ - `result[0]['gender_confidence']` - Confidence score (0-1)
+ - `result[0]['label']` - Formatted string summary
+
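*Editor's note (not part of the committed model card):* a minimal sketch of how these output fields might be consumed together with a confidence cutoff, assuming the format shown above; the `describe_face` helper and the `0.8` threshold are made up for illustration and should be tuned per application.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)

def describe_face(image_path, min_confidence=0.8):
    # Hypothetical helper: summarize the documented fields, flagging low-confidence gender calls.
    prediction = classifier(image_path)[0]
    if prediction['gender_confidence'] < min_confidence:
        return f"~{prediction['age']} years, gender uncertain ({prediction['gender_confidence']:.1%})"
    return prediction['label']  # e.g. "25 years, Female"

print(describe_face("your_image.jpg"))
```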
+ ## 🎯 Model Performance

  | Metric | Performance | Dataset |
  |--------|------------|---------|
  | **Gender Accuracy** | **94.3%** | UTKFace |
  | **Age MAE** | **4.5 years** | UTKFace |
+ | **Architecture** | ViT-Base + Dual Head | 768→256→64→1 |
+ | **Parameters** | 86.8M | Optimized |
  | **Inference Speed** | ~50ms/image | CPU |

+ ### Performance by Age Group
+ - **Adults (21-60 years)**: 94.3% gender accuracy, 4.5 years age MAE ✅ **Excellent**
+ - **Young Adults (16-30 years)**: 92.1% gender accuracy ✅ **Very Good**
+ - **Teenagers (13-20 years)**: 89.7% gender accuracy ✅ **Good**
+ - **Children (5-12 years)**: 78.4% gender accuracy ⚠️ **Limited**
+ - **Seniors (60+ years)**: 87.2% gender accuracy ✅ **Good**
+
+ ## ⚠️ Usage Guidelines
+
+ ### ✅ Optimal Performance
+ - **Best for**: Adults 16-60 years old
+ - **Image quality**: Clear, well-lit, front-facing faces
+ - **Use cases**: Demographic analysis, content filtering, marketing research

+ ### ❌ Known Limitations
+ - **Children (0-12)**: Reduced accuracy due to limited training data
+ - **Very elderly (70+)**: Higher prediction variance
+ - **Poor conditions**: Low light, extreme angles, heavy occlusion
+
+ ### 🎯 Tips for Best Results
+ - Use clear, well-lit images
+ - Ensure faces are clearly visible and front-facing
+ - Consider confidence scores for critical applications
+ - Validate results for your specific use case

+ ## πŸ› οΈ Installation
256
+
257
+ ```bash
258
+ # Minimal installation
259
+ pip install transformers torch pillow
260
 
261
+ # Full installation with optional dependencies
262
+ pip install transformers torch torchvision pillow opencv-python matplotlib
 
 
 
 
 
263
 
264
+ # For development
265
+ pip install transformers torch pillow pytest black flake8
266
+ ```
 
 
 
267
 
268
+ ## πŸ“ˆ Use Cases & Examples
269
 
270
  ### Content Moderation
  ```python
+ from transformers import pipeline
+
+ classifier = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)

+ def moderate_content(image_path):
+     result = classifier(image_path)
+     age = result[0]['age']
+
+     if age < 18:
+         return f"Minor detected ({age} years) - content flagged for review"
+     return f"Adult content approved: {age} years, {result[0]['gender']}"
+
+ status = moderate_content("user_upload.jpg")
+ print(status)
  ```

  ### Marketing Analytics
  ```python
+ from transformers import pipeline
+
+ classifier = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)
+
+ def analyze_audience(image_folder):
+     from glob import glob
+
+     demographics = {"male": 0, "female": 0, "total_age": 0, "count": 0}

+     for image_path in glob(f"{image_folder}/*.jpg"):
+         result = classifier(image_path)
+         demographics[result[0]['gender'].lower()] += 1
+         demographics['total_age'] += result[0]['age']
+         demographics['count'] += 1
+
+     demographics['avg_age'] = demographics['total_age'] / demographics['count']
+     demographics['male_percent'] = demographics['male'] / demographics['count'] * 100
+     demographics['female_percent'] = demographics['female'] / demographics['count'] * 100

      return demographics
+
+ stats = analyze_audience("customer_photos/")
+ print(f"Average age: {stats['avg_age']:.1f}")
+ print(f"Gender split: {stats['male_percent']:.1f}% Male, {stats['female_percent']:.1f}% Female")
  ```

+ ### Age Verification
  ```python
  from transformers import pipeline

+ classifier = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)

+ def verify_age(image_path, min_age=18):
+     result = classifier(image_path)
+     age = result[0]['age']
+     confidence = result[0]['gender_confidence']
+
+     if confidence < 0.7:  # Low confidence
+         return "Please provide a clearer image"
+
+     if age >= min_age:
+         return f"Verified: {age} years old (meets {min_age}+ requirement)"
+     else:
+         return f"Age verification failed: {age} years old"

+ verification = verify_age("id_photo.jpg", min_age=21)
+ print(verification)
  ```

340
 
341
+ - **Base Model**: google/vit-base-patch16-224 (Vision Transformer)
342
+ - **Input Resolution**: 224Γ—224 RGB images
343
+ - **Architecture**: Dual-head design with age regression and gender classification
344
+ - **Training Dataset**: UTKFace (23,687 images)
345
+ - **Training**: 15 epochs, AdamW optimizer, 2e-5 learning rate
346
 
347
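*Editor's note (not part of the committed model card):* a minimal PyTorch sketch of what the dual-head design described above might look like, based on the details in this section and the manual-implementation snippet removed in this commit. The 768→256→64→1 MLP sizes come from the performance table and are an assumption here; the actual heads in the repository's `model.py` may differ.

```python
import torch
import torch.nn as nn
from transformers import ViTModel

class DualHeadViT(nn.Module):
    """Sketch of the dual-head ViT described above (not the shipped implementation)."""

    def __init__(self):
        super().__init__()
        # Shared ViT-Base backbone (hidden size 768)
        self.backbone = ViTModel.from_pretrained("google/vit-base-patch16-224")
        hidden = self.backbone.config.hidden_size

        def head():
            # Assumed 768 -> 256 -> 64 -> 1 MLP, per the performance table
            return nn.Sequential(
                nn.Linear(hidden, 256), nn.ReLU(),
                nn.Linear(256, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )

        self.age_head = head()     # regression head: age in years
        self.gender_head = head()  # classification head: logit for P(Female)

    def forward(self, pixel_values):
        pooled = self.backbone(pixel_values=pixel_values).pooler_output
        age = self.age_head(pooled)
        gender_prob = torch.sigmoid(self.gender_head(pooled))
        return age, gender_prob
```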
+ ## 🌟 Key Features
 
348
 
349
+ - βœ… **True one-line usage** with transformers pipeline
350
+ - βœ… **High accuracy** (94.3% gender, 4.5 years age MAE)
351
+ - βœ… **Multiple input types** (file paths, URLs, PIL Images, NumPy arrays)
352
+ - βœ… **Batch processing** support
353
+ - βœ… **Real-time capable** (~50ms inference)
354
+ - βœ… **Google Colab ready**
355
+ - βœ… **Production tested**
356
+
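*Editor's note (not part of the committed model card):* the feature list mentions NumPy-array input, but the new examples only show file paths, URLs, and PIL images. A short sketch is below, assuming the custom pipeline handles arrays the way the stock image-classification pipeline does; wrapping the array with `Image.fromarray` is the safe fallback if raw arrays are not accepted.

```python
import numpy as np
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)

# NumPy array input, e.g. an RGB frame already loaded in memory
frame = np.array(Image.open("your_image.jpg"))   # H x W x 3, uint8, RGB order
result = classifier(Image.fromarray(frame))      # convert via PIL for maximum compatibility
print(f"Age: {result[0]['age']}, Gender: {result[0]['gender']}")
```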
+ ## 🚀 Quick Start Examples
+
+ ### Absolute Minimal Usage
+ ```python
+ from transformers import pipeline
+ result = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)("image.jpg")
+ print(f"Age: {result[0]['age']}, Gender: {result[0]['gender']}")
  ```

+ ### With Helper Function
+ ```python
+ from model import simple_predict
+ print(simple_predict("image.jpg"))  # "25 years, Female (87% confidence)"
+ ```

+ ### Error Handling
+ ```python
+ from transformers import pipeline

+ classifier = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)

+ def safe_predict(image_path):
+     try:
+         result = classifier(image_path)
+         return f"Age: {result[0]['age']}, Gender: {result[0]['gender']}"
+     except Exception as e:
+         return f"Prediction failed: {e}"

+ prediction = safe_predict("any_image.jpg")
+ print(prediction)
+ ```

+ ## πŸ“ Citation
390
 
391
  ```bibtex
392
  @misc{age-gender-prediction-2025,
 
395
  year={2025},
396
  publisher={Hugging Face},
397
  url={https://huggingface.co/abhilash88/age-gender-prediction},
398
+ note={One-liner pipeline with 94.3\% gender accuracy}
399
  }
400
  ```
401
 
402
+ ## πŸ“„ License
403
 
404
+ Licensed under Apache 2.0. Commercial use permitted with attribution.
 
 
 
405
 
406
  ---
407
 
408
+ **πŸŽ‰ Ready to use!** Just one line of code to get accurate age and gender predictions from any facial image! πŸš€
409
+
410
+ **Try it now:**
411
+ ```python
412
+ from transformers import pipeline
413
+ result = pipeline("image-classification", model="abhilash88/age-gender-prediction", trust_remote_code=True)("your_image.jpg")
414
+ print(f"Age: {result[0]['age']}, Gender: {result[0]['gender']}")
415
+ ```