---
license: apache-2.0
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- gender
- fashion
- product
---
 
![16.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/1rf5M6UtlzkYJOeFx0yTQ.png)

# **Fashion-Product-Gender**

> **Fashion-Product-Gender** is a vision model fine-tuned from **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. It classifies fashion product images into one of five target-gender categories: Boys, Girls, Men, Unisex, or Women.

```
Classification Report:
              precision    recall  f1-score   support

        Boys     0.4127    0.0940    0.1531       830
       Girls     0.5000    0.0061    0.0121       655
         Men     0.7506    0.8393    0.7925     22104
      Unisex     0.5714    0.0188    0.0364      2126
       Women     0.7317    0.7609    0.7460     18357

    accuracy                         0.7407     44072
   macro avg     0.5933    0.3438    0.3480     44072
weighted avg     0.7240    0.7407    0.7130     44072
```
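
Reports in this format are what scikit-learn's `classification_report` prints; a minimal sketch with made-up labels (the figures above come from the model's own evaluation run, not from this snippet):

```python
from sklearn.metrics import classification_report

# Hypothetical integer class ids (0=Boys, 1=Girls, 2=Men, 3=Unisex, 4=Women),
# standing in for a real evaluation run's labels and predictions
y_true = [0, 1, 2, 2, 3, 4, 4, 3]
y_pred = [0, 1, 2, 2, 3, 4, 2, 4]

label_names = ["Boys", "Girls", "Men", "Unisex", "Women"]
print(classification_report(y_true, y_pred, target_names=label_names, digits=4))
```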

The model predicts one of the following gender categories for fashion products:

- **0:** Boys  
- **1:** Girls  
- **2:** Men  
- **3:** Unisex  
- **4:** Women
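
If the checkpoint stores these labels in its config (as fine-tuned `SiglipForImageClassification` checkpoints usually do), the mapping can be read at runtime instead of hard-coded; a minimal check:

```python
from transformers import AutoConfig

# Read the label mapping saved with the checkpoint
config = AutoConfig.from_pretrained("prithivMLmods/Fashion-Product-Gender")
print(config.id2label)  # expected: {0: "Boys", 1: "Girls", 2: "Men", 3: "Unisex", 4: "Women"}
```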

---

# **Run with Transformers 🤗**

```python
!pip install -q transformers torch pillow gradio
```

```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/Fashion-Product-Gender"  # Replace with your actual model path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Label mapping
id2label = {
    0: "Boys",
    1: "Girls",
    2: "Men",
    3: "Unisex",
    4: "Women"
}

def classify_gender(image):
    """Predicts the gender category for a fashion product."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Gradio interface
iface = gr.Interface(
    fn=classify_gender,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Gender Prediction Scores"),
    title="Fashion-Product-Gender",
    description="Upload a fashion product image to predict the target gender category (Boys, Girls, Men, Unisex, Women)."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
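
For scripted use without the Gradio UI, a minimal single-image sketch (`product.jpg` is a placeholder path):

```python
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

model_name = "prithivMLmods/Fashion-Product-Gender"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# "product.jpg" is a placeholder for a local fashion product photo
image = Image.open("product.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])
```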

---

# **Intended Use**

This model is best suited for:

- **Fashion e-commerce tagging and search** (see the batch-tagging sketch below)
- **Personalized recommendations based on target gender**
- **Catalog organization and gender-based filters**
- **Retail analytics and demographic insights**
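
As a sketch of the tagging use case (the `catalog/` directory and `*.jpg` glob are assumptions; adjust to your data), the image processor accepts a list of images, so a folder can be scored in one batched forward pass:

```python
from pathlib import Path

import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Fashion-Product-Gender"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# "catalog/" is a hypothetical folder of product photos to tag
paths = sorted(Path("catalog").glob("*.jpg"))
images = [Image.open(p).convert("RGB") for p in paths]

# The processor accepts a list, so the whole folder runs in a single forward pass
inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

for path, row in zip(paths, probs):
    pred_id = int(row.argmax())
    print(f"{path.name}\t{model.config.id2label[pred_id]}\t{row[pred_id].item():.3f}")
```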